11 Commits

Author SHA1 Message Date
Affaan Mustafa
47f508ec21 Revert "Add Kiro IDE support (.kiro/) (#548)"
This reverts commit ce828c1c3c.
2026-03-20 01:58:19 -07:00
Himanshu Sharma
ce828c1c3c Add Kiro IDE support (.kiro/) (#548)
Co-authored-by: Sungmin Hong <hsungmin@amazon.com>
2026-03-20 01:50:35 -07:00
Ofek Gabay
c8f631b046 feat: add block-no-verify hook for Claude Code and Cursor (#649)
Adds npx block-no-verify@1.1.2 as a PreToolUse Bash hook in hooks/hooks.json
and a beforeShellExecution hook in .cursor/hooks.json to prevent AI agents
from bypassing git hooks via the hook-bypass flag.

This closes the last enforcement gap in the ECC security stack — the bypass
flag silently skips pre-commit, commit-msg, and pre-push hooks.

Closes #648

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 01:50:31 -07:00
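A minimal sketch of what such a PreToolUse guard has to check (the published block-no-verify package will differ; the helper name and exact matching here are hypothetical):

```javascript
// Illustrative sketch of a hook that blocks git's hook-bypass flag.
// Not the real block-no-verify implementation; names are hypothetical.
function isHookBypass(command) {
  // Only git subcommands that honor pre-commit/commit-msg/pre-push hooks
  if (!/\bgit\b.*\b(commit|push|merge)\b/.test(command)) return false;
  // Match --no-verify as a standalone flag (and short -n on commit)
  return /(^|\s)--no-verify(\s|$)/.test(command) ||
         /\bgit\b\s+commit\b.*(^|\s)-n(\s|$)/.test(command);
}

// A PreToolUse command hook signals "block" via a non-zero exit code
const cmd = process.argv[2] || '';
if (isHookBypass(cmd)) {
  console.error('Blocked: git hook-bypass flag detected');
  process.exit(2);
}
```

The same predicate serves both the Claude Code PreToolUse hook and the Cursor beforeShellExecution hook, since each receives the shell command before it runs.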
Affaan Mustafa
8511d84042 feat(skills): add rules-distill skill (rebased #561) (#678)
* feat(skills): add rules-distill — extract cross-cutting principles from skills into rules

Applies the skill-stocktake pattern to rules maintenance:
scan skills → extract shared principles → propose rule changes.

Key design decisions:
- Deterministic collection (scan scripts) + LLM judgment (cross-read & verdict)
- 6 verdict types: Append, Revise, New Section, New File, Already Covered, Too Specific
- Anti-abstraction safeguard: 2+ skills evidence, actionable behavior test, violation risk
- Rules full text passed to LLM (no grep pre-filter) for accurate matching
- Never modifies rules automatically — always requires user approval

* fix(skills): address review feedback for rules-distill

Fixes raised by CodeRabbit, Greptile, and cubic:

- Add Prerequisites section documenting skill-stocktake dependency
- Add fallback command when skill-stocktake is not installed
- Fix shell quoting: add IFS= and -r to while-read loops
- Replace hardcoded paths with env var placeholders ($CLAUDE_RULES_DIR, $SKILL_STOCKTAKE_DIR)
- Add json language identifier to code blocks
- Add "How It Works" parent heading for Phase 1/2/3
- Add "Example" section with end-to-end run output
- Add revision.reason/before/after fields to output schema for Revise verdict
- Document timestamp format (date -u +%Y-%m-%dT%H:%M:%SZ)
- Document candidate-id format (kebab-case from principle)
- Use concrete examples in results.json schema

* fix(skills): remove skill-stocktake dependency, add self-contained scripts

Address P1 review feedback:
- Add scan-skills.sh and scan-rules.sh directly in rules-distill/scripts/
  (no external dependency on skill-stocktake)
- Remove Prerequisites section (no longer needed)
- Add cross-batch merge step so the 2+ skills requirement is not
  silently broken across batch boundaries
- Fix nested triple-backtick fences (use quadruple backticks)
- Remove head -100 cap (silent truncation)
- Rename "When to Activate" → "When to Use" (ECC standard)
- Remove unnecessary env var placeholders (SKILL.md is a prompt, not a script)

* fix: update skill/command counts in README.md and AGENTS.md

rules-distill added 1 skill + 1 command:
- skills: 108 → 109
- commands: 57 → 58

Updates all count references to pass CI catalog validation.

* fix(skills): address Servitor review feedback for rules-distill

1. Rename SKILL_STOCKTAKE_* env vars to RULES_DISTILL_* for consistency
2. Remove unnecessary observation counting (use_7d/use_30d) from scan-skills.sh
3. Fix header comment: scan.sh → scan-skills.sh
4. Use jq for JSON construction in scan-rules.sh to properly escape
   headings containing special characters (", \)

* fix(skills): address CodeRabbit review — portability and scan scope

1. scan-rules.sh: use jq for error JSON output (proper escaping)
2. scan-rules.sh: replace GNU-only sort -z with portable sort (BSD compat)
3. scan-rules.sh: fix pipefail crash on files without H2 headings
4. scan-skills.sh: scan only SKILL.md files (skip learned/*.md and
   auxiliary docs that lack frontmatter)
5. scan-skills.sh: add portable get_mtime helper (GNU stat/date
   fallback to BSD stat/date)

* fix: sync catalog counts with filesystem (27 agents, 114 skills, 59 commands)

---------

Co-authored-by: Tatsuya Shimomoto <shimo4228@gmail.com>
2026-03-20 01:44:55 -07:00
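The cross-batch merge mentioned in the rules-distill commit can be sketched as follows; the candidate shape and function name are assumptions, while the merge-then-threshold behavior follows the commit message:

```javascript
// Sketch of the cross-batch merge: candidates found in separate scan
// batches are merged by id so supporting-skill evidence accumulates,
// and the 2+ skills requirement is enforced only after the merge.
function mergeCandidateBatches(batches, minSkills = 2) {
  const merged = new Map();
  for (const batch of batches) {
    for (const cand of batch) {
      const existing = merged.get(cand.id);
      if (existing) {
        // Union the evidence rather than keeping one batch's view
        for (const s of cand.skills) existing.skills.add(s);
      } else {
        merged.set(cand.id, { id: cand.id, skills: new Set(cand.skills) });
      }
    }
  }
  return [...merged.values()]
    .filter(c => c.skills.size >= minSkills)
    .map(c => ({ id: c.id, skills: [...c.skills].sort() }));
}
```

Filtering per batch instead would drop a principle whose two supporting skills happened to land in different batches, which is exactly the silent breakage the fix targets.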
dependabot[bot]
8a57894394 chore(deps-dev): bump flatted (#675)
Bumps the npm_and_yarn group with 1 update in the / directory: [flatted](https://github.com/WebReflection/flatted).


Updates `flatted` from 3.3.3 to 3.4.2
- [Commits](https://github.com/WebReflection/flatted/compare/v3.3.3...v3.4.2)

---
updated-dependencies:
- dependency-name: flatted
  dependency-version: 3.4.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 01:42:19 -07:00
Affaan Mustafa
68484da2fc fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
When ECC is installed as a Claude Code plugin via the marketplace,
scripts live in the plugin cache (~/.claude/plugins/cache/...) but
commands fall back to ~/.claude/, which doesn't have the scripts.

Add resolve-ecc-root.js with a 3-step fallback chain:
  1. CLAUDE_PLUGIN_ROOT env var (existing)
  2. Standard install at ~/.claude/ (existing)
  3. NEW: auto-scan the plugin cache directory

Update sessions.md and skill-health.md commands to use the new
inline resolver. Includes 15 tests covering all fallback paths
including env var priority, standard install, cache discovery,
and the compact INLINE_RESOLVE used in command .md files.
2026-03-20 01:38:15 -07:00
Affaan Mustafa
0b0b66c02f feat: agent compression, inspection logic, governance hooks (#491, #485, #482) (#688)
Implements three roadmap features:

- Agent description compression (#491): New `agent-compress` module with
  catalog/summary/full compression modes and lazy-loading. Reduces ~26k
  token agent descriptions to ~2-3k catalog entries for context efficiency.

- Inspection logic (#485): New `inspection` module that detects recurring
  failure patterns in skill_runs. Groups by skill + normalized failure
  reason, generates structured reports with suggested remediation actions.
  Configurable threshold (default: 3 failures).

- Governance event capture hook (#482): PreToolUse/PostToolUse hook that
  detects secrets, policy violations, approval-required commands, and
  elevated privilege usage. Gated behind ECC_GOVERNANCE_CAPTURE=1 flag.
  Writes to governance_events table via JSON-line stderr output.

59 new tests (16 + 16 + 27), all passing.
2026-03-20 01:38:13 -07:00
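The inspection module's grouping step (#485) can be sketched like this; the skill_runs row shape and field names are assumptions, while the normalize-and-threshold behavior follows the commit message:

```javascript
// Sketch of recurring-failure detection over skill_runs rows.
// Field names (skill, status, failure_reason) are assumed.
function findRecurringFailures(runs, threshold = 3) {
  const groups = new Map();
  for (const run of runs) {
    if (run.status !== 'failure') continue;
    // Normalize the reason so near-duplicates group together
    const reason = (run.failure_reason || '')
      .toLowerCase()
      .replace(/\d+/g, 'N')   // collapse numbers (timeouts, counts)
      .replace(/\s+/g, ' ')
      .trim();
    const key = `${run.skill}::${reason}`;
    groups.set(key, (groups.get(key) || 0) + 1);
  }
  // Report only patterns that recur at or above the threshold
  return [...groups.entries()]
    .filter(([, count]) => count >= threshold)
    .map(([key, count]) => {
      const [skill, reason] = key.split('::');
      return { skill, reason, count };
    });
}
```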
Affaan Mustafa
28de7cc420 fix: strip ANSI escape codes from session persistence hooks (#642) (#684)
Windows terminals emit control sequences (cursor movement, screen
clearing) that leaked into session.tmp files and were injected
verbatim into Claude's context on the next session start.

Add a comprehensive stripAnsi() to utils.js that handles CSI, OSC,
charset selection, and bare ESC sequences. Apply it in session-end.js
(when extracting user messages from the transcript) and in
session-start.js (safety net before injecting session content).
2026-03-20 01:38:11 -07:00
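The sequence classes the commit lists (CSI, OSC, charset selection, bare ESC) can be stripped with a few regexes; this is a sketch, not the stripAnsi() shipped in utils.js:

```javascript
// Sketch of ANSI/VT control-sequence stripping. The shipped stripAnsi()
// is more comprehensive; this covers the classes named in the commit.
function stripAnsi(text) {
  return text
    // CSI sequences: ESC [ ... final byte (cursor moves, clears, colors)
    .replace(/\x1b\[[0-9;?]*[ -/]*[@-~]/g, '')
    // OSC sequences: ESC ] ... terminated by BEL or ESC \
    .replace(/\x1b\][^\x07\x1b]*(\x07|\x1b\\)/g, '')
    // Charset selection: ESC ( X / ESC ) X
    .replace(/\x1b[()][0-9A-Za-z]/g, '')
    // Any remaining bare ESC plus one byte
    .replace(/\x1b./g, '');
}
```

Applying it both where the transcript is read (session-end) and before injection (session-start) gives the defense-in-depth the commit describes.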
Affaan Mustafa
9a478ad676 feat(rules): add Rust language rules (rebased #660) (#686)
* feat(rules): add Rust coding style, hooks, and patterns rules

Add language-specific rules for Rust extending the common rule set:
- coding-style.md: rustfmt, clippy, ownership idioms, error handling,
  iterator patterns, module organization, visibility
- hooks.md: PostToolUse hooks for rustfmt, clippy, cargo check
- patterns.md: trait-based repository, newtype, enum state machines,
  builder, sealed traits, API response envelope

Rules reference existing rust-patterns skill for deep content.

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* feat(rules): add Rust testing and security rules

Add remaining Rust language-specific rules:
- testing.md: cargo test, rstest parameterized tests, mockall mocking
  with mock! macro, tokio async tests, cargo-llvm-cov coverage
- security.md: secrets via env vars, parameterized SQL with sqlx,
  parse-don't-validate input validation, unsafe code audit requirements,
  cargo-audit dependency scanning, proper HTTP error status codes

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* fix(rules): address review feedback on Rust rules

Fixes from Copilot, Greptile, Cubic, and CodeRabbit reviews:
- Add missing imports: use std::borrow::Cow, use anyhow::Context
- Use anyhow::Result<T> consistently (patterns.md, security.md)
- Change sqlx placeholder from ? to $1 (Postgres is most common)
- Remove Cargo.lock from hooks.md paths (auto-generated file)
- Fix tokio::test to show attribute form #[tokio::test]
- Fix mockall mock! name collision, wrap in #[cfg(test)] mod tests
- Fix --test target to match file layout (api_test, not integration)

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* fix: update catalog counts in README.md and AGENTS.md

Update documented counts to match actual repository state after rebase:
- Skills: 109 → 113 (new skills merged to main)
- Commands: 57 → 58 (new command merged to main)

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

---------

Co-authored-by: Chris Yau <chris@diveanddev.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Happy <yesreply@happy.engineering>
2026-03-20 01:19:42 -07:00
Affaan Mustafa
52e949a85b fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693) 2026-03-20 01:19:36 -07:00
Affaan Mustafa
07f6156d8a feat: implement --with/--without selective install flags (#679)
Add agent: and skill: component families to the install component
catalog, enabling fine-grained selective install via CLI flags:

  ecc install --profile developer --with lang:typescript --without capability:orchestration
  ecc install --with lang:python --with agent:security-reviewer

Changes:
- Add agent: family (9 entries) and skill: family (10 entries) to
  manifests/install-components.json for granular component addressing
- Update install-components.schema.json to accept agent: and skill:
  family prefixes
- Register agent and skill family prefixes in COMPONENT_FAMILY_PREFIXES
  (scripts/lib/install-manifests.js)
- Add 41 comprehensive tests covering CLI parsing, request normalization,
  component catalog validation, plan resolution, target filtering,
  error handling, and end-to-end install with --with/--without flags

Closes #470
2026-03-20 00:43:32 -07:00
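The flag semantics reduce to a filter over the resolved component set; the family-prefixed ids and flag names come from the commit, while the parsing below is an illustrative sketch rather than the code in scripts/lib/install-manifests.js:

```javascript
// Illustrative parse of repeated --with/--without flags into include and
// exclude lists keyed by family-prefixed component ids (e.g. "lang:python").
function parseSelectionFlags(argv) {
  const withIds = [];
  const withoutIds = [];
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === '--with' && argv[i + 1]) withIds.push(argv[++i]);
    else if (argv[i] === '--without' && argv[i + 1]) withoutIds.push(argv[++i]);
  }
  return { withIds, withoutIds };
}

// Apply the selection: additions first, then exclusions win on conflict
function applySelection(baseComponents, { withIds, withoutIds }) {
  const selected = new Set([...baseComponents, ...withIds]);
  for (const id of withoutIds) selected.delete(id);
  return [...selected];
}
```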
32 changed files with 3994 additions and 27 deletions


@@ -15,6 +15,11 @@
}
],
"beforeShellExecution": [
+{
+"command": "npx block-no-verify@1.1.2",
+"event": "beforeShellExecution",
+"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped"
+},
{
"command": "node .cursor/hooks/before-shell-execution.js",
"event": "beforeShellExecution",


@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
-This is a **production-ready AI coding plugin** providing 27 specialized agents, 109 skills, 57 commands, and automated hook workflows for software development.
+This is a **production-ready AI coding plugin** providing 27 specialized agents, 114 skills, 59 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -142,8 +142,8 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
```
agents/ — 27 specialized subagents
-skills/ — 109 workflow skills and domain knowledge
-commands/ — 57 slash commands
+skills/ — 114 workflow skills and domain knowledge
+commands/ — 59 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
scripts/ — Cross-platform Node.js utilities


@@ -203,7 +203,7 @@ For manual install instructions see the README in the `rules/` folder.
/plugin list everything-claude-code@everything-claude-code
```
-**That's it!** You now have access to 27 agents, 109 skills, and 57 commands.
+**That's it!** You now have access to 27 agents, 114 skills, and 59 commands.
---
@@ -1070,8 +1070,8 @@ The configuration is automatically detected from `.opencode/opencode.json`.
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | ✅ 27 agents | ✅ 12 agents | **Claude Code leads** |
-| Commands | ✅ 57 commands | ✅ 31 commands | **Claude Code leads** |
-| Skills | ✅ 109 skills | ✅ 37 skills | **Claude Code leads** |
+| Commands | ✅ 59 commands | ✅ 31 commands | **Claude Code leads** |
+| Skills | ✅ 114 skills | ✅ 37 skills | **Claude Code leads** |
| Hooks | ✅ 8 event types | ✅ 11 events | **OpenCode has more!** |
| Rules | ✅ 29 rules | ✅ 13 instructions | **Claude Code leads** |
| MCP Servers | ✅ 14 servers | ✅ Full | **Full parity** |

commands/rules-distill.md (new file, 11 lines)

@@ -0,0 +1,11 @@
---
description: "Scan skills to extract cross-cutting principles and distill them into rules"
---
# /rules-distill — Distill Principles from Skills into Rules
Scan installed skills, extract cross-cutting principles, and distill them into rules.
## Process
Follow the full workflow defined in the `rules-distill` skill.


@@ -29,8 +29,8 @@ Use `/sessions info` when you need operator-surface context for a swarm: branch,
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const path = require('path');
const result = sm.getAllSessions({ limit: 20 });
@@ -70,8 +70,8 @@ Load and display a session's content (by ID or alias).
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const id = process.argv[1];
// First try to resolve as alias
@@ -143,8 +143,8 @@ Create a memorable alias for a session.
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const sessionId = process.argv[1];
const aliasName = process.argv[2];
@@ -183,7 +183,7 @@ Delete an existing alias.
**Script:**
```bash
node -e "
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const aliasName = process.argv[1];
if (!aliasName) {
@@ -212,8 +212,8 @@ Show detailed information about a session.
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const id = process.argv[1];
const resolved = aa.resolveAlias(id);
@@ -262,7 +262,7 @@ Show all session aliases.
**Script:**
```bash
node -e "
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const aliases = aa.listAliases();
console.log('Session Aliases (' + aliases.length + '):');


@@ -13,19 +13,22 @@ Shows a comprehensive health dashboard for all skills in the portfolio with succ
Run the skill health CLI in dashboard mode:
```bash
-node "${CLAUDE_PLUGIN_ROOT}/scripts/skills-health.js" --dashboard
+ECC_ROOT="${CLAUDE_PLUGIN_ROOT:-$(node -e "var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(!f.existsSync(p.join(d,q))){try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q))){d=c;break}}}catch(x){}}console.log(d)")}"
+node "$ECC_ROOT/scripts/skills-health.js" --dashboard
```
For a specific panel only:
```bash
-node "${CLAUDE_PLUGIN_ROOT}/scripts/skills-health.js" --dashboard --panel failures
+ECC_ROOT="${CLAUDE_PLUGIN_ROOT:-$(node -e "var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(!f.existsSync(p.join(d,q))){try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q))){d=c;break}}}catch(x){}}console.log(d)")}"
+node "$ECC_ROOT/scripts/skills-health.js" --dashboard --panel failures
```
For machine-readable output:
```bash
-node "${CLAUDE_PLUGIN_ROOT}/scripts/skills-health.js" --dashboard --json
+ECC_ROOT="${CLAUDE_PLUGIN_ROOT:-$(node -e "var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(!f.existsSync(p.join(d,q))){try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q))){d=c;break}}}catch(x){}}console.log(d)")}"
+node "$ECC_ROOT/scripts/skills-health.js" --dashboard --json
```
## Usage


@@ -2,6 +2,16 @@
"$schema": "https://json.schemastore.org/claude-code-settings.json",
"hooks": {
"PreToolUse": [
+{
+"matcher": "Bash",
+"hooks": [
+{
+"type": "command",
+"command": "npx block-no-verify@1.1.2"
+}
+],
+"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped"
+},
{
"matcher": "Bash",
"hooks": [
@@ -74,6 +84,17 @@
}
],
"description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its"
},
+{
+"matcher": "Bash|Write|Edit|MultiEdit",
+"hooks": [
+{
+"type": "command",
+"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:governance-capture\" \"scripts/hooks/governance-capture.js\" \"standard,strict\"",
+"timeout": 10
+}
+],
+"description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1"
+}
],
"PreCompact": [
@@ -165,6 +186,17 @@
],
"description": "Warn about console.log statements after edits"
},
+{
+"matcher": "Bash|Write|Edit|MultiEdit",
+"hooks": [
+{
+"type": "command",
+"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:governance-capture\" \"scripts/hooks/governance-capture.js\" \"standard,strict\"",
+"timeout": 10
+}
+],
+"description": "Capture governance events from tool outputs. Enable with ECC_GOVERNANCE_CAPTURE=1"
+},
{
"matcher": "*",
"hooks": [


@@ -250,6 +250,158 @@
"modules": [
"document-processing"
]
},
{
"id": "agent:architect",
"family": "agent",
"description": "System design and architecture agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:code-reviewer",
"family": "agent",
"description": "Code review agent for quality and security checks.",
"modules": [
"agents-core"
]
},
{
"id": "agent:security-reviewer",
"family": "agent",
"description": "Security vulnerability analysis agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:tdd-guide",
"family": "agent",
"description": "Test-driven development guidance agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:planner",
"family": "agent",
"description": "Feature implementation planning agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:build-error-resolver",
"family": "agent",
"description": "Build error resolution agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:e2e-runner",
"family": "agent",
"description": "Playwright E2E testing agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:refactor-cleaner",
"family": "agent",
"description": "Dead code cleanup and refactoring agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:doc-updater",
"family": "agent",
"description": "Documentation update agent.",
"modules": [
"agents-core"
]
},
{
"id": "skill:tdd-workflow",
"family": "skill",
"description": "Test-driven development workflow skill.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:continuous-learning",
"family": "skill",
"description": "Session pattern extraction and continuous learning skill.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:eval-harness",
"family": "skill",
"description": "Evaluation harness for AI regression testing.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:verification-loop",
"family": "skill",
"description": "Verification loop for code quality assurance.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:strategic-compact",
"family": "skill",
"description": "Strategic context compaction for long sessions.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:coding-standards",
"family": "skill",
"description": "Language-agnostic coding standards and best practices.",
"modules": [
"framework-language"
]
},
{
"id": "skill:frontend-patterns",
"family": "skill",
"description": "React and frontend engineering patterns.",
"modules": [
"framework-language"
]
},
{
"id": "skill:backend-patterns",
"family": "skill",
"description": "API design, database, and backend engineering patterns.",
"modules": [
"framework-language"
]
},
{
"id": "skill:security-review",
"family": "skill",
"description": "Security review checklist and vulnerability analysis.",
"modules": [
"security"
]
},
{
"id": "skill:deep-research",
"family": "skill",
"description": "Deep research and investigation workflows.",
"modules": [
"research-apis"
]
}
]
}

package-lock.json (generated, 6 lines changed)

@@ -1133,9 +1133,9 @@
}
},
"node_modules/flatted": {
-"version": "3.3.3",
-"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz",
-"integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==",
+"version": "3.4.2",
+"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.4.2.tgz",
+"integrity": "sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==",
"dev": true,
"license": "ISC"
},

rules/rust/coding-style.md (new file, 151 lines)

@@ -0,0 +1,151 @@
---
paths:
- "**/*.rs"
---
# Rust Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with Rust-specific content.
## Formatting
- **rustfmt** for enforcement — always run `cargo fmt` before committing
- **clippy** for lints — `cargo clippy -- -D warnings` (treat warnings as errors)
- 4-space indent (rustfmt default)
- Max line width: 100 characters (rustfmt default)
## Immutability
Rust variables are immutable by default — embrace this:
- Use `let` by default; only use `let mut` when mutation is required
- Prefer returning new values over mutating in place
- Use `Cow<'_, T>` when a function may or may not need to allocate
```rust
use std::borrow::Cow;

// GOOD — immutable by default, new value returned
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        Cow::Borrowed(input)
    }
}

// BAD — unnecessary mutation
fn normalize_bad(input: &mut String) {
    *input = input.replace(' ', "_");
}
```
## Naming
Follow standard Rust conventions:
- `snake_case` for functions, methods, variables, modules, crates
- `PascalCase` (UpperCamelCase) for types, traits, enums, type parameters
- `SCREAMING_SNAKE_CASE` for constants and statics
- Lifetimes: short lowercase (`'a`, `'de`) — descriptive names for complex cases (`'input`)
## Ownership and Borrowing
- Borrow (`&T`) by default; take ownership only when you need to store or consume
- Never clone to satisfy the borrow checker without understanding the root cause
- Accept `&str` over `String`, `&[T]` over `Vec<T>` in function parameters
- Use `impl Into<String>` for constructors that need to own a `String`
```rust
// GOOD — borrows when ownership isn't needed
fn word_count(text: &str) -> usize {
    text.split_whitespace().count()
}

// GOOD — takes ownership in constructor via Into
fn new(name: impl Into<String>) -> Self {
    Self { name: name.into() }
}

// BAD — takes String when &str suffices
fn word_count_bad(text: String) -> usize {
    text.split_whitespace().count()
}
```
## Error Handling
- Use `Result<T, E>` and `?` for propagation — never `unwrap()` in production code
- **Libraries**: define typed errors with `thiserror`
- **Applications**: use `anyhow` for flexible error context
- Add context with `.with_context(|| format!("failed to ..."))?`
- Reserve `unwrap()` / `expect()` for tests and truly unreachable states
```rust
// GOOD — library error with thiserror
#[derive(Debug, thiserror::Error)]
pub enum ConfigError {
    #[error("failed to read config: {0}")]
    Io(#[from] std::io::Error),
    #[error("invalid config format: {0}")]
    Parse(String),
}

// GOOD — application error with anyhow
use anyhow::Context;

fn load_config(path: &str) -> anyhow::Result<Config> {
    let content = std::fs::read_to_string(path)
        .with_context(|| format!("failed to read {path}"))?;
    toml::from_str(&content)
        .with_context(|| format!("failed to parse {path}"))
}
```
## Iterators Over Loops
Prefer iterator chains for transformations; use loops for complex control flow:
```rust
// GOOD — declarative and composable
let active_emails: Vec<&str> = users.iter()
    .filter(|u| u.is_active)
    .map(|u| u.email.as_str())
    .collect();

// GOOD — loop for complex logic with early returns
for user in &users {
    if let Some(verified) = verify_email(&user.email)? {
        send_welcome(&verified)?;
    }
}
```
## Module Organization
Organize by domain, not by type:
```text
src/
├── main.rs
├── lib.rs
├── auth/          # Domain module
│   ├── mod.rs
│   ├── token.rs
│   └── middleware.rs
├── orders/        # Domain module
│   ├── mod.rs
│   ├── model.rs
│   └── service.rs
└── db/            # Infrastructure
    ├── mod.rs
    └── pool.rs
```
## Visibility
- Default to private; use `pub(crate)` for internal sharing
- Only mark `pub` what is part of the crate's public API
- Re-export public API from `lib.rs`
## References
See skill: `rust-patterns` for comprehensive Rust idioms and patterns.

rules/rust/hooks.md (new file, 16 lines)

@@ -0,0 +1,16 @@
---
paths:
- "**/*.rs"
- "**/Cargo.toml"
---
# Rust Hooks
> This file extends [common/hooks.md](../common/hooks.md) with Rust-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
- **cargo fmt**: Auto-format `.rs` files after edit
- **cargo clippy**: Run lint checks after editing Rust files
- **cargo check**: Verify compilation after changes (faster than `cargo build`)
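A minimal `~/.claude/settings.json` sketch wiring these up, assuming the standard Claude Code hooks schema (the matcher and commands are illustrative; adjust to your project):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "cargo fmt --all" },
          { "type": "command", "command": "cargo clippy --quiet" },
          { "type": "command", "command": "cargo check --quiet" }
        ]
      }
    ]
  }
}
```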

rules/rust/patterns.md Normal file

@@ -0,0 +1,168 @@
---
paths:
- "**/*.rs"
---
# Rust Patterns
> This file extends [common/patterns.md](../common/patterns.md) with Rust-specific content.
## Repository Pattern with Traits
Encapsulate data access behind a trait:
```rust
pub trait OrderRepository: Send + Sync {
fn find_by_id(&self, id: u64) -> Result<Option<Order>, StorageError>;
fn find_all(&self) -> Result<Vec<Order>, StorageError>;
fn save(&self, order: &Order) -> Result<Order, StorageError>;
fn delete(&self, id: u64) -> Result<(), StorageError>;
}
```
Concrete implementations handle storage details (Postgres, SQLite, in-memory for tests).
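An in-memory implementation for tests might look like the following sketch. The `Order` fields, the `StorageError` stub, and the trimmed two-method trait are assumptions for a self-contained example:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Clone, Debug, PartialEq)]
pub struct Order { pub id: u64, pub total_cents: u64 }

#[derive(Debug)]
pub struct StorageError;

pub trait OrderRepository: Send + Sync {
    fn find_by_id(&self, id: u64) -> Result<Option<Order>, StorageError>;
    fn save(&self, order: &Order) -> Result<Order, StorageError>;
}

#[derive(Default)]
pub struct InMemoryOrderRepository {
    // Mutex keeps the repo Sync, matching the trait's Send + Sync bound.
    orders: Mutex<HashMap<u64, Order>>,
}

impl OrderRepository for InMemoryOrderRepository {
    fn find_by_id(&self, id: u64) -> Result<Option<Order>, StorageError> {
        Ok(self.orders.lock().unwrap().get(&id).cloned())
    }
    fn save(&self, order: &Order) -> Result<Order, StorageError> {
        self.orders.lock().unwrap().insert(order.id, order.clone());
        Ok(order.clone())
    }
}

fn main() {
    let repo = InMemoryOrderRepository::default();
    repo.save(&Order { id: 1, total_cents: 500 }).unwrap();
    assert_eq!(repo.find_by_id(1).unwrap().unwrap().total_cents, 500);
    assert!(repo.find_by_id(2).unwrap().is_none());
    println!("ok");
}
```

Tests construct the in-memory variant; production wiring swaps in a Postgres or SQLite implementation behind the same trait.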
## Service Layer
Business logic in service structs; inject dependencies via constructor:
```rust
pub struct OrderService {
repo: Box<dyn OrderRepository>,
payment: Box<dyn PaymentGateway>,
}
impl OrderService {
pub fn new(repo: Box<dyn OrderRepository>, payment: Box<dyn PaymentGateway>) -> Self {
Self { repo, payment }
}
pub fn place_order(&self, request: CreateOrderRequest) -> anyhow::Result<OrderSummary> {
let order = Order::from(request);
self.payment.charge(order.total())?;
let saved = self.repo.save(&order)?;
Ok(OrderSummary::from(saved))
}
}
```
## Newtype Pattern for Type Safety
Prevent argument mix-ups with distinct wrapper types:
```rust
struct UserId(u64);
struct OrderId(u64);
fn get_order(user: UserId, order: OrderId) -> anyhow::Result<Order> {
// Can't accidentally swap user and order IDs at call sites
todo!()
}
```
## Enum State Machines
Model states as enums — make illegal states unrepresentable:
```rust
enum ConnectionState {
Disconnected,
Connecting { attempt: u32 },
Connected { session_id: String },
Failed { reason: String, retries: u32 },
}
fn handle(state: &ConnectionState) {
match state {
ConnectionState::Disconnected => connect(),
ConnectionState::Connecting { attempt } if *attempt > 3 => abort(),
ConnectionState::Connecting { .. } => wait(),
ConnectionState::Connected { session_id } => use_session(session_id),
ConnectionState::Failed { retries, .. } if *retries < 5 => retry(),
ConnectionState::Failed { reason, .. } => log_failure(reason),
}
}
```
Always match exhaustively — no wildcard `_` for business-critical enums.
## Builder Pattern
Use for structs with many optional parameters:
```rust
pub struct ServerConfig {
host: String,
port: u16,
max_connections: usize,
}
impl ServerConfig {
pub fn builder(host: impl Into<String>, port: u16) -> ServerConfigBuilder {
ServerConfigBuilder {
host: host.into(),
port,
max_connections: 100,
}
}
}
pub struct ServerConfigBuilder {
host: String,
port: u16,
max_connections: usize,
}
impl ServerConfigBuilder {
pub fn max_connections(mut self, n: usize) -> Self {
self.max_connections = n;
self
}
pub fn build(self) -> ServerConfig {
ServerConfig {
host: self.host,
port: self.port,
max_connections: self.max_connections,
}
}
}
```
## Sealed Traits for Extensibility Control
Use a private module to seal a trait, preventing external implementations:
```rust
mod private {
pub trait Sealed {}
}
pub trait Format: private::Sealed {
fn encode(&self, data: &[u8]) -> Vec<u8>;
}
pub struct Json;
impl private::Sealed for Json {}
impl Format for Json {
fn encode(&self, data: &[u8]) -> Vec<u8> { todo!() }
}
```
## API Response Envelope
Consistent API responses using a generic enum:
```rust
#[derive(Debug, serde::Serialize)]
#[serde(tag = "status")]
pub enum ApiResponse<T: serde::Serialize> {
#[serde(rename = "ok")]
Ok { data: T },
#[serde(rename = "error")]
Error { message: String },
}
```
## References
See skill: `rust-patterns` for comprehensive patterns including ownership, traits, generics, concurrency, and async.

rules/rust/security.md Normal file

@@ -0,0 +1,141 @@
---
paths:
- "**/*.rs"
---
# Rust Security
> This file extends [common/security.md](../common/security.md) with Rust-specific content.
## Secrets Management
- Never hardcode API keys, tokens, or credentials in source code
- Use environment variables: `std::env::var("API_KEY")`
- Fail fast if required secrets are missing at startup
- Keep `.env` files in `.gitignore`
```rust
// BAD
const API_KEY: &str = "sk-abc123...";
// GOOD — environment variable with early validation
fn load_api_key() -> anyhow::Result<String> {
std::env::var("PAYMENT_API_KEY")
.context("PAYMENT_API_KEY must be set")
}
```
## SQL Injection Prevention
- Always use parameterized queries — never format user input into SQL strings
- Use query builder or ORM (sqlx, diesel, sea-orm) with bind parameters
```rust
// BAD — SQL injection via format string
let query = format!("SELECT * FROM users WHERE name = '{name}'");
sqlx::query(&query).fetch_one(&pool).await?;
// GOOD — parameterized query with sqlx
// Placeholder syntax varies by backend: Postgres: $1 | MySQL: ? | SQLite: $1
sqlx::query("SELECT * FROM users WHERE name = $1")
.bind(&name)
.fetch_one(&pool)
.await?;
```
## Input Validation
- Validate all user input at system boundaries before processing
- Use the type system to enforce invariants (newtype pattern)
- Parse, don't validate — convert unstructured data to typed structs at the boundary
- Reject invalid input with clear error messages
```rust
// Parse, don't validate — invalid states are unrepresentable
pub struct Email(String);
impl Email {
pub fn parse(input: &str) -> Result<Self, ValidationError> {
let trimmed = input.trim();
let at_pos = trimmed.find('@')
.filter(|&p| p > 0 && p < trimmed.len() - 1)
.ok_or_else(|| ValidationError::InvalidEmail(input.to_string()))?;
let domain = &trimmed[at_pos + 1..];
if trimmed.len() > 254 || !domain.contains('.') {
return Err(ValidationError::InvalidEmail(input.to_string()));
}
// For production use, prefer a validated email crate (e.g., `email_address`)
Ok(Self(trimmed.to_string()))
}
pub fn as_str(&self) -> &str {
&self.0
}
}
```
## Unsafe Code
- Minimize `unsafe` blocks — prefer safe abstractions
- Every `unsafe` block must have a `// SAFETY:` comment explaining the invariant
- Never use `unsafe` to bypass the borrow checker for convenience
- Audit all `unsafe` code during review — it is a red flag without justification
- Prefer safe FFI wrappers around C libraries
```rust
// GOOD — safety comment documents ALL required invariants
let widget: &Widget = {
// SAFETY: `ptr` is non-null, aligned, points to an initialized Widget,
// and no mutable references or mutations exist for its lifetime.
unsafe { &*ptr }
};
// BAD — no safety justification
unsafe { &*ptr }
```
## Dependency Security
- Run `cargo audit` to scan for known CVEs in dependencies
- Run `cargo deny check` for license and advisory compliance
- Use `cargo tree` to audit transitive dependencies
- Keep dependencies updated — set up Dependabot or Renovate
- Minimize dependency count — evaluate before adding new crates
```bash
# Security audit
cargo audit
# Deny advisories, duplicate versions, and restricted licenses
cargo deny check
# Inspect dependency tree
cargo tree
cargo tree -d # Show duplicates only
```
## Error Messages
- Never expose internal paths, stack traces, or database errors in API responses
- Log detailed errors server-side; return generic messages to clients
- Use `tracing` or `log` for structured server-side logging
```rust
// Map errors to appropriate status codes and generic messages
// (Example uses axum; adapt the response type to your framework)
match order_service.find_by_id(id) {
Ok(order) => Ok((StatusCode::OK, Json(order))),
Err(ServiceError::NotFound(_)) => {
tracing::info!(order_id = id, "order not found");
Err((StatusCode::NOT_FOUND, "Resource not found"))
}
Err(e) => {
tracing::error!(order_id = id, error = %e, "unexpected error");
Err((StatusCode::INTERNAL_SERVER_ERROR, "Internal server error"))
}
}
```
## References
See skill: `rust-patterns` for unsafe code guidelines and ownership patterns.
See skill: `security-review` for general security checklists.

rules/rust/testing.md Normal file

@@ -0,0 +1,154 @@
---
paths:
- "**/*.rs"
---
# Rust Testing
> This file extends [common/testing.md](../common/testing.md) with Rust-specific content.
## Test Framework
- **`#[test]`** with `#[cfg(test)]` modules for unit tests
- **rstest** for parameterized tests and fixtures
- **proptest** for property-based testing
- **mockall** for trait-based mocking
- **`#[tokio::test]`** for async tests
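Most frameworks above are illustrated later in this file; proptest is not, so here is the core idea sketched with std only. Assert a property over many generated inputs; the `reverse_twice_is_identity` property and the tiny LCG generator are illustrative stand-ins, while real proptest adds strategies, shrinking, and the `proptest!` macro:

```rust
// The property under test: reversing a vector twice yields the original.
fn reverse_twice_is_identity(v: &[i32]) -> bool {
    let mut r = v.to_vec();
    r.reverse();
    r.reverse();
    r == v
}

fn main() {
    // A deterministic LCG standing in for proptest's input strategies.
    let mut seed: u64 = 42;
    for len in 0..50 {
        let v: Vec<i32> = (0..len)
            .map(|_| {
                seed = seed
                    .wrapping_mul(6364136223846793005)
                    .wrapping_add(1442695040888963407);
                (seed >> 33) as i32
            })
            .collect();
        assert!(reverse_twice_is_identity(&v));
    }
    println!("all cases passed");
}
```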
## Test Organization
```text
my_crate/
├── src/
│ ├── lib.rs # Unit tests in #[cfg(test)] modules
│ ├── auth/
│ │ └── mod.rs # #[cfg(test)] mod tests { ... }
│ └── orders/
│ └── service.rs # #[cfg(test)] mod tests { ... }
├── tests/ # Integration tests (each file = separate binary)
│ ├── api_test.rs
│ ├── db_test.rs
│ └── common/ # Shared test utilities
│ └── mod.rs
└── benches/ # Criterion benchmarks
└── benchmark.rs
```
Unit tests go inside `#[cfg(test)]` modules in the same file. Integration tests go in `tests/`.
## Unit Test Pattern
```rust
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn creates_user_with_valid_email() {
let user = User::new("Alice", "alice@example.com").unwrap();
assert_eq!(user.name, "Alice");
}
#[test]
fn rejects_invalid_email() {
let result = User::new("Bob", "not-an-email");
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("invalid email"));
}
}
```
## Parameterized Tests
```rust
use rstest::rstest;
#[rstest]
#[case("hello", 5)]
#[case("", 0)]
#[case("rust", 4)]
fn test_string_length(#[case] input: &str, #[case] expected: usize) {
assert_eq!(input.len(), expected);
}
```
## Async Tests
```rust
#[tokio::test]
async fn fetches_data_successfully() {
let client = TestClient::new().await;
let result = client.get("/data").await;
assert!(result.is_ok());
}
```
## Mocking with mockall
Define traits in production code; generate mocks in test modules:
```rust
// Production trait — pub so integration tests can import it
pub trait UserRepository {
fn find_by_id(&self, id: u64) -> Option<User>;
}
#[cfg(test)]
mod tests {
use super::*;
use mockall::predicate::eq;
mockall::mock! {
pub Repo {}
impl UserRepository for Repo {
fn find_by_id(&self, id: u64) -> Option<User>;
}
}
#[test]
fn service_returns_user_when_found() {
let mut mock = MockRepo::new();
mock.expect_find_by_id()
.with(eq(42))
.times(1)
.returning(|_| Some(User { id: 42, name: "Alice".into() }));
let service = UserService::new(Box::new(mock));
let user = service.get_user(42).unwrap();
assert_eq!(user.name, "Alice");
}
}
```
## Test Naming
Use descriptive names that explain the scenario:
- `creates_user_with_valid_email()`
- `rejects_order_when_insufficient_stock()`
- `returns_none_when_not_found()`
## Coverage
- Target 80%+ line coverage
- Use **cargo-llvm-cov** for coverage reporting
- Focus on business logic — exclude generated code and FFI bindings
```bash
cargo llvm-cov # Summary
cargo llvm-cov --html # HTML report
cargo llvm-cov --fail-under-lines 80 # Fail if below threshold
```
## Testing Commands
```bash
cargo test # Run all tests
cargo test -- --nocapture # Show println output
cargo test test_name # Run tests matching pattern
cargo test --lib # Unit tests only
cargo test --test api_test # Specific integration test (tests/api_test.rs)
cargo test --doc # Doc tests only
```
## References
See skill: `rust-testing` for comprehensive testing patterns including property-based testing, fixtures, and benchmarking with Criterion.


@@ -26,7 +26,7 @@
"properties": {
"id": {
"type": "string",
"pattern": "^(baseline|lang|framework|capability):[a-z0-9-]+$"
"pattern": "^(baseline|lang|framework|capability|agent|skill):[a-z0-9-]+$"
},
"family": {
"type": "string",
@@ -34,7 +34,9 @@
"baseline",
"language",
"framework",
"capability"
"capability",
"agent",
"skill"
]
},
"description": {


@@ -0,0 +1,280 @@
#!/usr/bin/env node
/**
* Governance Event Capture Hook
*
* PreToolUse/PostToolUse hook that detects governance-relevant events
* and writes them to the governance_events table in the state store.
*
* Captured event types:
* - secret_detected: Hardcoded secrets in tool input/output
* - policy_violation: Actions that violate configured policies
* - security_finding: Security-relevant tool invocations
* - approval_requested: Operations requiring explicit approval
*
* Enable: Set ECC_GOVERNANCE_CAPTURE=1
* Configure session: Set ECC_SESSION_ID for session correlation
*/
'use strict';
const crypto = require('crypto');
const MAX_STDIN = 1024 * 1024;
// Patterns that indicate potential hardcoded secrets
const SECRET_PATTERNS = [
{ name: 'aws_key', pattern: /(?:AKIA|ASIA)[A-Z0-9]{16}/i },
{ name: 'generic_secret', pattern: /(?:secret|password|token|api[_-]?key)\s*[:=]\s*["'][^"']{8,}/i },
{ name: 'private_key', pattern: /-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----/ },
{ name: 'jwt', pattern: /eyJ[A-Za-z0-9_-]{10,}\.eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}/ },
{ name: 'github_token', pattern: /gh[pousr]_[A-Za-z0-9_]{36,}/ },
];
// Tool names that represent security-relevant operations
const SECURITY_RELEVANT_TOOLS = new Set([
'Bash', // Could execute arbitrary commands
]);
// Commands that require governance approval
const APPROVAL_COMMANDS = [
/git\s+push\s+.*--force/,
/git\s+reset\s+--hard/,
/rm\s+-rf?\s/,
/DROP\s+(?:TABLE|DATABASE)/i,
/DELETE\s+FROM\s+\w+\s*(?:;|$)/i,
];
// File patterns that indicate policy-sensitive paths
const SENSITIVE_PATHS = [
/\.env(?:\.|$)/,
/credentials/i,
/secrets?\./i,
/\.pem$/,
/\.key$/,
/id_rsa/,
];
/**
* Generate a unique event ID.
*/
function generateEventId() {
return `gov-${Date.now()}-${crypto.randomBytes(4).toString('hex')}`;
}
/**
* Scan text content for hardcoded secrets.
 * Returns array of { name } for each detected secret.
*/
function detectSecrets(text) {
if (!text || typeof text !== 'string') return [];
const findings = [];
for (const { name, pattern } of SECRET_PATTERNS) {
if (pattern.test(text)) {
findings.push({ name });
}
}
return findings;
}
/**
* Check if a command requires governance approval.
*/
function detectApprovalRequired(command) {
if (!command || typeof command !== 'string') return [];
const findings = [];
for (const pattern of APPROVAL_COMMANDS) {
if (pattern.test(command)) {
findings.push({ pattern: pattern.source });
}
}
return findings;
}
/**
* Check if a file path is policy-sensitive.
*/
function detectSensitivePath(filePath) {
if (!filePath || typeof filePath !== 'string') return false;
return SENSITIVE_PATHS.some(pattern => pattern.test(filePath));
}
/**
* Analyze a hook input payload and return governance events to capture.
*
* @param {Object} input - Parsed hook input (tool_name, tool_input, tool_output)
* @param {Object} [context] - Additional context (sessionId, hookPhase)
* @returns {Array<Object>} Array of governance event objects
*/
function analyzeForGovernanceEvents(input, context = {}) {
const events = [];
const toolName = input.tool_name || '';
const toolInput = input.tool_input || {};
const toolOutput = typeof input.tool_output === 'string' ? input.tool_output : '';
const sessionId = context.sessionId || null;
const hookPhase = context.hookPhase || 'unknown';
// 1. Secret detection in tool input content
const inputText = typeof toolInput === 'object'
? JSON.stringify(toolInput)
: String(toolInput);
const inputSecrets = detectSecrets(inputText);
const outputSecrets = detectSecrets(toolOutput);
const allSecrets = [...inputSecrets, ...outputSecrets];
if (allSecrets.length > 0) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'secret_detected',
payload: {
toolName,
hookPhase,
secretTypes: allSecrets.map(s => s.name),
location: inputSecrets.length > 0 ? 'input' : 'output',
severity: 'critical',
},
resolvedAt: null,
resolution: null,
});
}
// 2. Approval-required commands (Bash only)
if (toolName === 'Bash') {
const command = toolInput.command || '';
const approvalFindings = detectApprovalRequired(command);
if (approvalFindings.length > 0) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'approval_requested',
payload: {
toolName,
hookPhase,
command: command.slice(0, 200),
matchedPatterns: approvalFindings.map(f => f.pattern),
severity: 'high',
},
resolvedAt: null,
resolution: null,
});
}
}
// 3. Policy violation: writing to sensitive paths
const filePath = toolInput.file_path || toolInput.path || '';
if (filePath && detectSensitivePath(filePath)) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'policy_violation',
payload: {
toolName,
hookPhase,
filePath: filePath.slice(0, 200),
reason: 'sensitive_file_access',
severity: 'warning',
},
resolvedAt: null,
resolution: null,
});
}
// 4. Security-relevant tool usage tracking
if (SECURITY_RELEVANT_TOOLS.has(toolName) && hookPhase === 'post') {
const command = toolInput.command || '';
const hasElevated = /sudo\s/.test(command) || /chmod\s/.test(command) || /chown\s/.test(command);
if (hasElevated) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'security_finding',
payload: {
toolName,
hookPhase,
command: command.slice(0, 200),
reason: 'elevated_privilege_command',
severity: 'medium',
},
resolvedAt: null,
resolution: null,
});
}
}
return events;
}
/**
* Core hook logic — exported so run-with-flags.js can call directly.
*
* @param {string} rawInput - Raw JSON string from stdin
* @returns {string} The original input (pass-through)
*/
function run(rawInput) {
// Gate on feature flag
if (String(process.env.ECC_GOVERNANCE_CAPTURE || '').toLowerCase() !== '1') {
return rawInput;
}
try {
const input = JSON.parse(rawInput);
const sessionId = process.env.ECC_SESSION_ID || null;
const hookPhase = process.env.CLAUDE_HOOK_EVENT_NAME || 'unknown';
const events = analyzeForGovernanceEvents(input, {
sessionId,
hookPhase: hookPhase.startsWith('Pre') ? 'pre' : 'post',
});
if (events.length > 0) {
// Write events to stderr as JSON-lines for the caller to capture.
// The state store write is async and handled by a separate process
// to avoid blocking the hook pipeline.
for (const event of events) {
process.stderr.write(
`[governance] ${JSON.stringify(event)}\n`
);
}
}
} catch {
// Silently ignore parse errors — never block the tool pipeline.
}
return rawInput;
}
// ── stdin entry point ────────────────────────────────
if (require.main === module) {
let raw = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (raw.length < MAX_STDIN) {
const remaining = MAX_STDIN - raw.length;
raw += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => {
const result = run(raw);
process.stdout.write(result);
});
}
module.exports = {
APPROVAL_COMMANDS,
SECRET_PATTERNS,
SECURITY_RELEVANT_TOOLS,
SENSITIVE_PATHS,
analyzeForGovernanceEvents,
detectApprovalRequired,
detectSecrets,
detectSensitivePath,
generateEventId,
run,
};


@@ -21,6 +21,7 @@ const {
readFile,
writeFile,
runCommand,
stripAnsi,
log
} = require('../lib/utils');
@@ -58,8 +59,9 @@ function extractSessionSummary(transcriptPath) {
: Array.isArray(rawContent)
? rawContent.map(c => (c && c.text) || '').join(' ')
: '';
if (text.trim()) {
userMessages.push(text.trim().slice(0, 200));
const cleaned = stripAnsi(text).trim();
if (cleaned) {
userMessages.push(cleaned.slice(0, 200));
}
}


@@ -15,6 +15,7 @@ const {
findFiles,
ensureDir,
readFile,
stripAnsi,
log,
output
} = require('../lib/utils');
@@ -42,7 +43,8 @@ async function main() {
const content = readFile(latest.path);
if (content && !content.includes('[Session context goes here]')) {
// Only inject if the session has actual content (not the blank template)
output(`Previous session summary:\n${content}`);
// Strip ANSI escape codes that may have leaked from terminal output (#642)
output(`Previous session summary:\n${stripAnsi(content)}`);
}
}


@@ -0,0 +1,230 @@
'use strict';
const fs = require('fs');
const path = require('path');
/**
* Parse YAML frontmatter from a markdown string.
* Returns { frontmatter: {}, body: string }.
*/
function parseFrontmatter(content) {
const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---\r?\n([\s\S]*)$/);
if (!match) {
return { frontmatter: {}, body: content };
}
const frontmatter = {};
for (const line of match[1].split('\n')) {
const colonIdx = line.indexOf(':');
if (colonIdx === -1) continue;
const key = line.slice(0, colonIdx).trim();
let value = line.slice(colonIdx + 1).trim();
// Handle JSON arrays (e.g. tools: ["Read", "Grep"])
if (value.startsWith('[') && value.endsWith(']')) {
try {
value = JSON.parse(value);
} catch {
// keep as string
}
}
// Strip surrounding quotes
if (typeof value === 'string' && value.startsWith('"') && value.endsWith('"')) {
value = value.slice(1, -1);
}
frontmatter[key] = value;
}
return { frontmatter, body: match[2] };
}
/**
* Extract the first meaningful paragraph from agent body as a summary.
* Skips headings and blank lines, returns up to maxSentences sentences.
*/
function extractSummary(body, maxSentences = 1) {
const lines = body.split('\n');
const paragraphs = [];
let current = [];
for (const line of lines) {
const trimmed = line.trim();
if (trimmed === '') {
if (current.length > 0) {
paragraphs.push(current.join(' '));
current = [];
}
continue;
}
// Skip headings
if (trimmed.startsWith('#')) {
if (current.length > 0) {
paragraphs.push(current.join(' '));
current = [];
}
continue;
}
// Skip list items, code blocks, etc.
if (trimmed.startsWith('```') || trimmed.startsWith('- **') || trimmed.startsWith('|')) {
continue;
}
current.push(trimmed);
}
if (current.length > 0) {
paragraphs.push(current.join(' '));
}
// Find first non-empty paragraph
const firstParagraph = paragraphs.find(p => p.length > 0);
if (!firstParagraph) {
return '';
}
// Extract up to maxSentences sentences
const sentences = firstParagraph.match(/[^.!?]+[.!?]+/g) || [firstParagraph];
return sentences.slice(0, maxSentences).join(' ').trim();
}
/**
* Load and parse a single agent file.
* Returns the full agent object with frontmatter and body.
*/
function loadAgent(filePath) {
const content = fs.readFileSync(filePath, 'utf8');
const { frontmatter, body } = parseFrontmatter(content);
const fileName = path.basename(filePath, '.md');
return {
fileName,
name: frontmatter.name || fileName,
description: frontmatter.description || '',
tools: Array.isArray(frontmatter.tools) ? frontmatter.tools : [],
model: frontmatter.model || 'sonnet',
body,
byteSize: Buffer.byteLength(content, 'utf8'),
};
}
/**
* Load all agents from a directory.
*/
function loadAgents(agentsDir) {
if (!fs.existsSync(agentsDir)) {
return [];
}
return fs.readdirSync(agentsDir)
.filter(f => f.endsWith('.md'))
.sort()
.map(f => loadAgent(path.join(agentsDir, f)));
}
/**
* Compress an agent to its catalog entry (metadata only).
* This is the minimal representation needed for agent selection.
*/
function compressToCatalog(agent) {
return {
name: agent.name,
description: agent.description,
tools: agent.tools,
model: agent.model,
};
}
/**
* Compress an agent to a summary entry (metadata + first paragraph).
* More context than catalog, less than full body.
*/
function compressToSummary(agent) {
return {
name: agent.name,
description: agent.description,
tools: agent.tools,
model: agent.model,
summary: extractSummary(agent.body),
};
}
/**
* Build a full compressed catalog from a directory of agents.
*
* Modes:
* - 'catalog': name, description, tools, model only (~2-3k tokens for 27 agents)
* - 'summary': catalog + first paragraph summary (~4-5k tokens)
* - 'full': no compression, full body included
*
* Returns { agents: [], stats: { totalAgents, originalBytes, compressedTokenEstimate } }
*/
function buildAgentCatalog(agentsDir, options = {}) {
const mode = options.mode || 'catalog';
const filter = options.filter || null;
let agents = loadAgents(agentsDir);
if (typeof filter === 'function') {
agents = agents.filter(filter);
}
const originalBytes = agents.reduce((sum, a) => sum + a.byteSize, 0);
let compressed;
if (mode === 'catalog') {
compressed = agents.map(compressToCatalog);
} else if (mode === 'summary') {
compressed = agents.map(compressToSummary);
} else {
compressed = agents.map(a => ({
name: a.name,
description: a.description,
tools: a.tools,
model: a.model,
body: a.body,
}));
}
const compressedJson = JSON.stringify(compressed);
// Rough token estimate: ~4 chars per token for English text
const compressedTokenEstimate = Math.ceil(compressedJson.length / 4);
return {
agents: compressed,
stats: {
totalAgents: agents.length,
originalBytes,
compressedBytes: Buffer.byteLength(compressedJson, 'utf8'),
compressedTokenEstimate,
mode,
},
};
}
/**
* Lazy-load a single agent's full content by name from a directory.
* Returns null if not found.
*/
function lazyLoadAgent(agentsDir, agentName) {
const filePath = path.join(agentsDir, `${agentName}.md`);
if (!fs.existsSync(filePath)) {
return null;
}
return loadAgent(filePath);
}
module.exports = {
buildAgentCatalog,
compressToCatalog,
compressToSummary,
extractSummary,
lazyLoadAgent,
loadAgent,
loadAgents,
parseFrontmatter,
};

scripts/lib/inspection.js Normal file

@@ -0,0 +1,212 @@
'use strict';
const DEFAULT_FAILURE_THRESHOLD = 3;
const DEFAULT_WINDOW_SIZE = 50;
const FAILURE_OUTCOMES = new Set(['failure', 'failed', 'error']);
/**
* Normalize a failure reason string for grouping.
* Strips timestamps, UUIDs, file paths, and numeric suffixes.
*/
function normalizeFailureReason(reason) {
if (!reason || typeof reason !== 'string') {
return 'unknown';
}
return reason
.trim()
.toLowerCase()
// Strip ISO timestamps (note: already lowercased, so t/z not T/Z)
.replace(/\d{4}-\d{2}-\d{2}[t ]\d{2}:\d{2}:\d{2}[.\dz]*/g, '<timestamp>')
// Strip UUIDs (already lowercased)
.replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/g, '<uuid>')
// Strip file paths
.replace(/\/[\w./-]+/g, '<path>')
// Collapse whitespace
.replace(/\s+/g, ' ')
.trim();
}
/**
* Group skill runs by skill ID and normalized failure reason.
*
* @param {Array} skillRuns - Array of skill run objects
* @returns {Map<string, { skillId: string, normalizedReason: string, runs: Array }>}
*/
function groupFailures(skillRuns) {
const groups = new Map();
for (const run of skillRuns) {
const outcome = String(run.outcome || '').toLowerCase();
if (!FAILURE_OUTCOMES.has(outcome)) {
continue;
}
const normalizedReason = normalizeFailureReason(run.failureReason);
const key = `${run.skillId}::${normalizedReason}`;
if (!groups.has(key)) {
groups.set(key, {
skillId: run.skillId,
normalizedReason,
runs: [],
});
}
groups.get(key).runs.push(run);
}
return groups;
}
/**
* Detect recurring failure patterns from skill runs.
*
* @param {Array} skillRuns - Array of skill run objects (newest first)
* @param {Object} [options]
* @param {number} [options.threshold=3] - Minimum failure count to trigger pattern detection
* @returns {Array<Object>} Array of detected patterns sorted by count descending
*/
function detectPatterns(skillRuns, options = {}) {
const threshold = options.threshold ?? DEFAULT_FAILURE_THRESHOLD;
const groups = groupFailures(skillRuns);
const patterns = [];
for (const [, group] of groups) {
if (group.runs.length < threshold) {
continue;
}
const sortedRuns = [...group.runs].sort(
(a, b) => (b.createdAt || '').localeCompare(a.createdAt || '')
);
const firstSeen = sortedRuns[sortedRuns.length - 1].createdAt || null;
const lastSeen = sortedRuns[0].createdAt || null;
const sessionIds = [...new Set(sortedRuns.map(r => r.sessionId).filter(Boolean))];
const versions = [...new Set(sortedRuns.map(r => r.skillVersion).filter(Boolean))];
// Collect unique raw failure reasons for this normalized group
const rawReasons = [...new Set(sortedRuns.map(r => r.failureReason).filter(Boolean))];
patterns.push({
skillId: group.skillId,
normalizedReason: group.normalizedReason,
count: group.runs.length,
firstSeen,
lastSeen,
sessionIds,
versions,
rawReasons,
runIds: sortedRuns.map(r => r.id),
});
}
// Sort by count descending, then by lastSeen descending
return patterns.sort((a, b) => {
if (b.count !== a.count) return b.count - a.count;
return (b.lastSeen || '').localeCompare(a.lastSeen || '');
});
}
/**
* Generate an inspection report from detected patterns.
*
* @param {Array} patterns - Output from detectPatterns()
* @param {Object} [options]
* @param {string} [options.generatedAt] - ISO timestamp for the report
* @returns {Object} Inspection report
*/
function generateReport(patterns, options = {}) {
const generatedAt = options.generatedAt || new Date().toISOString();
if (patterns.length === 0) {
return {
generatedAt,
status: 'clean',
patternCount: 0,
patterns: [],
summary: 'No recurring failure patterns detected.',
};
}
const totalFailures = patterns.reduce((sum, p) => sum + p.count, 0);
const affectedSkills = [...new Set(patterns.map(p => p.skillId))];
return {
generatedAt,
status: 'attention_needed',
patternCount: patterns.length,
totalFailures,
affectedSkills,
patterns: patterns.map(p => ({
skillId: p.skillId,
normalizedReason: p.normalizedReason,
count: p.count,
firstSeen: p.firstSeen,
lastSeen: p.lastSeen,
sessionIds: p.sessionIds,
versions: p.versions,
rawReasons: p.rawReasons.slice(0, 5),
suggestedAction: suggestAction(p),
})),
summary: `Found ${patterns.length} recurring failure pattern(s) across ${affectedSkills.length} skill(s) (${totalFailures} total failures).`,
};
}
/**
* Suggest a remediation action based on pattern characteristics.
*/
function suggestAction(pattern) {
const reason = pattern.normalizedReason;
if (reason.includes('timeout')) {
return 'Increase timeout or optimize skill execution time.';
}
if (reason.includes('permission') || reason.includes('denied') || reason.includes('auth')) {
return 'Check tool permissions and authentication configuration.';
}
if (reason.includes('not found') || reason.includes('missing')) {
return 'Verify required files/dependencies exist before skill execution.';
}
if (reason.includes('parse') || reason.includes('syntax') || reason.includes('json')) {
return 'Review input/output format expectations and add validation.';
}
if (pattern.versions.length > 1) {
return 'Failure spans multiple versions. Consider rollback to last stable version.';
}
return 'Investigate root cause and consider adding error handling.';
}
/**
* Run full inspection pipeline: query skill runs, detect patterns, generate report.
*
 * @param {Object} store - State store instance with a getStatus method
* @param {Object} [options]
* @param {number} [options.threshold] - Minimum failure count
* @param {number} [options.windowSize] - Number of recent skill runs to analyze
* @returns {Object} Inspection report
*/
function inspect(store, options = {}) {
const windowSize = options.windowSize ?? DEFAULT_WINDOW_SIZE;
const threshold = options.threshold ?? DEFAULT_FAILURE_THRESHOLD;
const status = store.getStatus({ recentSkillRunLimit: windowSize });
const skillRuns = status.skillRuns.recent || [];
const patterns = detectPatterns(skillRuns, { threshold });
return generateReport(patterns, { generatedAt: status.generatedAt });
}
module.exports = {
DEFAULT_FAILURE_THRESHOLD,
DEFAULT_WINDOW_SIZE,
detectPatterns,
generateReport,
groupFailures,
inspect,
normalizeFailureReason,
suggestAction,
};


@@ -10,6 +10,8 @@ const COMPONENT_FAMILY_PREFIXES = {
language: 'lang:',
framework: 'framework:',
capability: 'capability:',
agent: 'agent:',
skill: 'skill:',
};
const LEGACY_COMPAT_BASE_MODULE_IDS_BY_TARGET = Object.freeze({
claude: [


@@ -0,0 +1,89 @@
'use strict';
const fs = require('fs');
const path = require('path');
const os = require('os');
/**
* Resolve the ECC source root directory.
*
* Tries, in order:
* 1. CLAUDE_PLUGIN_ROOT env var (set by Claude Code for hooks, or by user)
* 2. Standard install location (~/.claude/) — when scripts exist there
* 3. Plugin cache auto-detection — scans ~/.claude/plugins/cache/everything-claude-code/
* 4. Fallback to ~/.claude/ (original behaviour)
*
* @param {object} [options]
* @param {string} [options.homeDir] Override home directory (for testing)
* @param {string} [options.envRoot] Override CLAUDE_PLUGIN_ROOT (for testing)
* @param {string} [options.probe] Relative path used to verify a candidate root
* contains ECC scripts. Default: 'scripts/lib/utils.js'
* @returns {string} Resolved ECC root path
*/
function resolveEccRoot(options = {}) {
const envRoot = options.envRoot !== undefined
? options.envRoot
: (process.env.CLAUDE_PLUGIN_ROOT || '');
if (envRoot && envRoot.trim()) {
return envRoot.trim();
}
const homeDir = options.homeDir || os.homedir();
const claudeDir = path.join(homeDir, '.claude');
const probe = options.probe || path.join('scripts', 'lib', 'utils.js');
// Standard install — files are copied directly into ~/.claude/
if (fs.existsSync(path.join(claudeDir, probe))) {
return claudeDir;
}
// Plugin cache — Claude Code stores marketplace plugins under
// ~/.claude/plugins/cache/<plugin-name>/<org>/<version>/
try {
const cacheBase = path.join(claudeDir, 'plugins', 'cache', 'everything-claude-code');
const orgDirs = fs.readdirSync(cacheBase, { withFileTypes: true });
for (const orgEntry of orgDirs) {
if (!orgEntry.isDirectory()) continue;
const orgPath = path.join(cacheBase, orgEntry.name);
let versionDirs;
try {
versionDirs = fs.readdirSync(orgPath, { withFileTypes: true });
} catch {
continue;
}
for (const verEntry of versionDirs) {
if (!verEntry.isDirectory()) continue;
const candidate = path.join(orgPath, verEntry.name);
if (fs.existsSync(path.join(candidate, probe))) {
return candidate;
}
}
}
} catch {
// Plugin cache doesn't exist or isn't readable — continue to fallback
}
return claudeDir;
}
/**
* Compact inline version for embedding in command .md code blocks.
*
* This is the minified form of resolveEccRoot() suitable for use in
* node -e "..." scripts where require() is not available before the
* root is known.
*
* Usage in commands:
* const _r = <paste INLINE_RESOLVE>;
* const sm = require(_r + '/scripts/lib/session-manager');
*/
const INLINE_RESOLVE = `(()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()`;
module.exports = {
resolveEccRoot,
INLINE_RESOLVE,
};

View File

@@ -464,6 +464,24 @@ function countInFile(filePath, pattern) {
return matches ? matches.length : 0;
}
/**
* Strip all ANSI escape sequences from a string.
*
* Handles:
* - CSI sequences: \x1b[ … <letter> (colors, cursor movement, erase, etc.)
* - OSC sequences: \x1b] … BEL/ST (window titles, hyperlinks)
* - Charset selection: \x1b(B
* - Bare ESC + single letter: \x1b <letter> (e.g. \x1bM for reverse index)
*
* @param {string} str - Input string possibly containing ANSI codes
* @returns {string} Cleaned string with all escape sequences removed
*/
function stripAnsi(str) {
if (typeof str !== 'string') return '';
// eslint-disable-next-line no-control-regex
return str.replace(/\x1b(?:\[[0-9;?]*[A-Za-z]|\][^\x07\x1b]*(?:\x07|\x1b\\)|\([A-Z]|[A-Z])/g, '');
}
/**
* Search for pattern in file and return matching lines with line numbers
*/
@@ -530,6 +548,9 @@ module.exports = {
countInFile,
grepFile,
// String sanitisation
stripAnsi,
// Hook I/O
readStdinJson,
log,

View File

@@ -0,0 +1,264 @@
---
name: rules-distill
description: "Scan skills to extract cross-cutting principles and distill them into rules — append, revise, or create new rule files"
origin: ECC
---
# Rules Distill
Scan installed skills, extract cross-cutting principles that appear in multiple skills, and distill them into rules — appending to existing rule files, revising outdated content, or creating new rule files.
Applies the "deterministic collection + LLM judgment" principle: scripts collect facts exhaustively, then an LLM cross-reads the full context and produces verdicts.
## When to Use
- Periodic rules maintenance (monthly or after installing new skills)
- After a skill-stocktake reveals patterns that should be rules
- When rules feel incomplete relative to the skills being used
## How It Works
The rules distillation process follows three phases:
### Phase 1: Inventory (Deterministic Collection)
#### 1a. Collect skill inventory
```bash
bash ~/.claude/skills/rules-distill/scripts/scan-skills.sh
```
#### 1b. Collect rules index
```bash
bash ~/.claude/skills/rules-distill/scripts/scan-rules.sh
```
#### 1c. Present to user
```
Rules Distillation — Phase 1: Inventory
────────────────────────────────────────
Skills: {N} files scanned
Rules: {M} files ({K} headings indexed)
Proceeding to cross-read analysis...
```
### Phase 2: Cross-read, Match & Verdict (LLM Judgment)
Extraction and matching are unified in a single pass. Rule files are small enough (~800 lines total) that their full text can be given to the LLM — no grep pre-filtering is needed.
#### Batching
Group skills into **thematic clusters** based on their descriptions. Analyze each cluster in a subagent with the full rules text.
#### Cross-batch Merge
After all batches complete, merge candidates across batches:
- Deduplicate candidates with the same or overlapping principles
- Re-check the "2+ skills" requirement using evidence from **all** batches combined — a principle found in 1 skill per batch but 2+ skills total is valid
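The merge step above can be sketched as follows. This is a minimal illustration only, assuming each candidate carries a kebab-case `id` and an `evidence` array of skill names (field names are hypothetical and mirror the subagent output format, not a fixed contract):

```javascript
// Merge candidate lists from all batches: deduplicate by id,
// union evidence, then enforce the 2+ skills requirement globally.
function mergeCandidates(batches) {
  const merged = new Map();
  for (const batch of batches) {
    for (const cand of batch) {
      const existing = merged.get(cand.id);
      if (existing) {
        // Union evidence so cross-batch occurrences count together
        existing.evidence = [...new Set([...existing.evidence, ...cand.evidence])];
      } else {
        merged.set(cand.id, { ...cand, evidence: [...cand.evidence] });
      }
    }
  }
  // A principle found in 1 skill per batch but 2+ skills overall survives
  return [...merged.values()].filter(c => c.evidence.length >= 2);
}
```

For example, two batches that each find `iteration-bounds` in a single skill merge into one candidate with two evidence entries, which then passes the 2+ skills check.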
#### Subagent Prompt
Launch a general-purpose Agent with the following prompt:
````
You are an analyst who cross-reads skills to extract principles that should be promoted to rules.
## Input
- Skills: {full text of skills in this batch}
- Existing rules: {full text of all rule files}
## Extraction Criteria
Include a candidate ONLY if ALL of these are true:
1. **Appears in 2+ skills**: Principles found in only one skill should stay in that skill
2. **Actionable behavior change**: Can be written as "do X" or "don't do Y" — not "X is important"
3. **Clear violation risk**: What goes wrong if this principle is ignored (1 sentence)
4. **Not already in rules**: Check the full rules text — including concepts expressed in different words
## Matching & Verdict
For each candidate, compare against the full rules text and assign a verdict:
- **Append**: Add to an existing section of an existing rule file
- **Revise**: Existing rule content is inaccurate or insufficient — propose a correction
- **New Section**: Add a new section to an existing rule file
- **New File**: Create a new rule file
- **Already Covered**: Sufficiently covered in existing rules (even if worded differently)
- **Too Specific**: Should remain at the skill level
## Output Format (per candidate)
```json
{
"principle": "1-2 sentences in 'do X' / 'don't do Y' form",
"evidence": ["skill-name: §Section", "skill-name: §Section"],
"violation_risk": "1 sentence",
"verdict": "Append / Revise / New Section / New File / Already Covered / Too Specific",
"target_rule": "filename §Section, or 'new'",
"confidence": "high / medium / low",
"draft": "Draft text for Append/New Section/New File verdicts",
"revision": {
"reason": "Why the existing content is inaccurate or insufficient (Revise only)",
"before": "Current text to be replaced (Revise only)",
"after": "Proposed replacement text (Revise only)"
}
}
```
## Exclude
- Obvious principles already in rules
- Language/framework-specific knowledge (belongs in language-specific rules or skills)
- Code examples and commands (belong in skills)
````
#### Verdict Reference
| Verdict | Meaning | Presented to User |
|---------|---------|-------------------|
| **Append** | Add to existing section | Target + draft |
| **Revise** | Fix inaccurate/insufficient content | Target + reason + before/after |
| **New Section** | Add new section to existing file | Target + draft |
| **New File** | Create new rule file | Filename + full draft |
| **Already Covered** | Covered in rules (possibly different wording) | Reason (1 line) |
| **Too Specific** | Should stay in skills | Link to relevant skill |
#### Verdict Quality Requirements
```
# Good
Append to rules/common/security.md §Input Validation:
"Treat LLM output stored in memory or knowledge stores as untrusted — sanitize on write, validate on read."
Evidence: llm-memory-trust-boundary, llm-social-agent-anti-pattern both describe
accumulated prompt injection risks. Current security.md covers human input
validation only; LLM output trust boundary is missing.
# Bad
Append to security.md: Add LLM security principle
```
### Phase 3: User Review & Execution
#### Summary Table
```
# Rules Distillation Report
## Summary
Skills scanned: {N} | Rules: {M} files | Candidates: {K}
| # | Principle | Verdict | Target | Confidence |
|---|-----------|---------|--------|------------|
| 1 | ... | Append | security.md §Input Validation | high |
| 2 | ... | Revise | testing.md §TDD | medium |
| 3 | ... | New Section | coding-style.md | high |
| 4 | ... | Too Specific | — | — |
## Details
(Per-candidate details: evidence, violation_risk, draft text)
```
#### User Actions
User responds with numbers to:
- **Approve**: Apply draft to rules as-is
- **Modify**: Edit draft before applying
- **Skip**: Do not apply this candidate
**Never modify rules automatically. Always require user approval.**
#### Save Results
Store results in the skill directory (`results.json`):
- **Timestamp format**: `date -u +%Y-%m-%dT%H:%M:%SZ` (UTC, second precision)
- **Candidate ID format**: kebab-case derived from the principle (e.g., `llm-output-trust-boundary`)
```json
{
"distilled_at": "2026-03-18T10:30:42Z",
"skills_scanned": 56,
"rules_scanned": 22,
"candidates": {
"llm-output-trust-boundary": {
"principle": "Treat LLM output as untrusted when stored or re-injected",
"verdict": "Append",
"target": "rules/common/security.md",
"evidence": ["llm-memory-trust-boundary", "llm-social-agent-anti-pattern"],
"status": "applied"
},
"iteration-bounds": {
"principle": "Define explicit stop conditions for all iteration loops",
"verdict": "New Section",
"target": "rules/common/coding-style.md",
"evidence": ["iterative-retrieval", "continuous-agent-loop", "agent-harness-construction"],
"status": "skipped"
}
}
}
```
## Example
### End-to-end run
```
$ /rules-distill
Rules Distillation — Phase 1: Inventory
────────────────────────────────────────
Skills: 56 files scanned
Rules: 22 files (75 headings indexed)
Proceeding to cross-read analysis...
[Subagent analysis: Batch 1 (agent/meta skills) ...]
[Subagent analysis: Batch 2 (coding/pattern skills) ...]
[Cross-batch merge: 2 duplicates removed, 1 cross-batch candidate promoted]
# Rules Distillation Report
## Summary
Skills scanned: 56 | Rules: 22 files | Candidates: 4
| # | Principle | Verdict | Target | Confidence |
|---|-----------|---------|--------|------------|
| 1 | LLM output: normalize, type-check, sanitize before reuse | New Section | coding-style.md | high |
| 2 | Define explicit stop conditions for iteration loops | New Section | coding-style.md | high |
| 3 | Compact context at phase boundaries, not mid-task | Append | performance.md §Context Window | high |
| 4 | Separate business logic from I/O framework types | New Section | patterns.md | high |
## Details
### 1. LLM Output Validation
Verdict: New Section in coding-style.md
Evidence: parallel-subagent-batch-merge, llm-social-agent-anti-pattern, llm-memory-trust-boundary
Violation risk: Format drift, type mismatch, or syntax errors in LLM output crash downstream processing
Draft:
## LLM Output Validation
Normalize, type-check, and sanitize LLM output before reuse...
See skill: parallel-subagent-batch-merge, llm-memory-trust-boundary
[... details for candidates 2-4 ...]
Approve, modify, or skip each candidate by number:
> User: Approve 1, 3. Skip 2, 4.
✓ Applied: coding-style.md §LLM Output Validation
✓ Applied: performance.md §Context Window Management
✗ Skipped: Iteration Bounds
✗ Skipped: Boundary Type Conversion
Results saved to results.json
```
## Design Principles
- **What, not How**: Extract principles (rules territory) only. Code examples and commands stay in skills.
- **Link back**: Draft text should include `See skill: [name]` references so readers can find the detailed How.
- **Deterministic collection, LLM judgment**: Scripts guarantee exhaustiveness; the LLM guarantees contextual understanding.
- **Anti-abstraction safeguard**: The 3-layer filter (2+ skills evidence, actionable behavior test, violation risk) prevents overly abstract principles from entering rules.
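As a sketch, the anti-abstraction safeguard can be expressed as a predicate over a candidate. The field names mirror the subagent output format above; the directive-word heuristic is illustrative, not part of the skill's specification:

```javascript
// 3-layer filter: a candidate may enter rules only if all checks pass.
function passesAntiAbstractionFilter(candidate) {
  // Layer 1: 2+ skills evidence
  const hasEvidence = Array.isArray(candidate.evidence) && candidate.evidence.length >= 2;
  // Layer 2: actionable behavior test, a directive ("do X"), not "X is important"
  const isActionable = /\b(do|don't|never|always|avoid|prefer|use|define|treat)\b/i
    .test(candidate.principle || '');
  // Layer 3: a stated violation risk
  const hasViolationRisk = typeof candidate.violation_risk === 'string'
    && candidate.violation_risk.trim().length > 0;
  return hasEvidence && isActionable && hasViolationRisk;
}
```

A candidate like "Treat LLM output as untrusted" with two evidence skills and a stated risk passes; "Security is important" with one evidence skill and no risk fails all three layers.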

View File

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
# scan-rules.sh — enumerate rule files and extract H2 heading index
# Usage: scan-rules.sh [RULES_DIR]
# Output: JSON to stdout
#
# Environment:
# RULES_DISTILL_DIR Override ~/.claude/rules (for testing only)
set -euo pipefail
RULES_DIR="${RULES_DISTILL_DIR:-${1:-$HOME/.claude/rules}}"
if [[ ! -d "$RULES_DIR" ]]; then
jq -n --arg path "$RULES_DIR" '{"error":"rules directory not found","path":$path}' >&2
exit 1
fi
# Collect all .md files (excluding _archived/)
files=()
while IFS= read -r f; do
files+=("$f")
done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print | sort)
total=${#files[@]}
tmpdir=$(mktemp -d)
_rules_cleanup() { rm -rf "$tmpdir"; }
trap _rules_cleanup EXIT
for i in "${!files[@]}"; do
file="${files[$i]}"
rel_path="${file#"$HOME"/}"
rel_path="~/$rel_path"
# Extract H2 headings (## Title) into a JSON array via jq
headings_json=$({ grep -E '^## ' "$file" 2>/dev/null || true; } | sed 's/^## //' | jq -R . | jq -s '.')
# Get line count
line_count=$(wc -l < "$file" | tr -d ' ')
jq -n \
--arg path "$rel_path" \
--arg file "$(basename "$file")" \
--argjson lines "$line_count" \
--argjson headings "$headings_json" \
'{path:$path,file:$file,lines:$lines,headings:$headings}' \
> "$tmpdir/$i.json"
done
if [[ ${#files[@]} -eq 0 ]]; then
jq -n --arg dir "$RULES_DIR" '{rules_dir:$dir,total:0,rules:[]}'
else
jq -n \
--arg dir "$RULES_DIR" \
--argjson total "$total" \
--argjson rules "$(jq -s '.' "$tmpdir"/*.json)" \
'{rules_dir:$dir,total:$total,rules:$rules}'
fi

View File

@@ -0,0 +1,129 @@
#!/usr/bin/env bash
# scan-skills.sh — enumerate skill files, extract frontmatter and UTC mtime
# Usage: scan-skills.sh [CWD_SKILLS_DIR]
# Output: JSON to stdout
#
# When CWD_SKILLS_DIR is omitted, defaults to $PWD/.claude/skills so the
# script always picks up project-level skills without relying on the caller.
#
# Environment:
# RULES_DISTILL_GLOBAL_DIR Override ~/.claude/skills (for testing only;
# do not set in production — intended for bats tests)
# RULES_DISTILL_PROJECT_DIR Override project dir detection (for testing only)
set -euo pipefail
GLOBAL_DIR="${RULES_DISTILL_GLOBAL_DIR:-$HOME/.claude/skills}"
CWD_SKILLS_DIR="${RULES_DISTILL_PROJECT_DIR:-${1:-$PWD/.claude/skills}}"
# Validate CWD_SKILLS_DIR looks like a .claude/skills path (defense-in-depth).
# Only warn when the path exists — a nonexistent path poses no traversal risk.
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" && "$CWD_SKILLS_DIR" != */.claude/skills* ]]; then
echo "Warning: CWD_SKILLS_DIR does not look like a .claude/skills path: $CWD_SKILLS_DIR" >&2
fi
# Extract a frontmatter field (handles both quoted and unquoted single-line values).
# Does NOT support multi-line YAML blocks (| or >) or nested YAML keys.
extract_field() {
local file="$1" field="$2"
awk -v f="$field" '
BEGIN { fm=0 }
/^---$/ { fm++; next }
fm==1 {
n = length(f) + 2
if (substr($0, 1, n) == f ": ") {
val = substr($0, n+1)
gsub(/^"/, "", val)
gsub(/"$/, "", val)
print val
exit
}
}
fm>=2 { exit }
' "$file"
}
# Get file mtime in UTC ISO8601 (portable: GNU and BSD)
get_mtime() {
local file="$1"
local secs
secs=$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file" 2>/dev/null) || return 1
date -u -d "@$secs" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
date -u -r "$secs" +%Y-%m-%dT%H:%M:%SZ
}
# Scan a directory and produce a JSON array of skill objects
scan_dir_to_json() {
local dir="$1"
local tmpdir
tmpdir=$(mktemp -d)
local _scan_tmpdir="$tmpdir"
_scan_cleanup() { rm -rf "$_scan_tmpdir"; }
trap _scan_cleanup RETURN
local i=0
while IFS= read -r file; do
local name desc mtime dp
name=$(extract_field "$file" "name")
desc=$(extract_field "$file" "description")
mtime=$(get_mtime "$file")
dp="${file/#$HOME/~}"
jq -n \
--arg path "$dp" \
--arg name "$name" \
--arg description "$desc" \
--arg mtime "$mtime" \
'{path:$path,name:$name,description:$description,mtime:$mtime}' \
> "$tmpdir/$i.json"
i=$((i+1))
done < <(find "$dir" -name "SKILL.md" -type f 2>/dev/null | sort)
if [[ $i -eq 0 ]]; then
echo "[]"
else
jq -s '.' "$tmpdir"/*.json
fi
}
# --- Main ---
global_found="false"
global_count=0
global_skills="[]"
if [[ -d "$GLOBAL_DIR" ]]; then
global_found="true"
global_skills=$(scan_dir_to_json "$GLOBAL_DIR")
global_count=$(echo "$global_skills" | jq 'length')
fi
project_found="false"
project_path=""
project_count=0
project_skills="[]"
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" ]]; then
project_found="true"
project_path="$CWD_SKILLS_DIR"
project_skills=$(scan_dir_to_json "$CWD_SKILLS_DIR")
project_count=$(echo "$project_skills" | jq 'length')
fi
# Merge global + project skills into one array
all_skills=$(jq -s 'add' <(echo "$global_skills") <(echo "$project_skills"))
jq -n \
--arg global_found "$global_found" \
--argjson global_count "$global_count" \
--arg project_found "$project_found" \
--arg project_path "$project_path" \
--argjson project_count "$project_count" \
--argjson skills "$all_skills" \
'{
scan_summary: {
global: { found: ($global_found == "true"), count: $global_count },
project: { found: ($project_found == "true"), path: $project_path, count: $project_count }
},
skills: $skills
}'

View File

@@ -0,0 +1,294 @@
/**
* Tests for governance event capture hook.
*/
const assert = require('assert');
const {
detectSecrets,
detectApprovalRequired,
detectSensitivePath,
analyzeForGovernanceEvents,
run,
} = require('../../scripts/hooks/governance-capture');
async function test(name, fn) {
try {
await fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
async function runTests() {
console.log('\n=== Testing governance-capture ===\n');
let passed = 0;
let failed = 0;
// ── detectSecrets ──────────────────────────────────────────
if (await test('detectSecrets finds AWS access keys', async () => {
const findings = detectSecrets('my key is AKIAIOSFODNN7EXAMPLE');
assert.ok(findings.length > 0);
assert.ok(findings.some(f => f.name === 'aws_key'));
})) passed += 1; else failed += 1;
if (await test('detectSecrets finds generic secrets', async () => {
const findings = detectSecrets('api_key = "sk-proj-abcdefghij1234567890"');
assert.ok(findings.length > 0);
assert.ok(findings.some(f => f.name === 'generic_secret'));
})) passed += 1; else failed += 1;
if (await test('detectSecrets finds private keys', async () => {
const findings = detectSecrets('-----BEGIN RSA PRIVATE KEY-----\nMIIE...');
assert.ok(findings.length > 0);
assert.ok(findings.some(f => f.name === 'private_key'));
})) passed += 1; else failed += 1;
if (await test('detectSecrets finds GitHub tokens', async () => {
const findings = detectSecrets('token: ghp_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij');
assert.ok(findings.length > 0);
assert.ok(findings.some(f => f.name === 'github_token'));
})) passed += 1; else failed += 1;
if (await test('detectSecrets returns empty array for clean text', async () => {
const findings = detectSecrets('This is a normal log message with no secrets.');
assert.strictEqual(findings.length, 0);
})) passed += 1; else failed += 1;
if (await test('detectSecrets handles null and undefined', async () => {
assert.deepStrictEqual(detectSecrets(null), []);
assert.deepStrictEqual(detectSecrets(undefined), []);
assert.deepStrictEqual(detectSecrets(''), []);
})) passed += 1; else failed += 1;
// ── detectApprovalRequired ─────────────────────────────────
if (await test('detectApprovalRequired flags force push', async () => {
const findings = detectApprovalRequired('git push origin main --force');
assert.ok(findings.length > 0);
})) passed += 1; else failed += 1;
if (await test('detectApprovalRequired flags hard reset', async () => {
const findings = detectApprovalRequired('git reset --hard HEAD~3');
assert.ok(findings.length > 0);
})) passed += 1; else failed += 1;
if (await test('detectApprovalRequired flags rm -rf', async () => {
const findings = detectApprovalRequired('rm -rf /tmp/important');
assert.ok(findings.length > 0);
})) passed += 1; else failed += 1;
if (await test('detectApprovalRequired flags DROP TABLE', async () => {
const findings = detectApprovalRequired('DROP TABLE users');
assert.ok(findings.length > 0);
})) passed += 1; else failed += 1;
if (await test('detectApprovalRequired allows safe commands', async () => {
const findings = detectApprovalRequired('git status');
assert.strictEqual(findings.length, 0);
})) passed += 1; else failed += 1;
if (await test('detectApprovalRequired handles null', async () => {
assert.deepStrictEqual(detectApprovalRequired(null), []);
assert.deepStrictEqual(detectApprovalRequired(''), []);
})) passed += 1; else failed += 1;
// ── detectSensitivePath ────────────────────────────────────
if (await test('detectSensitivePath identifies .env files', async () => {
assert.ok(detectSensitivePath('.env'));
assert.ok(detectSensitivePath('.env.local'));
assert.ok(detectSensitivePath('/project/.env.production'));
})) passed += 1; else failed += 1;
if (await test('detectSensitivePath identifies credential files', async () => {
assert.ok(detectSensitivePath('credentials.json'));
assert.ok(detectSensitivePath('/home/user/.ssh/id_rsa'));
assert.ok(detectSensitivePath('server.key'));
assert.ok(detectSensitivePath('cert.pem'));
})) passed += 1; else failed += 1;
if (await test('detectSensitivePath returns false for normal files', async () => {
assert.ok(!detectSensitivePath('index.js'));
assert.ok(!detectSensitivePath('README.md'));
assert.ok(!detectSensitivePath('package.json'));
})) passed += 1; else failed += 1;
if (await test('detectSensitivePath handles null', async () => {
assert.ok(!detectSensitivePath(null));
assert.ok(!detectSensitivePath(''));
})) passed += 1; else failed += 1;
// ── analyzeForGovernanceEvents ─────────────────────────────
if (await test('analyzeForGovernanceEvents detects secrets in tool input', async () => {
const events = analyzeForGovernanceEvents({
tool_name: 'Write',
tool_input: {
file_path: '/tmp/config.js',
content: 'const key = "AKIAIOSFODNN7EXAMPLE";',
},
});
assert.ok(events.length > 0);
const secretEvent = events.find(e => e.eventType === 'secret_detected');
assert.ok(secretEvent);
assert.strictEqual(secretEvent.payload.severity, 'critical');
})) passed += 1; else failed += 1;
if (await test('analyzeForGovernanceEvents detects approval-required commands', async () => {
const events = analyzeForGovernanceEvents({
tool_name: 'Bash',
tool_input: {
command: 'git push origin main --force',
},
});
assert.ok(events.length > 0);
const approvalEvent = events.find(e => e.eventType === 'approval_requested');
assert.ok(approvalEvent);
assert.strictEqual(approvalEvent.payload.severity, 'high');
})) passed += 1; else failed += 1;
if (await test('analyzeForGovernanceEvents detects sensitive file access', async () => {
const events = analyzeForGovernanceEvents({
tool_name: 'Edit',
tool_input: {
file_path: '/project/.env.production',
old_string: 'DB_URL=old',
new_string: 'DB_URL=new',
},
});
assert.ok(events.length > 0);
const policyEvent = events.find(e => e.eventType === 'policy_violation');
assert.ok(policyEvent);
assert.strictEqual(policyEvent.payload.reason, 'sensitive_file_access');
})) passed += 1; else failed += 1;
if (await test('analyzeForGovernanceEvents detects elevated privilege commands', async () => {
const events = analyzeForGovernanceEvents({
tool_name: 'Bash',
tool_input: { command: 'sudo rm -rf /etc/something' },
}, {
hookPhase: 'post',
});
const securityEvent = events.find(e => e.eventType === 'security_finding');
assert.ok(securityEvent);
assert.strictEqual(securityEvent.payload.reason, 'elevated_privilege_command');
})) passed += 1; else failed += 1;
if (await test('analyzeForGovernanceEvents returns empty for clean inputs', async () => {
const events = analyzeForGovernanceEvents({
tool_name: 'Read',
tool_input: { file_path: '/project/src/index.js' },
});
assert.strictEqual(events.length, 0);
})) passed += 1; else failed += 1;
if (await test('analyzeForGovernanceEvents populates session ID from context', async () => {
const events = analyzeForGovernanceEvents({
tool_name: 'Write',
tool_input: {
file_path: '/project/.env',
content: 'DB_URL=test',
},
}, {
sessionId: 'test-session-123',
});
assert.ok(events.length > 0);
assert.strictEqual(events[0].sessionId, 'test-session-123');
})) passed += 1; else failed += 1;
if (await test('analyzeForGovernanceEvents generates unique event IDs', async () => {
const events1 = analyzeForGovernanceEvents({
tool_name: 'Write',
tool_input: { file_path: '.env', content: '' },
});
const events2 = analyzeForGovernanceEvents({
tool_name: 'Write',
tool_input: { file_path: '.env.local', content: '' },
});
if (events1.length > 0 && events2.length > 0) {
assert.notStrictEqual(events1[0].id, events2[0].id);
}
})) passed += 1; else failed += 1;
// ── run() function ─────────────────────────────────────────
if (await test('run() passes through input when feature flag is off', async () => {
const original = process.env.ECC_GOVERNANCE_CAPTURE;
delete process.env.ECC_GOVERNANCE_CAPTURE;
try {
const input = JSON.stringify({ tool_name: 'Bash', tool_input: { command: 'git push --force' } });
const result = run(input);
assert.strictEqual(result, input);
} finally {
if (original !== undefined) {
process.env.ECC_GOVERNANCE_CAPTURE = original;
}
}
})) passed += 1; else failed += 1;
if (await test('run() passes through input when feature flag is on', async () => {
const original = process.env.ECC_GOVERNANCE_CAPTURE;
process.env.ECC_GOVERNANCE_CAPTURE = '1';
try {
const input = JSON.stringify({ tool_name: 'Read', tool_input: { file_path: 'index.js' } });
const result = run(input);
assert.strictEqual(result, input);
} finally {
if (original !== undefined) {
process.env.ECC_GOVERNANCE_CAPTURE = original;
} else {
delete process.env.ECC_GOVERNANCE_CAPTURE;
}
}
})) passed += 1; else failed += 1;
if (await test('run() handles invalid JSON gracefully', async () => {
const original = process.env.ECC_GOVERNANCE_CAPTURE;
process.env.ECC_GOVERNANCE_CAPTURE = '1';
try {
const result = run('not valid json');
assert.strictEqual(result, 'not valid json');
} finally {
if (original !== undefined) {
process.env.ECC_GOVERNANCE_CAPTURE = original;
} else {
delete process.env.ECC_GOVERNANCE_CAPTURE;
}
}
})) passed += 1; else failed += 1;
if (await test('run() can detect multiple event types in one input', async () => {
// Bash command with force push AND secret in command
const events = analyzeForGovernanceEvents({
tool_name: 'Bash',
tool_input: {
command: 'API_KEY="AKIAIOSFODNN7EXAMPLE" git push --force',
},
});
const eventTypes = events.map(e => e.eventType);
assert.ok(eventTypes.includes('secret_detected'));
assert.ok(eventTypes.includes('approval_requested'));
})) passed += 1; else failed += 1;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();

View File

@@ -0,0 +1,293 @@
/**
* Tests for agent description compression and lazy loading.
*/
const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const {
parseFrontmatter,
extractSummary,
loadAgent,
loadAgents,
compressToCatalog,
compressToSummary,
buildAgentCatalog,
lazyLoadAgent,
} = require('../../scripts/lib/agent-compress');
function createTempDir(prefix) {
return fs.mkdtempSync(path.join(os.tmpdir(), prefix));
}
function cleanupTempDir(dirPath) {
fs.rmSync(dirPath, { recursive: true, force: true });
}
function writeAgent(dir, name, content) {
fs.writeFileSync(path.join(dir, `${name}.md`), content, 'utf8');
}
const SAMPLE_AGENT = `---
name: test-agent
description: A test agent for unit testing purposes.
tools: ["Read", "Grep", "Glob"]
model: sonnet
---
You are a test agent that validates compression logic.
## Your Role
- Run unit tests
- Validate compression output
- Ensure correctness
## Process
### 1. Setup
- Prepare test fixtures
- Load agent files
### 2. Validate
Check the output format and content.
`;
const MINIMAL_AGENT = `---
name: minimal
description: Minimal agent.
tools: ["Read"]
model: haiku
---
Short body.
`;
async function test(name, fn) {
try {
await fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
async function runTests() {
console.log('\n=== Testing agent-compress ===\n');
let passed = 0;
let failed = 0;
if (await test('parseFrontmatter extracts YAML frontmatter and body', async () => {
const { frontmatter, body } = parseFrontmatter(SAMPLE_AGENT);
assert.strictEqual(frontmatter.name, 'test-agent');
assert.strictEqual(frontmatter.description, 'A test agent for unit testing purposes.');
assert.deepStrictEqual(frontmatter.tools, ['Read', 'Grep', 'Glob']);
assert.strictEqual(frontmatter.model, 'sonnet');
assert.ok(body.includes('You are a test agent'));
})) passed += 1; else failed += 1;
if (await test('parseFrontmatter handles content without frontmatter', async () => {
const { frontmatter, body } = parseFrontmatter('Just a plain document.');
assert.deepStrictEqual(frontmatter, {});
assert.strictEqual(body, 'Just a plain document.');
})) passed += 1; else failed += 1;
if (await test('extractSummary returns the first paragraph of the body', async () => {
const { body } = parseFrontmatter(SAMPLE_AGENT);
const summary = extractSummary(body);
assert.ok(summary.includes('test agent'));
assert.ok(summary.includes('compression logic'));
})) passed += 1; else failed += 1;
if (await test('extractSummary returns empty string for empty body', async () => {
assert.strictEqual(extractSummary(''), '');
assert.strictEqual(extractSummary('# Just a heading'), '');
})) passed += 1; else failed += 1;
if (await test('loadAgent reads and parses a single agent file', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
assert.strictEqual(agent.name, 'test-agent');
assert.strictEqual(agent.fileName, 'test-agent');
assert.deepStrictEqual(agent.tools, ['Read', 'Grep', 'Glob']);
assert.strictEqual(agent.model, 'sonnet');
assert.ok(agent.byteSize > 0);
assert.ok(agent.body.includes('You are a test agent'));
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('loadAgents reads all .md files from a directory', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
writeAgent(tmpDir, 'agent-b', MINIMAL_AGENT);
const agents = loadAgents(tmpDir);
assert.strictEqual(agents.length, 2);
assert.strictEqual(agents[0].fileName, 'agent-a');
assert.strictEqual(agents[1].fileName, 'agent-b');
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('loadAgents returns empty array for non-existent directory', async () => {
const agents = loadAgents('/tmp/nonexistent-ecc-dir-12345');
assert.deepStrictEqual(agents, []);
})) passed += 1; else failed += 1;
if (await test('compressToCatalog strips body and keeps only metadata', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
const catalog = compressToCatalog(agent);
assert.strictEqual(catalog.name, 'test-agent');
assert.strictEqual(catalog.description, 'A test agent for unit testing purposes.');
assert.deepStrictEqual(catalog.tools, ['Read', 'Grep', 'Glob']);
assert.strictEqual(catalog.model, 'sonnet');
assert.strictEqual(catalog.body, undefined);
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('compressToSummary includes first paragraph summary', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
const summary = compressToSummary(agent);
assert.strictEqual(summary.name, 'test-agent');
assert.ok(summary.summary.length > 0);
assert.strictEqual(summary.body, undefined);
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('buildAgentCatalog in catalog mode produces minimal output with stats', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
writeAgent(tmpDir, 'agent-b', MINIMAL_AGENT);
const result = buildAgentCatalog(tmpDir, { mode: 'catalog' });
assert.strictEqual(result.agents.length, 2);
assert.strictEqual(result.stats.totalAgents, 2);
assert.strictEqual(result.stats.mode, 'catalog');
assert.ok(result.stats.originalBytes > 0);
assert.ok(result.stats.compressedBytes > 0);
assert.ok(result.stats.compressedBytes < result.stats.originalBytes);
assert.ok(result.stats.compressedTokenEstimate > 0);
// Catalog entries should not have body
for (const agent of result.agents) {
assert.strictEqual(agent.body, undefined);
assert.ok(agent.name);
assert.ok(agent.description);
}
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('buildAgentCatalog in summary mode includes summaries', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
const result = buildAgentCatalog(tmpDir, { mode: 'summary' });
assert.strictEqual(result.agents.length, 1);
assert.ok(result.agents[0].summary);
assert.strictEqual(result.agents[0].body, undefined);
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('buildAgentCatalog in full mode preserves body', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
const result = buildAgentCatalog(tmpDir, { mode: 'full' });
assert.strictEqual(result.agents.length, 1);
assert.ok(result.agents[0].body.includes('You are a test agent'));
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('buildAgentCatalog supports filter function', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
writeAgent(tmpDir, 'agent-b', MINIMAL_AGENT);
const result = buildAgentCatalog(tmpDir, {
mode: 'catalog',
filter: agent => agent.model === 'haiku',
});
assert.strictEqual(result.agents.length, 1);
assert.strictEqual(result.agents[0].name, 'minimal');
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('lazyLoadAgent loads a single agent by name', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
writeAgent(tmpDir, 'other', MINIMAL_AGENT);
const agent = lazyLoadAgent(tmpDir, 'test-agent');
assert.ok(agent);
assert.strictEqual(agent.name, 'test-agent');
assert.ok(agent.body.includes('You are a test agent'));
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('lazyLoadAgent returns null for non-existent agent', async () => {
const tmpDir = createTempDir('ecc-agent-compress-');
try {
const agent = lazyLoadAgent(tmpDir, 'nonexistent');
assert.strictEqual(agent, null);
} finally {
cleanupTempDir(tmpDir);
}
})) passed += 1; else failed += 1;
if (await test('buildAgentCatalog works with real agents directory', async () => {
const agentsDir = path.join(__dirname, '..', '..', 'agents');
if (!fs.existsSync(agentsDir)) {
// Skip if agents dir doesn't exist (shouldn't happen in this repo)
return;
}
const result = buildAgentCatalog(agentsDir, { mode: 'catalog' });
assert.ok(result.agents.length > 0, 'Should find at least one agent');
assert.ok(result.stats.originalBytes > 0);
assert.ok(result.stats.compressedBytes < result.stats.originalBytes,
'Catalog mode should be smaller than full agent files');
})) passed += 1; else failed += 1;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();
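The catalog compression these tests pin down reduces to one idea: drop the large prompt body and keep only the routing metadata. A minimal illustrative sketch — field names are taken from the assertions above, while the actual implementation in `scripts/lib` may differ:

```javascript
// Illustrative only — mirrors the fields the tests above assert on.
function compressToCatalog(agent) {
  // Keep only what an orchestrator needs to pick this agent;
  // the (large) body is dropped entirely.
  return {
    name: agent.name,
    description: agent.description,
    tools: agent.tools,
    model: agent.model,
  };
}

const agent = {
  name: 'test-agent',
  description: 'A test agent.',
  tools: ['Read', 'Grep'],
  model: 'sonnet',
  body: 'You are a test agent...',
};
const entry = compressToCatalog(agent);
console.log(entry.body === undefined); // true
```

This is why `compressedBytes < originalBytes` holds in the stats assertions: the body dominates an agent file's size.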

@@ -0,0 +1,232 @@
/**
* Tests for inspection logic — pattern detection from failures.
*/
const assert = require('assert');
const {
normalizeFailureReason,
groupFailures,
detectPatterns,
generateReport,
suggestAction,
DEFAULT_FAILURE_THRESHOLD,
} = require('../../scripts/lib/inspection');
async function test(name, fn) {
try {
await fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
function makeSkillRun(overrides = {}) {
return {
id: overrides.id || `run-${Math.random().toString(36).slice(2, 8)}`,
skillId: overrides.skillId || 'test-skill',
skillVersion: overrides.skillVersion || '1.0.0',
sessionId: overrides.sessionId || 'session-1',
taskDescription: overrides.taskDescription || 'test task',
outcome: overrides.outcome || 'failure',
failureReason: overrides.failureReason || 'generic error',
tokensUsed: overrides.tokensUsed || 500,
durationMs: overrides.durationMs || 1000,
userFeedback: overrides.userFeedback || null,
createdAt: overrides.createdAt || '2026-03-15T08:00:00.000Z',
};
}
async function runTests() {
console.log('\n=== Testing inspection ===\n');
let passed = 0;
let failed = 0;
if (await test('normalizeFailureReason strips timestamps and UUIDs', async () => {
const normalized = normalizeFailureReason(
'Error at 2026-03-15T08:00:00.000Z for id 550e8400-e29b-41d4-a716-446655440000'
);
assert.ok(!normalized.includes('2026'));
assert.ok(!normalized.includes('550e8400'));
assert.ok(normalized.includes('<timestamp>'));
assert.ok(normalized.includes('<uuid>'));
})) passed += 1; else failed += 1;
if (await test('normalizeFailureReason strips file paths', async () => {
const normalized = normalizeFailureReason('File not found: /usr/local/bin/node');
assert.ok(!normalized.includes('/usr/local'));
assert.ok(normalized.includes('<path>'));
})) passed += 1; else failed += 1;
if (await test('normalizeFailureReason handles null and empty values', async () => {
assert.strictEqual(normalizeFailureReason(null), 'unknown');
assert.strictEqual(normalizeFailureReason(''), 'unknown');
assert.strictEqual(normalizeFailureReason(undefined), 'unknown');
})) passed += 1; else failed += 1;
if (await test('groupFailures groups by skillId and normalized reason', async () => {
const runs = [
makeSkillRun({ id: 'r1', skillId: 'skill-a', failureReason: 'timeout' }),
makeSkillRun({ id: 'r2', skillId: 'skill-a', failureReason: 'timeout' }),
makeSkillRun({ id: 'r3', skillId: 'skill-b', failureReason: 'parse error' }),
makeSkillRun({ id: 'r4', skillId: 'skill-a', outcome: 'success' }), // should be excluded
];
const groups = groupFailures(runs);
assert.strictEqual(groups.size, 2);
const skillAGroup = groups.get('skill-a::timeout');
assert.ok(skillAGroup);
assert.strictEqual(skillAGroup.runs.length, 2);
const skillBGroup = groups.get('skill-b::parse error');
assert.ok(skillBGroup);
assert.strictEqual(skillBGroup.runs.length, 1);
})) passed += 1; else failed += 1;
if (await test('groupFailures handles mixed outcome casing', async () => {
const runs = [
makeSkillRun({ id: 'r1', outcome: 'FAILURE', failureReason: 'timeout' }),
makeSkillRun({ id: 'r2', outcome: 'Failed', failureReason: 'timeout' }),
makeSkillRun({ id: 'r3', outcome: 'error', failureReason: 'timeout' }),
];
const groups = groupFailures(runs);
assert.strictEqual(groups.size, 1);
const group = groups.values().next().value;
assert.strictEqual(group.runs.length, 3);
})) passed += 1; else failed += 1;
if (await test('detectPatterns returns empty array when below threshold', async () => {
const runs = [
makeSkillRun({ id: 'r1', failureReason: 'timeout' }),
makeSkillRun({ id: 'r2', failureReason: 'timeout' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 0);
})) passed += 1; else failed += 1;
if (await test('detectPatterns detects patterns at or above threshold', async () => {
const runs = [
makeSkillRun({ id: 'r1', failureReason: 'timeout', createdAt: '2026-03-15T08:00:00Z' }),
makeSkillRun({ id: 'r2', failureReason: 'timeout', createdAt: '2026-03-15T08:01:00Z' }),
makeSkillRun({ id: 'r3', failureReason: 'timeout', createdAt: '2026-03-15T08:02:00Z' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 1);
assert.strictEqual(patterns[0].count, 3);
assert.strictEqual(patterns[0].skillId, 'test-skill');
assert.strictEqual(patterns[0].normalizedReason, 'timeout');
assert.strictEqual(patterns[0].firstSeen, '2026-03-15T08:00:00Z');
assert.strictEqual(patterns[0].lastSeen, '2026-03-15T08:02:00Z');
assert.strictEqual(patterns[0].runIds.length, 3);
})) passed += 1; else failed += 1;
if (await test('detectPatterns uses default threshold', async () => {
const runs = Array.from({ length: DEFAULT_FAILURE_THRESHOLD }, (_, i) =>
makeSkillRun({ id: `r${i}`, failureReason: 'permission denied' })
);
const patterns = detectPatterns(runs);
assert.strictEqual(patterns.length, 1);
})) passed += 1; else failed += 1;
if (await test('detectPatterns sorts by count descending', async () => {
const runs = [
// 4 timeouts
...Array.from({ length: 4 }, (_, i) =>
makeSkillRun({ id: `t${i}`, skillId: 'skill-a', failureReason: 'timeout' })
),
// 3 parse errors
...Array.from({ length: 3 }, (_, i) =>
makeSkillRun({ id: `p${i}`, skillId: 'skill-b', failureReason: 'parse error' })
),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 2);
assert.strictEqual(patterns[0].count, 4);
assert.strictEqual(patterns[0].skillId, 'skill-a');
assert.strictEqual(patterns[1].count, 3);
assert.strictEqual(patterns[1].skillId, 'skill-b');
})) passed += 1; else failed += 1;
if (await test('detectPatterns groups similar failure reasons with different timestamps', async () => {
const runs = [
makeSkillRun({ id: 'r1', failureReason: 'Error at 2026-03-15T08:00:00Z in /tmp/foo' }),
makeSkillRun({ id: 'r2', failureReason: 'Error at 2026-03-15T09:00:00Z in /tmp/bar' }),
makeSkillRun({ id: 'r3', failureReason: 'Error at 2026-03-15T10:00:00Z in /tmp/baz' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 1);
assert.ok(patterns[0].normalizedReason.includes('<timestamp>'));
assert.ok(patterns[0].normalizedReason.includes('<path>'));
})) passed += 1; else failed += 1;
if (await test('detectPatterns tracks unique session IDs and versions', async () => {
const runs = [
makeSkillRun({ id: 'r1', sessionId: 'sess-1', skillVersion: '1.0.0', failureReason: 'err' }),
makeSkillRun({ id: 'r2', sessionId: 'sess-2', skillVersion: '1.0.0', failureReason: 'err' }),
makeSkillRun({ id: 'r3', sessionId: 'sess-1', skillVersion: '1.1.0', failureReason: 'err' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 1);
assert.deepStrictEqual(patterns[0].sessionIds.sort(), ['sess-1', 'sess-2']);
assert.deepStrictEqual(patterns[0].versions.sort(), ['1.0.0', '1.1.0']);
})) passed += 1; else failed += 1;
if (await test('generateReport returns clean status with no patterns', async () => {
const report = generateReport([]);
assert.strictEqual(report.status, 'clean');
assert.strictEqual(report.patternCount, 0);
assert.ok(report.summary.includes('No recurring'));
assert.ok(report.generatedAt);
})) passed += 1; else failed += 1;
if (await test('generateReport produces structured report from patterns', async () => {
const runs = [
...Array.from({ length: 3 }, (_, i) =>
makeSkillRun({ id: `r${i}`, skillId: 'my-skill', failureReason: 'timeout' })
),
];
const patterns = detectPatterns(runs, { threshold: 3 });
const report = generateReport(patterns, { generatedAt: '2026-03-15T09:00:00Z' });
assert.strictEqual(report.status, 'attention_needed');
assert.strictEqual(report.patternCount, 1);
assert.strictEqual(report.totalFailures, 3);
assert.deepStrictEqual(report.affectedSkills, ['my-skill']);
assert.strictEqual(report.patterns[0].skillId, 'my-skill');
assert.ok(report.patterns[0].suggestedAction);
assert.strictEqual(report.generatedAt, '2026-03-15T09:00:00Z');
})) passed += 1; else failed += 1;
if (await test('suggestAction returns timeout-specific advice', async () => {
const action = suggestAction({ normalizedReason: 'timeout after 30s', versions: ['1.0.0'] });
assert.ok(action.toLowerCase().includes('timeout'));
})) passed += 1; else failed += 1;
if (await test('suggestAction returns permission-specific advice', async () => {
const action = suggestAction({ normalizedReason: 'permission denied', versions: ['1.0.0'] });
assert.ok(action.toLowerCase().includes('permission'));
})) passed += 1; else failed += 1;
if (await test('suggestAction returns version-span advice when multiple versions affected', async () => {
const action = suggestAction({ normalizedReason: 'something broke', versions: ['1.0.0', '1.1.0'] });
assert.ok(action.toLowerCase().includes('version'));
})) passed += 1; else failed += 1;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();
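The normalization these inspection tests rely on can be sketched as three regex passes. The placeholder tokens (`<timestamp>`, `<uuid>`, `<path>`) come straight from the assertions above; the exact patterns in `scripts/lib/inspection.js` are an assumption here:

```javascript
// Illustrative sketch of failure-reason normalization; the real
// implementation may use different or additional patterns.
function normalizeFailureReason(reason) {
  if (!reason) return 'unknown';
  return reason
    // ISO-8601 timestamps, e.g. 2026-03-15T08:00:00.000Z
    .replace(/\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z?/g, '<timestamp>')
    // RFC 4122 UUIDs
    .replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi, '<uuid>')
    // Absolute Unix-style paths
    .replace(/\/[^\s]+/g, '<path>');
}

console.log(normalizeFailureReason(
  'Error at 2026-03-15T08:00:00.000Z for id 550e8400-e29b-41d4-a716-446655440000 in /tmp/foo'
));
// → 'Error at <timestamp> for id <uuid> in <path>'
```

Grouping on the normalized string is what lets three runs that failed at different times, with different paths, collapse into a single recurring pattern.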

@@ -0,0 +1,247 @@
/**
* Tests for scripts/lib/resolve-ecc-root.js
*
* Covers the ECC root resolution fallback chain:
* 1. CLAUDE_PLUGIN_ROOT env var
* 2. Standard install (~/.claude/)
* 3. Plugin cache auto-detection
* 4. Fallback to ~/.claude/
*/
const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const { resolveEccRoot, INLINE_RESOLVE } = require('../../scripts/lib/resolve-ecc-root');
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
function createTempDir() {
return fs.mkdtempSync(path.join(os.tmpdir(), 'ecc-root-test-'));
}
function setupStandardInstall(homeDir) {
const claudeDir = path.join(homeDir, '.claude');
const scriptDir = path.join(claudeDir, 'scripts', 'lib');
fs.mkdirSync(scriptDir, { recursive: true });
fs.writeFileSync(path.join(scriptDir, 'utils.js'), '// stub');
return claudeDir;
}
function setupPluginCache(homeDir, orgName, version) {
const cacheDir = path.join(
homeDir, '.claude', 'plugins', 'cache',
'everything-claude-code', orgName, version
);
const scriptDir = path.join(cacheDir, 'scripts', 'lib');
fs.mkdirSync(scriptDir, { recursive: true });
fs.writeFileSync(path.join(scriptDir, 'utils.js'), '// stub');
return cacheDir;
}
function runTests() {
console.log('\n=== Testing resolve-ecc-root.js ===\n');
let passed = 0;
let failed = 0;
// ─── Env Var Priority ───
if (test('returns CLAUDE_PLUGIN_ROOT when set', () => {
const result = resolveEccRoot({ envRoot: '/custom/plugin/root' });
assert.strictEqual(result, '/custom/plugin/root');
})) passed++; else failed++;
if (test('trims whitespace from CLAUDE_PLUGIN_ROOT', () => {
const result = resolveEccRoot({ envRoot: ' /trimmed/root ' });
assert.strictEqual(result, '/trimmed/root');
})) passed++; else failed++;
if (test('skips empty CLAUDE_PLUGIN_ROOT', () => {
const homeDir = createTempDir();
try {
setupStandardInstall(homeDir);
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('skips whitespace-only CLAUDE_PLUGIN_ROOT', () => {
const homeDir = createTempDir();
try {
setupStandardInstall(homeDir);
const result = resolveEccRoot({ envRoot: ' ', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Standard Install ───
if (test('finds standard install at ~/.claude/', () => {
const homeDir = createTempDir();
try {
setupStandardInstall(homeDir);
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Plugin Cache Auto-Detection ───
if (test('discovers plugin root from cache directory', () => {
const homeDir = createTempDir();
try {
const expected = setupPluginCache(homeDir, 'everything-claude-code', '1.8.0');
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, expected);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('prefers standard install over plugin cache', () => {
const homeDir = createTempDir();
try {
const claudeDir = setupStandardInstall(homeDir);
setupPluginCache(homeDir, 'everything-claude-code', '1.8.0');
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, claudeDir,
'Standard install should take precedence over plugin cache');
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('handles multiple versions in plugin cache', () => {
const homeDir = createTempDir();
try {
setupPluginCache(homeDir, 'everything-claude-code', '1.7.0');
const expected = setupPluginCache(homeDir, 'everything-claude-code', '1.8.0');
const result = resolveEccRoot({ envRoot: '', homeDir });
// Should find one of them (either is valid)
assert.ok(
result === expected ||
result === path.join(homeDir, '.claude', 'plugins', 'cache', 'everything-claude-code', 'everything-claude-code', '1.7.0'),
'Should resolve to a valid plugin cache directory'
);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Fallback ───
if (test('falls back to ~/.claude/ when nothing is found', () => {
const homeDir = createTempDir();
try {
// Create ~/.claude but don't put scripts there
fs.mkdirSync(path.join(homeDir, '.claude'), { recursive: true });
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('falls back gracefully when ~/.claude/ does not exist', () => {
const homeDir = createTempDir();
try {
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Custom Probe ───
if (test('supports custom probe path', () => {
const homeDir = createTempDir();
try {
const claudeDir = path.join(homeDir, '.claude');
fs.mkdirSync(path.join(claudeDir, 'custom'), { recursive: true });
fs.writeFileSync(path.join(claudeDir, 'custom', 'marker.js'), '// probe');
const result = resolveEccRoot({
envRoot: '',
homeDir,
probe: path.join('custom', 'marker.js'),
});
assert.strictEqual(result, claudeDir);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── INLINE_RESOLVE ───
if (test('INLINE_RESOLVE is a non-empty string', () => {
assert.ok(typeof INLINE_RESOLVE === 'string');
assert.ok(INLINE_RESOLVE.length > 50, 'Should be a substantial inline expression');
})) passed++; else failed++;
if (test('INLINE_RESOLVE returns CLAUDE_PLUGIN_ROOT when set', () => {
const { execFileSync } = require('child_process');
const result = execFileSync('node', [
'-e', `console.log(${INLINE_RESOLVE})`,
], {
env: { ...process.env, CLAUDE_PLUGIN_ROOT: '/inline/test/root' },
encoding: 'utf8',
}).trim();
assert.strictEqual(result, '/inline/test/root');
})) passed++; else failed++;
if (test('INLINE_RESOLVE discovers plugin cache when env var is unset', () => {
const homeDir = createTempDir();
try {
const expected = setupPluginCache(homeDir, 'everything-claude-code', '1.9.0');
const { execFileSync } = require('child_process');
const result = execFileSync('node', [
'-e', `console.log(${INLINE_RESOLVE})`,
], {
env: { PATH: process.env.PATH, HOME: homeDir },
encoding: 'utf8',
}).trim();
assert.strictEqual(result, expected);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('INLINE_RESOLVE falls back to ~/.claude/ when nothing found', () => {
const homeDir = createTempDir();
try {
const { execFileSync } = require('child_process');
const result = execFileSync('node', [
'-e', `console.log(${INLINE_RESOLVE})`,
], {
env: { PATH: process.env.PATH, HOME: homeDir },
encoding: 'utf8',
}).trim();
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();

@@ -0,0 +1,718 @@
/**
* Tests for --with / --without selective install flags (issue #470)
*
* Covers:
* - CLI argument parsing for --with and --without
* - Request normalization with include/exclude component IDs
* - Component-to-module expansion via the manifest catalog
* - End-to-end install plans with --with and --without
* - Validation and error handling for unknown component IDs
* - Combined --profile + --with + --without flows
* - Standalone --with without a profile
* - agent: and skill: component families
*/
const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const {
parseInstallArgs,
normalizeInstallRequest,
} = require('../../scripts/lib/install/request');
const {
loadInstallManifests,
listInstallComponents,
resolveInstallPlan,
} = require('../../scripts/lib/install-manifests');
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
function runTests() {
console.log('\n=== Testing --with / --without selective install flags ===\n');
let passed = 0;
let failed = 0;
// ─── CLI Argument Parsing ───
if (test('parses single --with flag', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'core',
'--with', 'lang:typescript',
]);
assert.deepStrictEqual(parsed.includeComponentIds, ['lang:typescript']);
assert.deepStrictEqual(parsed.excludeComponentIds, []);
})) passed++; else failed++;
if (test('parses single --without flag', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'developer',
'--without', 'capability:orchestration',
]);
assert.deepStrictEqual(parsed.excludeComponentIds, ['capability:orchestration']);
assert.deepStrictEqual(parsed.includeComponentIds, []);
})) passed++; else failed++;
if (test('parses multiple --with flags', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--with', 'lang:typescript',
'--with', 'framework:nextjs',
'--with', 'capability:database',
]);
assert.deepStrictEqual(parsed.includeComponentIds, [
'lang:typescript',
'framework:nextjs',
'capability:database',
]);
})) passed++; else failed++;
if (test('parses multiple --without flags', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'full',
'--without', 'capability:media',
'--without', 'capability:social',
]);
assert.deepStrictEqual(parsed.excludeComponentIds, [
'capability:media',
'capability:social',
]);
})) passed++; else failed++;
if (test('parses combined --with and --without flags', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'developer',
'--with', 'lang:typescript',
'--with', 'framework:nextjs',
'--without', 'capability:orchestration',
]);
assert.strictEqual(parsed.profileId, 'developer');
assert.deepStrictEqual(parsed.includeComponentIds, ['lang:typescript', 'framework:nextjs']);
assert.deepStrictEqual(parsed.excludeComponentIds, ['capability:orchestration']);
})) passed++; else failed++;
if (test('ignores empty --with values', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--with', '',
'--with', 'lang:python',
]);
assert.deepStrictEqual(parsed.includeComponentIds, ['lang:python']);
})) passed++; else failed++;
if (test('ignores empty --without values', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'core',
'--without', '',
'--without', 'capability:media',
]);
assert.deepStrictEqual(parsed.excludeComponentIds, ['capability:media']);
})) passed++; else failed++;
// ─── Request Normalization ───
if (test('normalizes --with-only request as manifest mode', () => {
const request = normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: ['lang:typescript'],
excludeComponentIds: [],
languages: [],
});
assert.strictEqual(request.mode, 'manifest');
assert.deepStrictEqual(request.includeComponentIds, ['lang:typescript']);
assert.deepStrictEqual(request.excludeComponentIds, []);
})) passed++; else failed++;
if (test('normalizes --profile + --with + --without as manifest mode', () => {
const request = normalizeInstallRequest({
target: 'cursor',
profileId: 'developer',
moduleIds: [],
includeComponentIds: ['lang:typescript', 'framework:nextjs'],
excludeComponentIds: ['capability:orchestration'],
languages: [],
});
assert.strictEqual(request.mode, 'manifest');
assert.strictEqual(request.profileId, 'developer');
assert.deepStrictEqual(request.includeComponentIds, ['lang:typescript', 'framework:nextjs']);
assert.deepStrictEqual(request.excludeComponentIds, ['capability:orchestration']);
})) passed++; else failed++;
if (test('rejects --with combined with legacy language arguments', () => {
assert.throws(
() => normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: ['lang:typescript'],
excludeComponentIds: [],
languages: ['python'],
}),
/cannot be combined/
);
})) passed++; else failed++;
if (test('rejects --without combined with legacy language arguments', () => {
assert.throws(
() => normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: [],
excludeComponentIds: ['capability:media'],
languages: ['typescript'],
}),
/cannot be combined/
);
})) passed++; else failed++;
if (test('deduplicates repeated --with component IDs', () => {
const request = normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: ['lang:typescript', 'lang:typescript', 'lang:python'],
excludeComponentIds: [],
languages: [],
});
assert.deepStrictEqual(request.includeComponentIds, ['lang:typescript', 'lang:python']);
})) passed++; else failed++;
if (test('deduplicates repeated --without component IDs', () => {
const request = normalizeInstallRequest({
target: 'claude',
profileId: 'full',
moduleIds: [],
includeComponentIds: [],
excludeComponentIds: ['capability:media', 'capability:media', 'capability:social'],
languages: [],
});
assert.deepStrictEqual(request.excludeComponentIds, ['capability:media', 'capability:social']);
})) passed++; else failed++;
// ─── Component Catalog Validation ───
if (test('component catalog includes lang: family entries', () => {
const components = listInstallComponents({ family: 'language' });
assert.ok(components.some(c => c.id === 'lang:typescript'), 'Should have lang:typescript');
assert.ok(components.some(c => c.id === 'lang:python'), 'Should have lang:python');
assert.ok(components.some(c => c.id === 'lang:go'), 'Should have lang:go');
assert.ok(components.some(c => c.id === 'lang:java'), 'Should have lang:java');
})) passed++; else failed++;
if (test('component catalog includes framework: family entries', () => {
const components = listInstallComponents({ family: 'framework' });
assert.ok(components.some(c => c.id === 'framework:react'), 'Should have framework:react');
assert.ok(components.some(c => c.id === 'framework:nextjs'), 'Should have framework:nextjs');
assert.ok(components.some(c => c.id === 'framework:django'), 'Should have framework:django');
assert.ok(components.some(c => c.id === 'framework:springboot'), 'Should have framework:springboot');
})) passed++; else failed++;
if (test('component catalog includes capability: family entries', () => {
const components = listInstallComponents({ family: 'capability' });
assert.ok(components.some(c => c.id === 'capability:database'), 'Should have capability:database');
assert.ok(components.some(c => c.id === 'capability:security'), 'Should have capability:security');
assert.ok(components.some(c => c.id === 'capability:orchestration'), 'Should have capability:orchestration');
})) passed++; else failed++;
if (test('component catalog includes agent: family entries', () => {
const components = listInstallComponents({ family: 'agent' });
assert.ok(components.length > 0, 'Should have at least one agent component');
assert.ok(components.some(c => c.id === 'agent:security-reviewer'), 'Should have agent:security-reviewer');
})) passed++; else failed++;
if (test('component catalog includes skill: family entries', () => {
const components = listInstallComponents({ family: 'skill' });
assert.ok(components.length > 0, 'Should have at least one skill component');
assert.ok(components.some(c => c.id === 'skill:continuous-learning'), 'Should have skill:continuous-learning');
})) passed++; else failed++;
// ─── Install Plan Resolution with --with ───
if (test('--with alone resolves component modules and their dependencies', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['lang:typescript'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('framework-language'),
'Should include the module behind lang:typescript');
assert.ok(plan.selectedModuleIds.includes('rules-core'),
'Should include framework-language dependency rules-core');
assert.ok(plan.selectedModuleIds.includes('platform-configs'),
'Should include framework-language dependency platform-configs');
})) passed++; else failed++;
if (test('--with adds modules on top of a profile', () => {
const plan = resolveInstallPlan({
profileId: 'core',
includeComponentIds: ['capability:security'],
target: 'claude',
});
// core profile modules
assert.ok(plan.selectedModuleIds.includes('rules-core'));
assert.ok(plan.selectedModuleIds.includes('workflow-quality'));
// added by --with
assert.ok(plan.selectedModuleIds.includes('security'),
'Should include security module from --with');
})) passed++; else failed++;
if (test('multiple --with flags union their modules', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['lang:typescript', 'capability:database'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('framework-language'),
'Should include framework-language from lang:typescript');
assert.ok(plan.selectedModuleIds.includes('database'),
'Should include database from capability:database');
})) passed++; else failed++;
// ─── Install Plan Resolution with --without ───
if (test('--without excludes modules from a profile', () => {
const plan = resolveInstallPlan({
profileId: 'developer',
excludeComponentIds: ['capability:orchestration'],
target: 'claude',
});
assert.ok(!plan.selectedModuleIds.includes('orchestration'),
'Should exclude orchestration module');
assert.ok(plan.excludedModuleIds.includes('orchestration'),
'Should report orchestration as excluded');
// rest of developer profile should remain
assert.ok(plan.selectedModuleIds.includes('rules-core'));
assert.ok(plan.selectedModuleIds.includes('framework-language'));
assert.ok(plan.selectedModuleIds.includes('database'));
})) passed++; else failed++;
if (test('multiple --without flags exclude multiple modules', () => {
const plan = resolveInstallPlan({
profileId: 'full',
excludeComponentIds: ['capability:media', 'capability:social', 'capability:supply-chain'],
target: 'claude',
});
assert.ok(!plan.selectedModuleIds.includes('media-generation'));
assert.ok(!plan.selectedModuleIds.includes('social-distribution'));
assert.ok(!plan.selectedModuleIds.includes('supply-chain-domain'));
assert.ok(plan.excludedModuleIds.includes('media-generation'));
assert.ok(plan.excludedModuleIds.includes('social-distribution'));
assert.ok(plan.excludedModuleIds.includes('supply-chain-domain'));
})) passed++; else failed++;
// ─── Combined --with + --without ───
if (test('--with and --without work together on a profile', () => {
const plan = resolveInstallPlan({
profileId: 'developer',
includeComponentIds: ['capability:security'],
excludeComponentIds: ['capability:orchestration'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('security'),
'Should include security from --with');
assert.ok(!plan.selectedModuleIds.includes('orchestration'),
'Should exclude orchestration from --without');
assert.ok(plan.selectedModuleIds.includes('rules-core'),
'Should keep profile base modules');
})) passed++; else failed++;
if (test('--without on a dependency of --with raises an error', () => {
assert.throws(
() => resolveInstallPlan({
includeComponentIds: ['capability:social'],
excludeComponentIds: ['capability:content'],
}),
/depends on excluded module/
);
})) passed++; else failed++;
// ─── Validation Errors ───
if (test('throws for unknown component ID in --with', () => {
assert.throws(
() => resolveInstallPlan({
includeComponentIds: ['lang:brainfuck-plus-plus'],
}),
/Unknown install component/
);
})) passed++; else failed++;
if (test('throws for unknown component ID in --without', () => {
assert.throws(
() => resolveInstallPlan({
profileId: 'core',
excludeComponentIds: ['capability:teleportation'],
}),
/Unknown install component/
);
})) passed++; else failed++;
if (test('throws when all modules are excluded', () => {
assert.throws(
() => resolveInstallPlan({
profileId: 'core',
excludeComponentIds: [
'baseline:rules',
'baseline:agents',
'baseline:commands',
'baseline:hooks',
'baseline:platform',
'baseline:workflow',
],
target: 'claude',
}),
/excludes every requested install module/
);
})) passed++; else failed++;
// ─── Target-Specific Behavior ───
if (test('--with respects target compatibility filtering', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['capability:orchestration'],
target: 'cursor',
});
// orchestration module only supports claude, codex, opencode
assert.ok(!plan.selectedModuleIds.includes('orchestration'),
'Should skip orchestration for cursor target');
assert.ok(plan.skippedModuleIds.includes('orchestration'),
'Should report orchestration as skipped for cursor');
})) passed++; else failed++;
if (test('--without with agent: component excludes the agent module', () => {
const plan = resolveInstallPlan({
profileId: 'core',
excludeComponentIds: ['agent:security-reviewer'],
target: 'claude',
});
// agent:security-reviewer maps to the agents-core module; the core profile
// includes agents-core, so excluding the agent component should remove the module
assert.ok(!plan.selectedModuleIds.includes('agents-core'),
'Should exclude agents-core when agent:security-reviewer is excluded');
assert.ok(plan.excludedModuleIds.includes('agents-core'),
'Should report agents-core as excluded');
})) passed++; else failed++;
if (test('--with agent: component includes the agents-core module', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['agent:security-reviewer'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('agents-core'),
'Should include agents-core module from agent:security-reviewer');
})) passed++; else failed++;
if (test('--with skill: component includes the parent skill module', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['skill:continuous-learning'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('workflow-quality'),
'Should include workflow-quality module from skill:continuous-learning');
})) passed++; else failed++;
// ─── Help Text ───
if (test('help text documents --with and --without flags', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const result = execFileSync('node', [scriptPath, '--help'], {
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
assert.ok(result.includes('--with'), 'Help should mention --with');
assert.ok(result.includes('--without'), 'Help should mention --without');
assert.ok(result.includes('component'), 'Help should describe components');
})) passed++; else failed++;
// ─── End-to-End Dry-Run ───
if (test('end-to-end: --profile developer --with capability:security --without capability:orchestration --dry-run', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-project-'));
try {
const result = execFileSync('node', [
scriptPath,
'--profile', 'developer',
'--with', 'capability:security',
'--without', 'capability:orchestration',
'--dry-run',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
assert.ok(result.includes('Mode: manifest'), 'Should be manifest mode');
assert.ok(result.includes('Profile: developer'), 'Should show developer profile');
assert.ok(result.includes('capability:security'), 'Should show included component');
assert.ok(result.includes('capability:orchestration'), 'Should show excluded component');
assert.ok(result.includes('security'), 'Selected modules should include security');
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: --with lang:python --with agent:security-reviewer --dry-run', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-project-'));
try {
const result = execFileSync('node', [
scriptPath,
'--with', 'lang:python',
'--with', 'agent:security-reviewer',
'--dry-run',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
assert.ok(result.includes('Mode: manifest'), 'Should be manifest mode');
assert.ok(result.includes('lang:python'), 'Should show lang:python as included');
assert.ok(result.includes('agent:security-reviewer'), 'Should show agent:security-reviewer as included');
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: --with with unknown component fails cleanly', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
let exitCode = 0;
let stderr = '';
try {
execFileSync('node', [
scriptPath,
'--with', 'lang:nonexistent-language',
'--dry-run',
], {
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
} catch (error) {
exitCode = error.status || 1;
stderr = error.stderr || '';
}
assert.strictEqual(exitCode, 1, 'Should exit with error code 1');
assert.ok(stderr.includes('Unknown install component'), 'Should report unknown component');
})) passed++; else failed++;
if (test('end-to-end: --without with unknown component fails cleanly', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
let exitCode = 0;
let stderr = '';
try {
execFileSync('node', [
scriptPath,
'--profile', 'core',
'--without', 'capability:nonexistent',
'--dry-run',
], {
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
} catch (error) {
exitCode = error.status || 1;
stderr = error.stderr || '';
}
assert.strictEqual(exitCode, 1, 'Should exit with error code 1');
assert.ok(stderr.includes('Unknown install component'), 'Should report unknown component');
})) passed++; else failed++;
// ─── End-to-End Actual Install ───
if (test('end-to-end: installs --profile core --with capability:security and writes state', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
try {
const result = execFileSync('node', [
scriptPath,
'--profile', 'core',
'--with', 'capability:security',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const claudeRoot = path.join(homeDir, '.claude');
// Security skill should be installed (from --with)
assert.ok(fs.existsSync(path.join(claudeRoot, 'skills', 'security-review', 'SKILL.md')),
'Should install security-review skill from --with');
// Core profile modules should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'rules', 'common', 'coding-style.md')),
'Should install core rules');
// Install state should record include/exclude
const statePath = path.join(claudeRoot, 'ecc', 'install-state.json');
const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
assert.strictEqual(state.request.profile, 'core');
assert.deepStrictEqual(state.request.includeComponents, ['capability:security']);
assert.deepStrictEqual(state.request.excludeComponents, []);
assert.ok(state.resolution.selectedModules.includes('security'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: installs --profile developer --without capability:orchestration and state reflects exclusion', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
try {
execFileSync('node', [
scriptPath,
'--profile', 'developer',
'--without', 'capability:orchestration',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const claudeRoot = path.join(homeDir, '.claude');
// Orchestration skills should NOT be installed (from --without)
assert.ok(!fs.existsSync(path.join(claudeRoot, 'skills', 'dmux-workflows', 'SKILL.md')),
'Should not install orchestration skills');
// Developer profile base modules should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'rules', 'common', 'coding-style.md')),
'Should install core rules');
assert.ok(fs.existsSync(path.join(claudeRoot, 'skills', 'tdd-workflow', 'SKILL.md')),
'Should install workflow skills');
const statePath = path.join(claudeRoot, 'ecc', 'install-state.json');
const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
assert.strictEqual(state.request.profile, 'developer');
assert.deepStrictEqual(state.request.excludeComponents, ['capability:orchestration']);
assert.ok(!state.resolution.selectedModules.includes('orchestration'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: --with alone (no profile) installs just the component modules', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
try {
execFileSync('node', [
scriptPath,
'--with', 'lang:typescript',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const claudeRoot = path.join(homeDir, '.claude');
// framework-language skill (from lang:typescript) should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'skills', 'coding-standards', 'SKILL.md')),
'Should install framework-language skills');
// Its dependencies should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'rules', 'common', 'coding-style.md')),
'Should install dependency rules-core');
const statePath = path.join(claudeRoot, 'ecc', 'install-state.json');
const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
assert.strictEqual(state.request.profile, null);
assert.deepStrictEqual(state.request.includeComponents, ['lang:typescript']);
assert.ok(state.resolution.selectedModules.includes('framework-language'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── JSON output mode ───
if (test('end-to-end: --dry-run --json includes component selections in output', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-project-'));
try {
const output = execFileSync('node', [
scriptPath,
'--profile', 'core',
'--with', 'capability:database',
'--without', 'baseline:hooks',
'--dry-run',
'--json',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const json = JSON.parse(output);
assert.strictEqual(json.dryRun, true);
assert.ok(json.plan, 'Should include plan object');
assert.ok(
json.plan.includedComponentIds.includes('capability:database'),
'JSON output should include capability:database in included components'
);
assert.ok(
json.plan.excludedComponentIds.includes('baseline:hooks'),
'JSON output should include baseline:hooks in excluded components'
);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();


@@ -2424,6 +2424,65 @@ function runTests() {
}
})) passed++; else failed++;
// ─── stripAnsi ───
console.log('\nstripAnsi:');
if (test('strips SGR color codes (\\x1b[...m)', () => {
assert.strictEqual(utils.stripAnsi('\x1b[31mRed text\x1b[0m'), 'Red text');
assert.strictEqual(utils.stripAnsi('\x1b[1;36mBold cyan\x1b[0m'), 'Bold cyan');
})) passed++; else failed++;
if (test('strips cursor movement sequences (\\x1b[H, \\x1b[2J, \\x1b[3J)', () => {
// These are the exact sequences reported in issue #642
assert.strictEqual(utils.stripAnsi('\x1b[H\x1b[2J\x1b[3JHello'), 'Hello');
assert.strictEqual(utils.stripAnsi('before\x1b[Hafter'), 'beforeafter');
})) passed++; else failed++;
if (test('strips cursor position sequences (\\x1b[row;colH)', () => {
assert.strictEqual(utils.stripAnsi('\x1b[5;10Hplaced'), 'placed');
})) passed++; else failed++;
if (test('strips erase line sequences (\\x1b[K, \\x1b[2K)', () => {
assert.strictEqual(utils.stripAnsi('line\x1b[Kend'), 'lineend');
assert.strictEqual(utils.stripAnsi('line\x1b[2Kend'), 'lineend');
})) passed++; else failed++;
if (test('strips OSC sequences (window title, hyperlinks)', () => {
// OSC terminated by BEL (\x07)
assert.strictEqual(utils.stripAnsi('\x1b]0;My Title\x07content'), 'content');
// OSC terminated by ST (\x1b\\)
assert.strictEqual(utils.stripAnsi('\x1b]8;;https://example.com\x1b\\link\x1b]8;;\x1b\\'), 'link');
})) passed++; else failed++;
if (test('strips charset selection (\\x1b(B)', () => {
assert.strictEqual(utils.stripAnsi('\x1b(Bnormal'), 'normal');
})) passed++; else failed++;
if (test('strips bare ESC + letter (\\x1bM reverse index)', () => {
assert.strictEqual(utils.stripAnsi('line\x1bMup'), 'lineup');
})) passed++; else failed++;
if (test('handles mixed ANSI sequences in one string', () => {
const input = '\x1b[H\x1b[2J\x1b[1;36mSession\x1b[0m summary\x1b[K';
assert.strictEqual(utils.stripAnsi(input), 'Session summary');
})) passed++; else failed++;
if (test('returns empty string for non-string input', () => {
assert.strictEqual(utils.stripAnsi(null), '');
assert.strictEqual(utils.stripAnsi(undefined), '');
assert.strictEqual(utils.stripAnsi(42), '');
})) passed++; else failed++;
if (test('preserves string with no ANSI codes', () => {
assert.strictEqual(utils.stripAnsi('plain text'), 'plain text');
assert.strictEqual(utils.stripAnsi(''), '');
})) passed++; else failed++;
if (test('handles CSI with question mark parameter (DEC private modes)', () => {
// e.g. \x1b[?25h (show cursor), \x1b[?25l (hide cursor)
assert.strictEqual(utils.stripAnsi('\x1b[?25hvisible\x1b[?25l'), 'visible');
})) passed++; else failed++;
// Summary
console.log('\n=== Test Results ===');
console.log(`Passed: ${passed}`);
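The stripAnsi tests above cover SGR colors, cursor movement, erase sequences, OSC strings, charset selection, and bare ESC-plus-letter forms. One plausible regex-based implementation covering exactly those cases, sketched here as an assumption (the real `utils.stripAnsi` may be structured differently):

```javascript
// Hedged sketch of a stripAnsi covering the sequence families tested above.
// Assumption: this is one plausible shape, not the project's actual implementation.
function stripAnsi(input) {
  if (typeof input !== 'string') return '';
  return input
    // OSC: ESC ] ... terminated by BEL (\x07) or ST (ESC \)
    .replace(/\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)/g, '')
    // CSI: ESC [ params final-byte (covers SGR, cursor moves, erase, DEC ?-modes)
    .replace(/\x1b\[[0-9;?]*[ -/]*[@-~]/g, '')
    // Charset selection: ESC ( X / ESC ) X
    .replace(/\x1b[()][A-Za-z0-9]/g, '')
    // Bare ESC + letter (e.g. ESC M, reverse index)
    .replace(/\x1b[A-Za-z]/g, '');
}

console.log(stripAnsi('\x1b[1;36mSession\x1b[0m summary\x1b[K')); // → Session summary
```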