214 Commits

Author SHA1 Message Date
Affaan Mustafa
db27ba1eb2 chore: update README stats and add Codex platform support
- Stars: 42K+ -> 50K+, forks: 5K+ -> 6K+, contributors: 24 -> 30
- Skills: 43 -> 44 (search-first), commands: 31 -> 32 (learn-eval)
- Add Codex to supported platforms in FAQ
- Add search-first skill and learn-eval command to directory listing
- Update OpenCode feature parity table counts
2026-02-23 06:56:00 -08:00
Affaan Mustafa
3c833d8922 Merge pull request #265 from shimo4228/feat/skills/skill-stocktake
feat(skills): add skill-stocktake skill
2026-02-23 06:55:38 -08:00
Affaan Mustafa
156b89ed30 Merge pull request #268 from pangerlkr/patch-6
feat: add scripts/codemaps/generate.ts — fixes #247
2026-02-23 06:55:35 -08:00
Pangerkumzuk Longkumer
41ce1a52e5 feat: add scripts/codemaps/generate.ts codemap generator
Fixes #247. The generate.ts script referenced in agents/doc-updater.md was missing from the repository. This adds the actual implementation. The script:
- Recursively walks the src directory (skipping node_modules, dist, etc.)
- Classifies files into 5 areas: frontend, backend, database, integrations, workers
- Generates docs/CODEMAPS/INDEX.md plus one .md per area
- Uses the codemap format defined in doc-updater.md
- Supports an optional srcDir argument: npx tsx scripts/codemaps/generate.ts [srcDir]
This script scans the current working directory and generates architectural codemap documentation in the specified output directory. It classifies files into areas such as frontend, backend, database, integrations, and workers, and creates markdown files for each area along with an index.
2026-02-22 16:19:16 +05:30
Jongchan
6f94c2e28f fix(search-first): add missing skill name frontmatter (#266)
fix: add missing name frontmatter for search-first skill
2026-02-21 16:04:39 -08:00
Tatsuya Shimomoto
91b7ccf56f feat(skills): add skill-stocktake skill 2026-02-22 04:29:40 +09:00
Affaan Mustafa
7daa830da9 Merge pull request #263 from shimo4228/feat/commands/learn-eval
feat(commands): add learn-eval command
2026-02-20 21:17:57 -08:00
Affaan Mustafa
7e57d1b831 Merge pull request #262 from shimo4228/feat/skills/search-first
feat(skills): add search-first skill
2026-02-20 21:17:54 -08:00
Tatsuya Shimomoto
ff47dace11 feat(commands): add learn-eval command 2026-02-21 12:10:39 +09:00
Tatsuya Shimomoto
c9dc53e862 feat(skills): add search-first skill 2026-02-21 12:10:25 +09:00
Affaan Mustafa
c8f54481b8 chore: update Sonnet model references from 4.5 to 4.6
2026-02-20 10:59:12 -08:00
Affaan Mustafa
294fc4aad8 fix: CI/Test for issue #226 (hook override bug)
Fixed CI/Test for issue #226.
2026-02-20 10:59:10 -08:00
yptse123
81aa8a72c3 chore: update Sonnet model references from 4.5 to 4.6
Update MODEL_SONNET constant and all documentation references
to reflect the new claude-sonnet-4-6 model version.
2026-02-20 18:08:10 +08:00
Affaan Mustafa
0e9f613fd1 Revert "feat(ecc): prune plugin 43→12 items, promote 7 rules to .claude/rules/ (#245)"
This reverts commit 1bd68ff534.
2026-02-20 01:11:30 -08:00
park-kyungchan
1bd68ff534 feat(ecc): prune plugin 43→12 items, promote 7 rules to .claude/rules/ (#245)
ECC community plugin pruning: removed 530+ non-essential files
(.cursor/, .opencode/, docs/ja-JP, docs/zh-CN, docs/zh-TW,
language-specific skills/agents/rules). Retained 4 agents,
3 commands, 5 skills. Promoted 13 rule files (8 common + 5
typescript) to .claude/rules/ for CC native loading. Extracted
reusable patterns to EXTRACTED-PATTERNS.md.
2026-02-19 22:34:51 -08:00
Affaan Mustafa
24047351c2 Merge pull request #251 from gangqian68/main
docs: add CLAUDE.md for Claude Code guidance
2026-02-19 04:56:49 -08:00
qian gang
66959c1dca docs: add CLAUDE.md for Claude Code guidance
Add project-level CLAUDE.md with test commands, architecture
overview, key commands, and contribution guidelines.
2026-02-19 16:50:08 +08:00
Pangerkumzuk Longkumer
5a0f6e9e1e Merge pull request #12 from pangerlkr/copilot/fix-ci-test-failures-again
Fix Windows CI test failures - platform-specific test adjustments
2026-02-19 06:43:40 +05:30
copilot-swe-agent[bot]
cf61ef7539 Fix Windows CI test failures - add platform checks and USERPROFILE support
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 15:10:32 +00:00
copilot-swe-agent[bot]
07e23e3e64 Initial plan 2026-02-18 15:00:14 +00:00
Pangerkumzuk Longkumer
8fc49ba0e8 Merge pull request #11 from pangerlkr/copilot/fix-ci-test-failures
Fix session-manager tests failing in CI due to missing test isolation
2026-02-18 20:29:40 +05:30
copilot-swe-agent[bot]
b90448aef6 Clarify cleanup comment scope in session-manager tests
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 14:05:46 +00:00
copilot-swe-agent[bot]
caab908be8 Fix session-manager test environment for Rounds 95-98
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 14:02:11 +00:00
copilot-swe-agent[bot]
7021d1f6cf Initial plan 2026-02-18 13:51:40 +00:00
Pangerkumzuk Longkumer
3ad211b01b Merge pull request #10 from pangerlkr/copilot/fix-matrix-test-failures
Fix platform-specific hook blocking tests for CI matrix
2026-02-18 19:21:21 +05:30
copilot-swe-agent[bot]
f61c9b0caf Fix integration/hooks tests to handle Windows platform differences
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 13:42:05 +00:00
copilot-swe-agent[bot]
b682ac7d79 Initial plan 2026-02-18 13:36:31 +00:00
Pangerkumzuk Longkumer
e1fca6e84d Merge branch 'affaan-m:main' into main 2026-02-18 18:33:04 +05:30
Pangerkumzuk Longkumer
07530ace5f Merge pull request #9 from pangerlkr/claude/fix-agentshield-security-scan
Fix test failures and remove broken AgentShield workflow
2026-02-18 13:42:01 +05:30
anthropic-code-agent[bot]
00464b6f60 Fix failing workflows: trim action in getCommandPattern and remove broken AgentShield scan
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 08:06:25 +00:00
anthropic-code-agent[bot]
0c78a7c779 Initial plan 2026-02-18 08:00:23 +00:00
Pangerkumzuk Longkumer
fca997001e Merge pull request #7 from pangerlkr/copilot/fix-workflow-failures
Fix stdin size limit enforcement in hook scripts
2026-02-18 13:16:01 +05:30
copilot-swe-agent[bot]
1eca3c9130 Fix stdin overflow bug in hook scripts - truncate chunks to stay within MAX_STDIN limit
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 07:40:12 +00:00
copilot-swe-agent[bot]
defcdc356e Initial plan 2026-02-18 07:28:13 +00:00
Pangerkumzuk Longkumer
b548ce47c9 Merge pull request #6 from pangerlkr/copilot/fix-workflow-actions
Fix copilot-setup-steps.yml YAML structure and address review feedback
2026-02-18 12:56:29 +05:30
copilot-swe-agent[bot]
90e6a8c63b Fix copilot-setup-steps.yml and address PR review comments
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 07:22:05 +00:00
copilot-swe-agent[bot]
c68f7efcdc Initial plan 2026-02-18 07:16:12 +00:00
Pangerkumzuk Longkumer
aa805d5240 Merge pull request #5 from pangerlkr/claude/fix-workflow-actions
Fix ESLint errors in test files and package manager
2026-02-18 12:42:38 +05:30
anthropic-code-agent[bot]
c5ca3c698c Fix ESLint errors in test files and package-manager.js
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 07:04:29 +00:00
anthropic-code-agent[bot]
7e928572c7 Initial plan 2026-02-18 06:58:35 +00:00
Pangerkumzuk Longkumer
0bf47bbb41 fix: update package manager tests and add summary
Update print statement from 'Hello' to 'Goodbye'
2026-02-18 07:29:16 +05:30
Pangerkumzuk Longkumer
2ad888ca82 Refactor console log formatting in tests 2026-02-18 07:21:58 +05:30
Pangerkumzuk Longkumer
8966282e48 fix: add comments to empty catch blocks (no-empty ESLint) 2026-02-18 07:18:40 +05:30
Pangerkumzuk Longkumer
3d97985559 fix: remove unused execFileSync import (no-unused-vars ESLint) 2026-02-18 07:15:53 +05:30
Pangerkumzuk Longkumer
d54124afad fix: remove useless escape characters in regex patterns (no-useless-escape ESLint) 2026-02-18 07:14:32 +05:30
Affaan Mustafa
0b11849f1e chore: update skill count from 37 to 43, add 5 new skills to directory listing
New community skills: content-hash-cache-pattern, cost-aware-llm-pipeline,
regex-vs-llm-structured-text, swift-actor-persistence, swift-protocol-di-testing
2026-02-16 20:04:57 -08:00
Affaan Mustafa
2c26d2d67c fix: add missing process.exit(0) to early return in post-edit-console-warn hook 2026-02-16 20:03:12 -08:00
Pangerkumzuk Longkumer
fdda6cbcd9 Merge branch 'main' into main 2026-02-17 07:00:12 +05:30
Affaan Mustafa
5cb9c1c2a5 Merge pull request #223 from shimo4228/feat/skills/regex-vs-llm-structured-text
feat(skills): add regex-vs-llm-structured-text skill
2026-02-16 14:19:02 -08:00
Affaan Mustafa
595127954f Merge pull request #222 from shimo4228/feat/skills/content-hash-cache-pattern
feat(skills): add content-hash-cache-pattern skill
2026-02-16 14:18:59 -08:00
Affaan Mustafa
bb084229aa Merge pull request #221 from shimo4228/feat/skills/swift-actor-persistence
feat(skills): add swift-actor-persistence skill
2026-02-16 14:18:57 -08:00
Affaan Mustafa
849bb3b425 Merge pull request #220 from shimo4228/feat/skills/swift-protocol-di-testing
feat(skills): add swift-protocol-di-testing skill
2026-02-16 14:18:55 -08:00
Affaan Mustafa
4db215f60d Merge pull request #219 from shimo4228/feat/skills/cost-aware-llm-pipeline
feat(skills): add cost-aware-llm-pipeline skill
2026-02-16 14:18:53 -08:00
Affaan Mustafa
bb1486c404 Merge pull request #229 from voidforall/feat/cpp-coding-standards
feat: add cpp coding standards skill
2026-02-16 14:18:35 -08:00
Affaan Mustafa
9339d4c88c Merge pull request #228 from hrygo/fix/observe.sh-timestamp-export
fix: correct TIMESTAMP environment variable syntax in observe.sh
2026-02-16 14:18:13 -08:00
Affaan Mustafa
2497a9b6e5 Merge pull request #241 from sungpeo/mkdir-rules-readme
docs: require default directory creation for initial user-level rules
2026-02-16 14:18:10 -08:00
Affaan Mustafa
e449471ed3 Merge pull request #230 from neal-zhu/fix/whitelist-plan-files-in-doc-hook
fix: whitelist .claude/plans/ in doc file creation hook
2026-02-16 14:18:08 -08:00
Sungpeo Kook
cad8db21b7 docs: add mkdir for rules directory in ja-JP and zh-CN READMEs 2026-02-17 01:48:37 +09:00
Sungpeo Kook
9d9258c7e1 docs: require default directory creation for initial user-level rules 2026-02-17 01:32:56 +09:00
Pangerkumzuk Longkumer
08ee723e85 Merge pull request #3 from pangerlkr/copilot/fix-markdownlint-errors-again
Fix markdownlint errors: MD038, MD058, MD025, MD034
2026-02-16 19:23:01 +05:30
Pangerkumzuk Longkumer
f11347a708 Merge pull request #4 from pangerlkr/copilot/fix-markdownlint-errors-another-one
Fix markdownlint errors (MD038, MD058, MD025, MD034)
2026-02-16 19:20:56 +05:30
copilot-swe-agent[bot]
586637f94c Revert unrelated package-lock.json changes
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 03:01:15 +00:00
copilot-swe-agent[bot]
2b6ff6b55e Initial plan for markdownlint error fixes
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 02:58:12 +00:00
copilot-swe-agent[bot]
2be6e09501 Initial plan 2026-02-16 02:55:40 +00:00
copilot-swe-agent[bot]
b1d47b22ea Initial plan 2026-02-16 02:37:38 +00:00
Pangerkumzuk Longkumer
9dd4f4409b Merge pull request #2 from pangerlkr/copilot/fix-markdownlint-errors
Fix markdownlint CI failures (MD038, MD058, MD025, MD034)
2026-02-16 07:58:23 +05:30
copilot-swe-agent[bot]
c5de2a7bf7 Remove misleading comments about trailing spaces
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 02:22:49 +00:00
copilot-swe-agent[bot]
af24c617bb Fix all markdownlint errors (MD038, MD058, MD025, MD034)
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 02:22:14 +00:00
copilot-swe-agent[bot]
2ca903d4c5 Initial plan 2026-02-16 02:20:28 +00:00
Pangerkumzuk Longkumer
4d98d9f125 Add Go environment setup step to workflow
Added a step to set up the Go environment in GitHub Actions workflow.
2026-02-16 07:10:39 +05:30
Maohua Zhu
40e80bcc61 fix: whitelist .claude/plans/ in doc file creation hook
The PreToolUse Write hook blocks creation of .md files to prevent
unnecessary documentation sprawl. However, it also blocks Claude Code's
built-in plan mode from writing plan files to .claude/plans/*.md,
causing "BLOCKED: Unnecessary documentation file creation" errors
every time a plan is created or updated.

Add .claude/plans/ path to the whitelist so plan files are not blocked.
2026-02-15 08:49:15 +08:00
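A minimal sketch of the kind of whitelist check this fix describes; the function and path-list names are illustrative, not the repository's actual hook code:

```javascript
// Hypothetical whitelist check for a PreToolUse Write hook.
// ALLOWED_MD_PREFIXES and shouldBlockDocFile are illustrative names.
const ALLOWED_MD_PREFIXES = ['.claude/plans/'];

function shouldBlockDocFile(filePath) {
  if (!filePath.endsWith('.md')) return false; // only .md creation is gated
  // Allow plan files (and any other whitelisted locations) through.
  return !ALLOWED_MD_PREFIXES.some((prefix) => filePath.includes(prefix));
}

shouldBlockDocFile('.claude/plans/feature.md'); // false (allowed)
shouldBlockDocFile('notes/random.md');          // true  (blocked)
```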
Lin Yuan
eaf710847f fix syntax in illustrative example 2026-02-14 20:36:36 +00:00
voidforall
b169a2e1dd Update .cursor/skills/cpp-coding-standards/SKILL.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2026-02-14 20:33:59 +00:00
Lin Yuan
8b4aac4e56 feat: add cpp-coding-standards skill to skills/ and update README 2026-02-14 20:26:33 +00:00
Lin Yuan
08f60355d4 feat: add cpp-coding-standards skill based on C++ Core Guidelines
Comprehensive coding standards for modern C++ (C++17/20/23) derived
from isocpp.github.io/CppCoreGuidelines. Covers philosophy, functions,
classes, resource management, expressions, error handling, immutability,
concurrency, templates, standard library, enumerations, naming, and
performance with DO/DON'T examples and a pre-commit checklist.
2026-02-14 20:20:27 +00:00
黄飞虹
1f74889dbf fix: correct TIMESTAMP environment variable syntax in observe.sh
The inline environment variable syntax `TIMESTAMP="$timestamp" echo ...` 
does not work correctly because:
1. The pipe creates a subshell that doesn't inherit the variable
2. The environment variable is set for echo, not for the piped python

Fixed by using `export` and separating the commands:
- export TIMESTAMP="$timestamp"
- echo "$INPUT_JSON" | python3 -c "..."

This ensures the TIMESTAMP variable is available to the python subprocess.

Fixes #227

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 21:25:50 +08:00
Tatsuya Shimomoto
82d751556c feat(skills): add regex-vs-llm-structured-text skill
Decision framework and hybrid pipeline for choosing between regex
and LLM when parsing structured text.
2026-02-14 12:33:03 +09:00
Tatsuya Shimomoto
3847cc0e0d feat(skills): add content-hash-cache-pattern skill
SHA-256 content-hash based file caching with service layer
separation for expensive processing pipelines.
2026-02-14 12:31:30 +09:00
Tatsuya Shimomoto
94eaaad238 feat(skills): add swift-actor-persistence skill
Thread-safe data persistence patterns using Swift actors with
in-memory cache and file-backed storage.
2026-02-14 12:30:22 +09:00
Tatsuya Shimomoto
ab5be936e9 feat(skills): add swift-protocol-di-testing skill
Protocol-based dependency injection patterns for testable Swift code
with Swift Testing framework examples.
2026-02-14 12:18:48 +09:00
Tatsuya Shimomoto
219bd1ff88 feat(skills): add cost-aware-llm-pipeline skill
Cost optimization patterns for LLM API usage combining model routing,
budget tracking, retry logic, and prompt caching.
2026-02-14 12:16:05 +09:00
Affaan Mustafa
4ff6831b2b Delete llms.txt
Not necessary and OpenCode-focused; open to a PR for a new one.
Not necessary and OpenCode-focused; open to a PR for a new one.
2026-02-13 18:47:48 -08:00
Affaan Mustafa
182e9e78b9 test: add 3 edge-case tests for readFile binary, output() NaN/Infinity, loadAliases __proto__ safety
Round 125: Tests for readFile returning garbled strings (not null) on binary
files, output() handling undefined/NaN/Infinity as non-objects logged directly
(and JSON.stringify converting NaN/Infinity to null in objects), and loadAliases
with __proto__ key in JSON proving no prototype pollution occurs.
Total: 935 tests, all passing.
2026-02-13 18:44:07 -08:00
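The two JavaScript behaviors this round's tests pin down are language semantics and can be reproduced standalone (this is not the repo's helper code):

```javascript
// JSON.stringify serializes NaN and Infinity inside objects as null:
JSON.stringify({ a: NaN, b: Infinity, c: -Infinity }); // '{"a":null,"b":null,"c":null}'

// JSON.parse creates "__proto__" as an ordinary own property, so parsing
// attacker-controlled JSON does not pollute Object.prototype:
const parsed = JSON.parse('{"__proto__": {"polluted": true}}');
({}).polluted;                                      // undefined (prototype untouched)
Object.getPrototypeOf(parsed) === Object.prototype; // true
```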
Affaan Mustafa
0250de793a test: add 3 edge-case tests for findFiles dotfiles, getAllSessions date format, parseSessionMetadata title regex
Round 124: Tests for findFiles matching dotfiles (unlike shell glob where *
excludes hidden files), getAllSessions strict date equality filter (wrong format
silently returns empty), and parseSessionMetadata title regex edge cases
(no space after #, ## heading, multiple H1, greedy \s+ crossing newlines).
Total: 932 tests, all passing.
2026-02-13 18:40:07 -08:00
Affaan Mustafa
88fa1bdbbc test: add 3 edge-case tests for countInFile overlapping, replaceInFile $& tokens, parseSessionMetadata CRLF
Round 123: Tests for countInFile non-overlapping regex match behavior (aaa with
/aa/g returns 1 not 2), replaceInFile with $& and $$ substitution tokens in
replacement strings, and parseSessionMetadata CRLF section boundary bleed where
\n\n fails to match \r\n\r\n. Total: 929 tests, all passing.
2026-02-13 18:36:09 -08:00
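All three behaviors in this round are standard JavaScript string/regex semantics, shown standalone:

```javascript
// match() with a global regex finds non-overlapping matches only:
'aaa'.match(/aa/g); // ['aa'] (length 1, not 2; the scan resumes after each match)

// $& and $$ are special tokens in replacement strings:
'abc'.replace(/b/, '[$&]'); // 'a[b]c' ($& inserts the matched text)
'abc'.replace(/b/, '$$');   // 'a$c'   ($$ inserts a literal dollar sign)

// A \n\n pattern never matches a CRLF blank line, since the bytes are \r\n\r\n:
/\n\n/.test('line1\r\n\r\nline2'); // false
```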
Affaan Mustafa
2753db3a48 test: add 3 edge-case tests for findFiles dot escaping, listAliases limit falsy values, getSessionById old format
Round 122: Tests for findFiles glob dot escaping (*.txt must not match filetxt),
listAliases limit=0/negative/NaN returning all due to JS falsy check, and
getSessionById matching old YYYY-MM-DD-session.tmp filenames via noIdMatch path.
Total: 926 tests, all passing.
2026-02-13 18:30:42 -08:00
Affaan Mustafa
e50b05384a test: add Round 121 tests for findFiles ? glob, setAlias path validation, and time metadata extraction
- findFiles: ? glob pattern matches single character only (converted to . regex)
- setAlias: rejects null, empty, whitespace-only, and non-string sessionPath values
- parseSessionMetadata: Started/Last Updated time extraction — present, missing, loose regex

Total tests: 923
2026-02-13 18:25:56 -08:00
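The glob behaviors exercised in Rounds 121-122 ('?' matching exactly one character, '.' escaped so *.txt does not match filetxt) can be sketched with an illustrative converter; globToRegex is a hypothetical helper, not the repo's findFiles implementation:

```javascript
// Hypothetical glob-to-regex converter: regex metachars (including '.') are
// escaped, then '*' becomes '.*' and '?' becomes '.' (exactly one character).
function globToRegex(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*').replace(/\?/g, '.') + '$');
}

globToRegex('*.txt').test('a.txt');   // true
globToRegex('*.txt').test('filetxt'); // false (the dot is escaped, not a wildcard)
globToRegex('a?.js').test('ab.js');   // true  ('?' matches exactly one character)
globToRegex('a?.js').test('abc.js');  // false
```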
Affaan Mustafa
26f3c88902 test: add Round 120 tests for replaceInFile empty search, setAlias length boundary, and notes extraction
- replaceInFile: empty string search — replace prepends at pos 0, replaceAll inserts between every char
- setAlias: 128-char alias accepted (boundary), 129-char rejected (> 128 check)
- parseSessionMetadata: "Notes for Next Session" extraction — last section, empty, ### boundary, markdown

Total tests: 920
2026-02-13 18:23:55 -08:00
Affaan Mustafa
df2d3a6d54 test: add Round 119 tests for appendFile type safety, renameAlias reserved names, and context extraction
- appendFile: null/undefined/number content throws TypeError (no try/catch like writeFile)
- renameAlias: reserved names rejected for newAlias (parallel check to setAlias, case-insensitive)
- parseSessionMetadata: "Context to Load" code block extraction — missing close, nested blocks, empty

Total tests: 917
2026-02-13 18:21:39 -08:00
Affaan Mustafa
25c5d58c44 test: add Round 118 edge-case tests for writeFile type safety, renameAlias self, and reserved alias names
- writeFile: null/undefined/number content throws TypeError (no try/catch unlike replaceInFile)
- renameAlias: same-name rename returns "already exists" (no self-rename short-circuit)
- setAlias: reserved names (list, help, remove, delete, create, set) rejected case-insensitively

Total tests: 914
2026-02-13 18:19:21 -08:00
Affaan Mustafa
06af1acb8d test: add Round 117 edge-case tests for grepFile CRLF, getSessionSize boundaries, and parseSessionFilename case
- grepFile: CRLF content leaves trailing \r on lines, breaking anchored $ patterns
- getSessionSize: boundary formatting at 0B, 1023B→"1023 B", 1024B→"1.0 KB", non-existent→"0 B"
- parseSessionFilename: [a-z0-9] regex rejects uppercase short IDs (case-sensitive design)

Total tests: 911
2026-02-13 18:16:10 -08:00
Affaan Mustafa
6a0b231d34 test: add Round 116 edge-case tests for replaceInFile null coercion, loadAliases extra fields, and ensureDir null path
- replaceInFile: null/undefined replacement coerced to string "null"/"undefined" by JS String.replace ToString
- loadAliases: extra unknown JSON fields silently preserved through load/save round-trip (loose validation)
- ensureDir: null/undefined path throws wrapped Error (ERR_INVALID_ARG_TYPE → re-thrown)

Total tests: 908
2026-02-13 18:11:58 -08:00
Affaan Mustafa
a563df2a52 test: add edge-case tests for countInFile empty pattern, parseSessionMetadata CRLF, and updateAliasTitle empty string coercion (round 115) 2026-02-13 18:05:28 -08:00
Affaan Mustafa
53e06a8850 test: add edge-case tests for listAliases type coercion, replaceInFile options.all with RegExp, and output BigInt serialization (round 114) 2026-02-13 18:01:25 -08:00
Affaan Mustafa
93633e44f2 test: add 3 tests for century leap years, zero-width regex, and markdown titles (Round 113)
- parseSessionFilename rejects Feb 29 in century non-leap years (1900, 2100) but accepts 2000/2400
- replaceInFile with /(?:)/g zero-width regex inserts at every position boundary
- parseSessionMetadata preserves raw markdown formatting (**bold**, `code`, _italic_) in titles

Total: 899 tests
2026-02-13 17:54:48 -08:00
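Both quirks from Round 113 are plain JavaScript behavior, easy to verify standalone:

```javascript
// Century years are leap years only when divisible by 400, and the Date
// constructor rolls invalid dates over into the next month:
new Date(1900, 1, 29).getDate(); // 1  (Feb 29, 1900 rolls over to Mar 1)
new Date(2000, 1, 29).getDate(); // 29 (2000 is a leap year)

// A zero-width global regex matches at every position boundary:
'ab'.replace(/(?:)/g, '-'); // '-a-b-'
```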
Affaan Mustafa
791da32c6b test: add 3 tests for Unicode alias rejection, newline-in-path heuristic, and read-only append (Round 112)
- resolveAlias rejects Unicode characters (accented, CJK, emoji, Cyrillic homoglyphs)
- getSessionStats treats absolute .tmp paths with embedded newlines as content, not file paths
- appendSessionContent returns false on EACCES for read-only files

Total: 896 tests
2026-02-13 17:47:50 -08:00
Affaan Mustafa
635eb108ab test: add 3 tests for nested backtick context truncation, newline args injection, alias 128-char boundary
Round 111: Tests for parseSessionMetadata context regex truncation at
nested triple backticks (lazy [\s\S]*? stops early), getExecCommand
accepting newline/tab/CR in args via \s in SAFE_ARGS_REGEX, and setAlias
accepting exactly 128-character alias (off-by-one boundary). 893 tests total.
2026-02-13 17:41:58 -08:00
Affaan Mustafa
1e740724ca test: add 3 tests for findFiles root-unreadable, parseSessionFilename year 0000, uppercase ID rejection
Round 110: Tests for findFiles with unreadable root directory returning
empty array (vs Round 71 which tested subdirectory), parseSessionFilename
year 0000 exposing JS Date 0-99→1900-1999 mapping quirk, and uppercase
session ID rejection by [a-z0-9]{8,} regex. 890 tests total.
2026-02-13 17:30:38 -08:00
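The year-0000 quirk is the Date constructor's two-digit-year rule, reproducible standalone:

```javascript
// new Date(year, ...) maps year arguments 0-99 to 1900-1999:
new Date(0, 0, 1).getFullYear();   // 1900 (not year 0)
new Date(99, 0, 1).getFullYear();  // 1999
new Date(100, 0, 1).getFullYear(); // 100  (the mapping applies only to 0-99)
```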
Affaan Mustafa
6737f3245b test: add 3 tests for appendFile new-file creation, getExecCommand traversal, getAllSessions non-session skip
Round 109:
- appendFile creating new file in non-existent directory (ensureDir + appendFileSync)
- getExecCommand with ../ path traversal in binary (SAFE_NAME_REGEX allows ../)
- getAllSessions skips .tmp files that don't match session filename format
2026-02-13 17:24:36 -08:00
Affaan Mustafa
1b273de13f test: add 3 tests for grepFile Unicode, SAFE_NAME_REGEX traversal, getSessionSize boundary
Round 108:
- grepFile with Unicode/emoji content (UTF-16 string matching on split lines)
- getRunCommand accepts ../ path traversal via SAFE_NAME_REGEX (allows / and . individually)
- getSessionSize exact 1024-byte B→KB boundary and 1MB KB→MB boundary
2026-02-13 17:18:06 -08:00
Affaan Mustafa
882157ac09 test: add 3 tests for Round 107 (881 total)
- grepFile with ^$ pattern verifies empty line matching including trailing newline phantom
- replaceInFile with self-reintroducing replacement confirms single-pass behavior
- setAlias with whitespace-only title exposes missing trim validation vs sessionPath
2026-02-13 17:11:32 -08:00
Affaan Mustafa
69799f2f80 test: add 3 tests for Round 106 (878 total)
- countInFile with named capture groups verifies match(g) ignores group details
- grepFile with multiline (m) flag confirms flag is preserved unlike stripped g
- getAllSessions with array/object limit tests Number() coercion edge cases
2026-02-13 17:07:13 -08:00
Affaan Mustafa
b27c21732f test: add 3 edge-case tests for regex boundary, sticky flag, and type bypass (Round 105)
- parseSessionMetadata: blank line within Completed section truncates items
  due to regex lookahead (?=###|\n\n|$) stopping at \n\n boundary
- grepFile: sticky (y) flag not stripped like g flag, causing stateful
  .test() behavior that misses matching lines
- getExecCommand: object args bypass SAFE_ARGS_REGEX (typeof !== 'string')
  but coerce to "[object Object]" in command string
2026-02-13 16:59:56 -08:00
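The sticky-flag pitfall is standard RegExp state, shown standalone:

```javascript
const re = /foo/y; // sticky: the match must start exactly at re.lastIndex
re.test('foo');    // true, and lastIndex advances to 3
re.test('foo');    // false: no match starting at index 3
re.lastIndex;      // 0 (reset after the failed match)
```

Reusing one sticky regex across repeated `.test()` calls therefore alternates hits and misses, which is exactly how a line-matching loop can skip matching lines.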
Affaan Mustafa
332d0f444b test: add Round 104 edge-case tests (detectFromLockFile null, resolveSessionAlias traversal, whitespace notes)
- detectFromLockFile(null): throws TypeError — no input validation before
  path.join (package-manager.js:95)
- resolveSessionAlias('../etc/passwd'): returns path-traversal input unchanged
  when alias lookup fails, documenting the passthrough behavior
- parseSessionMetadata with whitespace-only notes: trim() → "" → hasNotes=false,
  whitespace-only notes treated as absent

Total tests: 872 (all passing)
2026-02-13 16:45:47 -08:00
Affaan Mustafa
45a0b62fcb test: add Round 103 edge-case tests (countInFile bool, grepFile numeric, loadAliases array)
- countInFile(file, false): boolean falls to else-return-0 type guard (utils.js:443)
- grepFile(file, 0): numeric pattern implicitly coerced via RegExp constructor,
  contrasting with countInFile which explicitly rejects non-string non-RegExp
- loadAliases with array aliases: typeof [] === 'object' bypasses validation
  at session-aliases.js:58, returning array instead of plain object

Total tests: 869 (all passing)
2026-02-13 16:08:47 -08:00
Affaan Mustafa
a64a294b29 test: add 3 edge-case tests for looksLikePath heuristic, falsy title coercion, and checkbox regex (Round 102)
- getSessionStats with Unix nonexistent .tmp path triggers looksLikePath
  heuristic → readFile returns null → zeroed stats via null content path
- setAlias with title=0 silently converts to null (0 || null === null)
- parseSessionMetadata skips [x] checked items in In Progress section
  (regex only matches unchecked [ ] checkboxes)

Total tests: 866
2026-02-13 16:02:18 -08:00
Affaan Mustafa
4d016babbb test: round 101 — output() circular crash, getSessionStats type confusion, appendSessionContent null
- output() throws TypeError on circular reference object (JSON.stringify has no try/catch)
- getSessionStats(123) throws TypeError (number reaches parseSessionMetadata, .match() fails)
- appendSessionContent(null) returns false (TypeError caught by try/catch)

Total tests: 863
2026-02-13 15:54:02 -08:00
Affaan Mustafa
d2c1281e97 test: round 100 — findFiles maxAge+recursive interaction, parseSessionMetadata ### truncation, cleanupAliases falsy coercion
- findFiles with both maxAge AND recursive combined (option interaction test)
- parseSessionMetadata truncates item text at embedded ### due to lazy regex
- cleanupAliases callback returning 0 (falsy non-boolean) removes alias via !0 coercion

Total tests: 860
2026-02-13 15:49:06 -08:00
Affaan Mustafa
78ad952433 test: add 3 tests for no-match rewrite, CR-only grepFile, and null write (R99)
- replaceInFile returns true even when pattern doesn't match (silent rewrite)
- grepFile treats CR-only (\r) file as single line (splits on \n only)
- writeSessionContent(null) returns false (TypeError caught by try/catch)
2026-02-13 15:41:15 -08:00
Affaan Mustafa
274cca025e test: add 3 tests for null-input crashes and negative maxAge boundary (R98)
- getSessionById(null) throws TypeError at line 297 (null.length)
- parseSessionFilename(null) throws TypeError at line 30 (null.match())
- findFiles with maxAge: -1 deterministically excludes all files
2026-02-13 15:35:18 -08:00
Affaan Mustafa
18fcb88168 test: add 3 tests for whitespace ID, lastIndex reuse, and whitespace search (Round 97) 2026-02-13 15:28:06 -08:00
Affaan Mustafa
8604583d16 test: add 3 tests for session-manager edge cases (Round 96)
- parseSessionFilename rejects Feb 30 (Date rollover check)
- getAllSessions with limit: Infinity bypasses pagination
- getAllSessions with limit: null demonstrates destructuring default bypass (null !== undefined)

Total: 848 tests, all passing
2026-02-13 15:13:55 -08:00
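The destructuring-default bypass is plain JavaScript, sketched here with a hypothetical signature modeled on the description above (not the actual getAllSessions code):

```javascript
// Hypothetical signature shaped like an options parameter with a limit default:
function listWithLimit({ limit = 10 } = {}) {
  return limit;
}

listWithLimit();                    // 10 (the default applies only to undefined)
listWithLimit({ limit: null });     // null (null !== undefined, default bypassed)
listWithLimit({ limit: Infinity }); // Infinity (would bypass slice-based pagination)
```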
Affaan Mustafa
233b341557 test: add 3 tests for alternation regex, double-negative clamping, and self-rename (Round 95) 2026-02-13 14:50:49 -08:00
Affaan Mustafa
a95fb54ee4 test: add 3 tests for scoped pkg detection, empty env var, and tools-without-files (Round 94)
- detectFromPackageJson with scoped package name (@scope/pkg@version)
  returns null because split('@')[0] yields empty string
- getPackageManager skips empty string CLAUDE_PACKAGE_MANAGER via
  falsy short-circuit (distinct from unknown PM name test)
- session-end buildSummarySection includes Tools Used but omits
  Files Modified when transcript has only Read/Grep tools

Total tests: 842
2026-02-13 14:44:40 -08:00
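The scoped-package pitfall comes from splitting a "name@version" string on '@', reproducible standalone:

```javascript
// package.json "packageManager" is conventionally "name@version":
'pnpm@9.0.0'.split('@')[0];       // 'pnpm'
'@scope/pkg@1.2.3'.split('@')[0]; // '' (a leading '@' makes the first element empty)

// A sturdier parse takes everything before the last '@':
const pm = '@scope/pkg@1.2.3';
pm.slice(0, pm.lastIndexOf('@')); // '@scope/pkg'
```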
Affaan Mustafa
910ffa5530 test: add 3 tests for regex boundary and flag logic gaps (round 93)
- getSessionStats: drive letter without slash (Z:nosession.tmp) treated as content
- countInFile: case-insensitive regex with g flag auto-appended (/foo/i → /foo/ig)
- countInFile: case-insensitive regex with g flag preserved (/foo/gi stays /foo/gi)
2026-02-13 14:21:03 -08:00
Affaan Mustafa
b9a38b2680 test: add Round 92 tests for object pattern, UNC path, and empty packageManager
- Test countInFile returns 0 for object pattern type (non-string non-RegExp)
- Test getSessionStats treats Windows UNC path as content (not file path)
- Test detectFromPackageJson returns null for empty string packageManager field

Total tests: 836
2026-02-13 14:05:24 -08:00
Affaan Mustafa
14dfe4d110 test: add Round 91 tests for empty action pattern, whitespace PM, and mixed separators
- Test getCommandPattern('') produces valid regex for empty action string
- Test detectFromPackageJson returns null for whitespace-only packageManager
- Test getSessionStats treats mixed Windows path separators as file path

Total tests: 833
2026-02-13 14:02:41 -08:00
Affaan Mustafa
3e98be3e39 test: add Round 90 tests for readStdinJson timeout and saveAliases double failure
- Test readStdinJson timeout path when stdin never closes (resolves with {})
- Test readStdinJson timeout path with partial invalid JSON (catch resolves with {})
- Test saveAliases backup restore double failure (inner restoreErr catch at line 135)

Total tests: 830
2026-02-13 13:59:03 -08:00
Affaan Mustafa
3ec59c48bc test: add 3 tests for subdirectory skip, TypeScript error detection, and entry.name fallback (Round 89)
- getAllSessions skips subdirectories in sessions dir (!entry.isFile() branch)
- post-edit-typecheck.js error detection path when tsc reports errors (relevantLines > 0)
- extractSessionSummary extracts tools via entry.name + entry.input fallback format
2026-02-13 13:39:16 -08:00
Affaan Mustafa
e70d4d2237 test: add 3 tests for replaceInFile deletion, parseSessionMetadata null fields, countInFile zero matches (Round 88)
- replaceInFile with empty replacement string verifies text deletion works
- parseSessionMetadata asserts date/started/lastUpdated are null when fields absent
- countInFile with valid file but non-matching pattern returns 0

Total: 824 tests
2026-02-13 12:49:53 -08:00
Affaan Mustafa
9b286ab3f8 test: add 3 tests for stdin 1MB overflow and analyzePhase async method (round 87)
- post-edit-format.js: verify MAX_STDIN truncation at 1MB limit
- post-edit-typecheck.js: verify MAX_STDIN truncation at 1MB limit
- skill-create-output.js: test analyzePhase() returns Promise and writes output
2026-02-13 12:42:20 -08:00
Affaan Mustafa
b3e362105d test: add 3 tests for typeof guard, empty package.json, and learned_skills_path override (round 86)
- loadAliases resets to defaults when aliases field is a truthy non-object (string)
- detectFromPackageJson returns null for empty (0-byte) package.json
- evaluate-session uses learned_skills_path config override with ~ expansion
2026-02-13 12:23:34 -08:00
Affaan Mustafa
8cacf0f6a6 fix: use nullish coalescing for confidence default + add 3 tests (round 85)
Fix confidence=0 showing 80% instead of 0% in patterns() (|| → ??).
Test evaluate-session.js config parse error catch, getSessionIdShort
fallback at root CWD, and precise confidence=0 assertion.
2026-02-13 12:11:26 -08:00
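The || vs ?? distinction behind this fix is standard JavaScript:

```javascript
const confidence = 0;
confidence || 80; // 80: || falls back on any falsy value, including 0
confidence ?? 80; // 0:  ?? falls back only on null or undefined
```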
Affaan Mustafa
cedcf9a701 test: add 3 tests for TOCTOU catch paths and NaN date sort fallback (round 84)
- getSessionById returns null for broken symlink (session-manager.js:307-310)
- findFiles skips broken symlinks matching the pattern (utils.js:170-173)
- listAliases sorts entries with invalid/missing dates via getTime() || 0 fallback
2026-02-13 11:35:22 -08:00
Affaan Mustafa
15717d6d04 test: cover whitespace-only frontmatter field, empty SKILL.md, and getAllSessions TOCTOU symlink 2026-02-13 11:20:44 -08:00
Affaan Mustafa
c8b7d41e42 test: cover tool_name OR fallback, Notification/SubagentStop events, and template regex no-match 2026-02-13 11:12:03 -08:00
Affaan Mustafa
9bec3d7625 test: cover suggest-compact upper bound, getSessionStats null input, and non-string content branch 2026-02-13 11:02:46 -08:00
Affaan Mustafa
2573cbb7b0 test: cover session-end message.role path, getExecCommand non-string args, and legacy hooks format
Round 80: Three previously untested conditional branches:
- session-end.js: entry.message?.role === 'user' third OR condition
  (fires when type is not 'user' but message.role is)
- package-manager.js: getExecCommand with truthy non-string args
  (typeof check short-circuits, value still appended via ternary)
- validate-hooks.js: legacy array format parsing path (lines 115-135)
  with 'Hook N' error labels instead of 'EventType[N]'
2026-02-13 10:39:35 -08:00
Affaan Mustafa
9dccdb9068 test: cover countInFile/grepFile string patterns and validate-commands warnings suffix
Round 79 — untested conditional branches in utils.js and validate-commands.js:
- countInFile: exercise typeof pattern === 'string' branch with valid string
- grepFile: exercise string pattern branch (not RegExp)
- validate-commands: verify (N warnings) suffix in output when warnCount > 0
2026-02-13 10:26:58 -08:00
Affaan Mustafa
f000d9b02d test: cover getSessionStats file-path read, hasContent field, and wrapped hooks format
Round 78 — shifted from catch blocks to untested conditional branches:
- getSessionStats: exercise looksLikePath → getSessionContent path (real .tmp file)
- getAllSessions: verify hasContent true/false for non-empty vs empty files
- validate-hooks: test wrapped { hooks: { PreToolUse: [...] } } production format
2026-02-13 10:21:06 -08:00
Affaan Mustafa
27ae5ea299 test: cover evaluate-session/suggest-compact main().catch and validate-hooks JSON parse
- evaluate-session: main().catch when HOME is non-directory (ENOTDIR)
- suggest-compact: main().catch double-failure when TMPDIR is non-directory
- validate-hooks: invalid JSON in hooks.json triggers error exit

Total tests: 831 → 834
2026-02-13 10:03:48 -08:00
Affaan Mustafa
723e69a621 test: cover deleteSession catch, pre-compact and session-end main().catch
- session-manager: deleteSession returns false when dir is read-only (EACCES)
- pre-compact: main().catch handler when HOME is non-directory (ENOTDIR)
- session-end: main().catch handler when HOME is non-directory (ENOTDIR)

Total tests: 828 → 831
2026-02-13 09:59:48 -08:00
Affaan Mustafa
241c35a589 test: cover setGlobal/setProject catch blocks and session-start main().catch
- setup-package-manager: setGlobal catch when HOME is non-directory (ENOTDIR)
- setup-package-manager: setProject catch when CWD is read-only (EACCES)
- session-start: main().catch handler when ensureDir throws (exit 0, don't block)

Total tests: 825 → 828
2026-02-13 09:55:00 -08:00
Affaan Mustafa
0c67e0571e test: cover cleanupAliases save failure, setAlias save failure, and validate-commands statSync catch
Round 73: Add 3 tests for genuine untested code paths:
- session-aliases cleanupAliases returns failure when save blocked after removing aliases
- session-aliases setAlias returns failure when save blocked on new alias creation
- validate-commands silently skips broken symlinks in skill directory scanning
2026-02-13 09:42:25 -08:00
Affaan Mustafa
02d5986049 test: cover setProjectPM save failure, deleteAlias save failure, hooks async/timeout validation
Round 72: Add 4 tests for untested code paths (818 → 822):
- package-manager.js: setProjectPackageManager wraps writeFile errors (lines 275-279)
- session-aliases.js: deleteAlias returns failure when saveAliases fails (line 299)
- validate-hooks.js: rejects non-boolean async field (line 28-31)
- validate-hooks.js: rejects negative timeout value (lines 32-35)
2026-02-13 08:12:27 -08:00
Affaan Mustafa
f623e3b429 test: cover findFiles unreadable subdir, session-start default PM, setPreferredPM save failure
Round 71: Add 3 tests for untested code paths (815 → 818):
- utils.js findFiles: recursive scan silently skips unreadable subdirectories (line 188 catch)
- session-start.js: shows getSelectionPrompt when pm.source is 'default' (lines 69-72)
- package-manager.js: setPreferredPackageManager wraps saveConfig errors (lines 250-254)
2026-02-13 08:01:15 -08:00
Affaan Mustafa
44b5a4f9f0 test: add 3 tests for untested fallback/skip/failure paths (Round 70)
- session-end.js: entry.name/entry.input fallback in direct tool_use entries
- validate-commands.js: "would create:" regex alternation skip line
- session-aliases.js: updateAliasTitle save failure with read-only dir
2026-02-13 07:48:39 -08:00
Affaan Mustafa
567664091d test: add 3 tests for untested code paths (Round 69, 812 total)
- getGitModifiedFiles: all-invalid patterns skip filtering (compiled.length === 0)
- getSessionById: returns null when sessions directory doesn't exist
- getPackageManager: global-config success path returns source 'global-config'
2026-02-13 07:35:20 -08:00
Affaan Mustafa
5031a84d6e test: add 3 tests for setup-pm --project success, demo export, --list marker (Round 68) 2026-02-13 07:23:16 -08:00
Affaan Mustafa
702c3f54b4 test: add 3 tests for session-aliases empty file, null resolve, metadata backfill (Round 67) 2026-02-13 07:18:28 -08:00
Affaan Mustafa
162222a46c test: add 3 tests for session-manager noIdMatch, session-end fallbacks (Round 66)
- session-manager.js: getSessionById with date-only string exercises the
  noIdMatch path for old-format sessions (2026-02-10 → 2026-02-10-session.tmp)
- session-end.js: extract user messages from role-only JSONL format
  ({"role":"user",...} without type field) exercises line 48 fallback
- session-end.js: nonexistent transcript_path triggers "Transcript not found"
  log path (lines 153-155), creates session with blank template

Total: 803 tests, all passing
2026-02-13 07:10:54 -08:00
Affaan Mustafa
485def8582 test: add 3 tests for evaluate-session regex, empty rules/skills dirs (Round 65)
- evaluate-session.js: verify regex whitespace tolerance around colon
  matches "type" : "user" (with spaces), not just compact JSON
- validate-rules.js: empty directory with no .md files yields Validated 0
- validate-skills.js: directory with only files, no subdirectories yields
  Validated 0

Total: 800 tests, all passing
2026-02-13 07:04:55 -08:00
Affaan Mustafa
cba6b44c61 test: add 3 tests for suggest-compact, session-aliases, typecheck (Round 64)
- suggest-compact: 'default' session ID fallback when CLAUDE_SESSION_ID empty
- session-aliases: loadAliases backfills missing version and metadata fields
- post-edit-typecheck: valid JSON without tool_input passes through unchanged

Total: 797 tests, all passing
2026-02-13 06:59:08 -08:00
Affaan Mustafa
1fcdf12b62 test: add 3 CI validator tests for untested code paths (Round 63)
- validate-hooks: object-format matcher missing matcher field (line 97-100)
- validate-commands: readFileSync catch block for unreadable .md files (lines 56-62)
- validate-commands: empty commands directory with no .md files (Validated 0)

Total: 794 tests, all passing
2026-02-13 06:55:30 -08:00
Affaan Mustafa
85a86f6747 test: add --global success, bare PM name, and source label tests (Round 62)
- setup-package-manager.test.js: --global npm writes config and exits 0
- setup-package-manager.test.js: bare PM name sets global preference
- setup-package-manager.test.js: --detect with env var shows source 'environment'

791 tests total, all passing.
2026-02-13 06:49:29 -08:00
Affaan Mustafa
3ec0aa7b50 test: add replaceInFile write failure, empty sessions dir, and corrupted global config tests (Round 61)
- utils.test.js: replaceInFile returns false on read-only file (catch block)
- session-manager.test.js: getAllSessions returns empty when sessions dir missing
- package-manager.test.js: getPackageManager falls through corrupted global config to npm default

788 tests total, all passing.
2026-02-13 06:44:52 -08:00
Affaan Mustafa
9afecedb21 test: add replaceInFile failure, console-warn overflow, and missing tool_input tests (Round 60) 2026-02-13 06:25:35 -08:00
Affaan Mustafa
7db0d316f5 test: add unreadable session file, stdin overflow, and read-only compact tests (Round 59) 2026-02-13 06:19:02 -08:00
Affaan Mustafa
99fc51dda7 test: add unreadable agent file, colonIdx edge case, and command-as-object tests (Round 58) 2026-02-13 06:14:06 -08:00
Affaan Mustafa
2fea46edc7 test: add SKILL.md-as-directory, broken symlink, and adjacent code block tests (Round 57) 2026-02-13 06:02:56 -08:00
Affaan Mustafa
990c08159c test: add tsconfig walk-up, compact fallback, and Windows atomic write tests (Round 56) 2026-02-13 05:59:07 -08:00
Affaan Mustafa
43808ccf78 test: add maxAge boundary, multi-session injection, and stdin overflow tests (Round 55)
- session-start.js excludes sessions older than 7 days (6.9d vs 8d boundary)
- session-start.js injects newest session when multiple recent sessions exist
- session-end.js handles stdin exceeding 1MB MAX_STDIN limit via env var fallback
2026-02-13 05:48:34 -08:00
Affaan Mustafa
3bc0929c6e test: add search scope, path utility, and zero-value analysis tests (Round 54)
- getAllSessions search matches only shortId, not title/content
- getSessionPath returns absolute path with correct directory structure
- analysisResults handles zero values for all data fields without crash
2026-02-13 05:43:29 -08:00
Affaan Mustafa
ad40bf3aad test: add env var fallback, console.log max matches, and format non-existent file tests
Round 53: Adds 3 hook tests — validates evaluate-session.js
falls back to CLAUDE_TRANSCRIPT_PATH env var when stdin JSON
is invalid, post-edit-console-warn.js truncates output to max
5 matches, and post-edit-format.js passes through data when
the target .tsx file doesn't exist.
2026-02-13 05:34:59 -08:00
Affaan Mustafa
f1a693f7cf test: add inline backtick ref, workflow whitespace, and code-only rule tests
Round 52: Adds 3 CI validator tests — validates command refs
inside inline backticks are checked (not stripped like fenced
blocks), workflow arrows with irregular whitespace pass, and
rule files containing only fenced code blocks are accepted.
2026-02-13 05:29:04 -08:00
Affaan Mustafa
4e520c6873 test: add timeout enforcement, async hook schema, and command format validation tests
Round 51: Adds 3 integration tests for hook infrastructure —
validates hanging hook timeout/kill mechanism, hooks.json async
hook configuration schema, and all hook command format consistency.
2026-02-13 05:23:16 -08:00
Affaan Mustafa
86844a305a test: add alias reporting, parallel compaction, and graceful degradation tests 2026-02-13 05:13:56 -08:00
Affaan Mustafa
b950fd7427 test: add typecheck extension edge cases and conditional summary section tests 2026-02-13 05:10:07 -08:00
Affaan Mustafa
71e86cc93f test: add packageManager version format and sequential save integrity tests 2026-02-13 05:04:58 -08:00
Affaan Mustafa
4f7b50fb78 test: add inline JS escape validation and frontmatter colon-less line tests 2026-02-13 05:01:28 -08:00
Affaan Mustafa
277006bd7f test: add Windows path heuristic and checkbox case sensitivity tests
Round 46: verify getSessionStats recognises C:/ and D:\ as file
paths but not bare C: without slash; verify parseSessionMetadata
only matches lowercase [x] checkboxes (not uppercase [X]).
2026-02-13 04:51:39 -08:00
Affaan Mustafa
f6ebc2a3c2 test: add setup-package-manager marker uniqueness and list completeness tests
Round 45: verify --detect shows exactly one (current) marker and
--list includes all four PMs with Lock file + Install entries.
2026-02-13 04:47:30 -08:00
Affaan Mustafa
443986e086 test: verify session-start.js handles empty (0-byte) session file gracefully 2026-02-13 04:43:59 -08:00
Affaan Mustafa
c92d3f908f test: verify getSessionById excludes content/metadata/stats when includeContent is false 2026-02-13 04:39:25 -08:00
Affaan Mustafa
b868f42ad1 test: add validator edge-case tests for case sensitivity, frontmatter spacing, missing dirs, and empty matchers 2026-02-13 04:35:02 -08:00
Affaan Mustafa
842ff2eff6 test: verify pre-compact annotates only newest session file when multiple exist 2026-02-13 04:31:05 -08:00
Affaan Mustafa
b678c2f1b0 fix: collapse newlines in user messages to prevent markdown list breaks in session-end
User messages containing newline characters were being added as-is to
markdown list items in buildSummarySection(), breaking the list format.
Now newlines are replaced with spaces before backtick escaping.
2026-02-13 04:28:50 -08:00
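The newline-collapse step can be sketched like this (the `toListItem` helper is hypothetical; the commit describes this logic inside `buildSummarySection()`):

```javascript
// Collapse newlines to spaces first, then escape backticks, so a
// multi-line user message stays on a single markdown list line.
function toListItem(message) {
  const oneLine = message.replace(/\s*\n\s*/g, ' ').trim();
  return `- ${oneLine.replace(/`/g, '\\`')}`;
}

console.log(toListItem('fix the\nbuild `script`'));
```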
Affaan Mustafa
dc11fc2fd8 fix: make saveAliases atomic on Unix by skipping unnecessary unlink before rename
On Unix/macOS, rename(2) atomically replaces the destination file.
The previous code ran unlinkSync before renameSync on all platforms,
creating an unnecessary non-atomic window where a crash could lose
data. Now the delete-before-rename is gated behind process.platform
=== 'win32', where rename cannot overwrite an existing file.
2026-02-13 04:23:22 -08:00
Affaan Mustafa
0daa5cb070 test: add evaluate-session tilde expansion and missing config tests (Round 38) 2026-02-13 04:19:13 -08:00
Affaan Mustafa
e2040b46b3 fix: remove unreachable return after process.exit in post-edit-typecheck hook 2026-02-13 04:15:13 -08:00
Affaan Mustafa
c93c218cb8 fix: sync Cursor suggest-compact.js with corrected hooks version
The .cursor copy had diverged from scripts/hooks/suggest-compact.js:
- Fixed interval calculation: count % 25 → (count - threshold) % 25
  so suggestions fire relative to the configured threshold
- Added upper bound clamp (<=1000000) to prevent counter corruption
  from large values converting to scientific notation strings
- Removed unreliable String(process.ppid) fallback for session ID
2026-02-13 04:09:31 -08:00
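The corrected interval calculation can be sketched as follows (the threshold and interval values are assumed constants here; the real hook reads them from config/env):

```javascript
const COMPACT_THRESHOLD = 30; // hypothetical configured threshold
const INTERVAL = 25;

function shouldSuggest(count) {
  if (count < COMPACT_THRESHOLD) return false;
  // Fire at the threshold, then every 25 tool calls after it — relative
  // to the threshold, not to absolute multiples of 25 (the old bug).
  return (count - COMPACT_THRESHOLD) % INTERVAL === 0;
}

console.log([29, 30, 42, 55, 80].map(shouldSuggest));
```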
Affaan Mustafa
b497135b95 fix: correct box() off-by-one width calculation in skill-create-output
The box() helper produced lines that were width+1 characters instead of
the requested width. Adjusted all three formulas (top border, middle
content, bottom border) by -1 each. Added 4 tests verifying box width
accuracy across instincts(), analysisResults(), and nextSteps() output.
2026-02-13 04:05:12 -08:00
Affaan Mustafa
554b5d6704 fix: header subtitle width mismatch in skill-create-output; add 9 tests (Round 34)
- Fix subtitle padding 55→59 so line 94 matches 64-char border width
- Add 4 header width alignment tests (skill-create-output)
- Add 3 getExecCommand non-string args tests (package-manager)
- Add 2 detectFromPackageJson non-string type tests (package-manager)
2026-02-13 03:58:16 -08:00
Affaan Mustafa
bb9df39d96 test: add 10 tests for birthtime fallback, stdin error, alias rollback (Round 33)
Cover createdTime/birthtime fallback in session-manager, readStdinJson
error event settled-flag guard in utils, renameAlias rollback on naming
conflict in session-aliases, and saveAliases backup preservation on
serialization failure. Total: 713 tests.
2026-02-13 03:50:44 -08:00
Affaan Mustafa
72de0a4e2c test: add 17 tests for validators, hooks, and edge cases (Round 32)
Coverage improvements:
- validate-agents: empty frontmatter block, no-content frontmatter,
  partial frontmatter, mixed valid/invalid agents
- validate-rules: directory with .md name (stat.isFile check),
  deeply nested subdirectory rules
- validate-commands: 3-agent workflow chain, broken middle agent
- post-edit-typecheck: spaces in paths, shell metacharacters, .tsx
- check-console-log: git failure passthrough, large stdin
- post-edit-console-warn: console.error only, null tool_input
- session-end: empty transcript, whitespace-only transcript

Total tests: 686 → 703
2026-02-13 03:44:10 -08:00
Affaan Mustafa
167b105cac fix: reject flags passed as package manager names in setup-package-manager CLI
When --global or --project was followed by another flag (e.g., --global --project),
the flag was treated as a package manager name. Added pmName.startsWith('-') check
to both handlers. Added 20 tests across 4 test files covering argument validation,
ensureDir error propagation, runCommand stderr handling, and saveAliases failure paths.
2026-02-13 03:37:46 -08:00
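The flag-vs-name guard can be sketched like this (the `parsePmArg` helper and `KNOWN_PMS` list are illustrative, not the repo's API):

```javascript
const KNOWN_PMS = ['npm', 'pnpm', 'yarn', 'bun']; // assumed list

function parsePmArg(pmName) {
  // A token starting with '-' is another flag (e.g. `--global --project`),
  // not a package manager name.
  if (!pmName || pmName.startsWith('-')) {
    return { ok: false, error: 'expected a package manager name' };
  }
  if (!KNOWN_PMS.includes(pmName)) {
    return { ok: false, error: `unknown package manager: ${pmName}` };
  }
  return { ok: true, pm: pmName };
}

console.log(parsePmArg('--project')); // rejected instead of treated as a name
console.log(parsePmArg('pnpm'));
```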
Affaan Mustafa
b1eb99d961 fix: use local-time Date constructor in session-manager to prevent timezone day shift
new Date('YYYY-MM-DD') creates UTC midnight, which in negative UTC offset
timezones (e.g., Hawaii) causes getDate() to return the previous day.
Replaced with new Date(year, month - 1, day) for correct local-time behavior.

Added 15 tests: session-manager datetime verification and edge cases (7),
package-manager getCommandPattern special characters (4), and
validators model/skill-reference validation (4). Tests: 651 → 666.
2026-02-13 03:29:04 -08:00
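The timezone pitfall above can be demonstrated in a few lines: `new Date('YYYY-MM-DD')` parses as UTC midnight, so in negative-offset zones local `getDate()` returns the previous day, while the component constructor does not (a minimal sketch; `parseLocalDate` is an illustrative name):

```javascript
function parseLocalDate(yyyyMmDd) {
  const [year, month, day] = yyyyMmDd.split('-').map(Number);
  return new Date(year, month - 1, day); // local-time midnight, no day shift
}

const d = parseLocalDate('2026-02-10');
console.log(d.getFullYear(), d.getMonth() + 1, d.getDate());
```

The assertions below hold in every timezone, which is exactly the property the string form lacks.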
Affaan Mustafa
992688a674 fix: add cwd to prettier hook, consistent process.exit(0), and stdout pass-through
- post-edit-format.js: add cwd based on file directory so npx resolves
  correct local prettier binary
- post-edit-typecheck.js, post-edit-format.js: replace console.log(data)
  with process.stdout.write(data) to avoid trailing newline corruption
- Add process.exit(0) to 4 hooks for consistent termination
  (check-console-log, post-edit-console-warn, post-edit-format,
  post-edit-typecheck)
- run-all.js: switch from execSync to spawnSync so stderr is visible
  on the success path (hook warnings were silently discarded)
- Add 21 tests: cwd verification, process.exit(0) checks, exact
  stdout pass-through, extension edge cases, exclusion pattern
  matching, threshold boundary values (630 → 651)
2026-02-13 03:20:41 -08:00
Affaan Mustafa
253645b5e4 test: add 22 tests for readStdinJson, evaluate-session config, and suggest-compact hook
- utils.test.js: 5 tests for readStdinJson maxSize truncation, whitespace-only stdin, trailing whitespace, and BOM prefix handling
- evaluate-session.test.js: 4 tests for config file parsing, assistant-only transcripts, malformed JSON lines, and empty stdin
- suggest-compact.test.js: 13 new tests covering counter file creation/increment, threshold suggestion, interval suggestion, env var handling, corrupted/empty counter files, and session isolation
2026-02-13 03:11:51 -08:00
Affaan Mustafa
b3db83d018 test: add 22 tests for validators, skill-create-output, and package-manager edge cases 2026-02-13 03:02:28 -08:00
Affaan Mustafa
d903053830 test: add 15 tests for session-manager and session-aliases edge cases
Cover 30-day month validation (Sep/Nov 31 rejection), getSessionStats
path heuristic with multiline content, combined date+search+pagination
in getAllSessions, ambiguous prefix matching in getSessionById, unclosed
code fence in parseSessionMetadata, empty checklist item behavior,
reserved name case sensitivity (LIST/Help/Set), negative limit in
listAliases, and undefined title in setAlias.
2026-02-13 02:54:23 -08:00
Affaan Mustafa
6bbcbec23d fix: exact byte pass-through in post-edit-console-warn, add 7 tests
Replace console.log(data) with process.stdout.write(data) in both
pass-through paths to prevent appending a trailing newline that
corrupts the hook output. Add 7 tests covering exact byte fidelity,
malformed JSON, missing file_path, non-existent files, exclusion
patterns in check-console-log, non-git repo handling, and empty stdin.
2026-02-13 02:49:33 -08:00
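The byte-fidelity difference can be shown by capturing stdout (a test-harness-style sketch; the `passThrough`/`capture` helpers are illustrative):

```javascript
function passThrough(data) {
  process.stdout.write(data); // no trailing newline added
}

// Temporarily intercept process.stdout.write to observe exact bytes.
function capture(fn) {
  const chunks = [];
  const orig = process.stdout.write.bind(process.stdout);
  process.stdout.write = (chunk) => { chunks.push(chunk); return true; };
  try { fn(); } finally { process.stdout.write = orig; }
  return chunks.join('');
}

const viaWrite = capture(() => passThrough('{"ok":true}'));
const viaLog = capture(() => console.log('{"ok":true}')); // appends '\n'
```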
Affaan Mustafa
f4758ff8f0 fix: consistent periodic interval spacing in suggest-compact, add 10 tests
- suggest-compact.js: count % 25 → (count - threshold) % 25 for consistent
  spacing regardless of threshold value
- Update existing periodic interval test to match corrected behavior
- 10 new tests: interval fix regression (non-25-divisible threshold, false
  suggestion prevention), corrupted counter file, 1M boundary, malformed
  JSON pass-through, non-TS extension pass-through, empty sessions dir,
  blank template skip
2026-02-13 02:45:08 -08:00
Affaan Mustafa
4ff4872bf3 fix: nullish coalescing in evaluate-session config, narrow pre-compact glob, add 11 tests
- evaluate-session.js: || 10 → ?? 10 for min_session_length (0 is valid)
- pre-compact.js: *.tmp → *-session.tmp to match only session files
- 11 new tests: config loading (min=0, null, custom path, invalid JSON),
  session-end update path (timestamp, template replace, preserve content),
  pre-compact glob specificity, extractSessionSummary edge cases
2026-02-13 02:42:01 -08:00
Affaan Mustafa
27dce7794a fix: reject empty/invalid array commands in hooks validator, add 19 tests
validate-hooks.js: Empty arrays [] and arrays with non-string elements
(e.g., [123, null]) passed command validation due to JS truthiness of
empty arrays (![] === false). Added explicit length and element type
checks.

19 new tests covering: non-array event type values, null/string matcher
entries, string/number top-level data, empty string/array commands,
non-string array elements, non-string type field, non-number timeout,
timeout boundary (0), unwrapped hooks format, legacy format error paths,
empty agent directory, whitespace-only command files, valid skill refs,
mixed valid/invalid rules and skills.
2026-02-13 02:33:40 -08:00
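The truthiness pitfall above (`![] === false`, so `if (!command)` lets an empty array through) can be sketched with explicit length and element checks (validator shape assumed):

```javascript
function isValidCommand(command) {
  if (typeof command === 'string') return command.trim().length > 0;
  if (Array.isArray(command)) {
    // Empty arrays are truthy in JS, so both checks must be explicit.
    return command.length > 0 && command.every((el) => typeof el === 'string');
  }
  return false;
}

console.log(isValidCommand([]));                  // false — was wrongly accepted
console.log(isValidCommand([123, null]));         // false
console.log(isValidCommand(['node', 'hook.js'])); // true
```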
Affaan Mustafa
a62a3a2416 fix: sanitize getExecCommand args, escape regex in getCommandPattern, clean up readStdinJson timeout, add 10 tests
Validate args parameter in getExecCommand() against SAFE_ARGS_REGEX to
prevent command injection when returned string is passed to a shell.
Escape regex metacharacters in getCommandPattern() generic action branch
to prevent malformed patterns and unintended matching. Clean up stdin
listeners in readStdinJson() timeout path to prevent process hanging.
2026-02-13 02:27:04 -08:00
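The regex-escaping part of this fix can be sketched as follows (the `escapeRegExp` helper and the pattern shape are illustrative, not the repo's exact code):

```javascript
// Escape all regex metacharacters so user input is matched literally.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function getCommandPattern(action) {
  // Without escaping, an action like "build.all" would also match
  // "buildxall" because '.' is a wildcard.
  return new RegExp(`\\b(?:npm|pnpm|yarn|bun) run ${escapeRegExp(action)}\\b`);
}

console.log(getCommandPattern('build.all').test('npm run build.all'));
```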
Affaan Mustafa
d9331cb17f fix: eliminate command injection in hooks, fix pass-through newline corruption, add 8 tests
Replace shell: true with npx.cmd on Windows in post-edit-format.js and
post-edit-typecheck.js to prevent command injection via crafted file paths.
Replace console.log(data) with process.stdout.write(data) in
check-console-log.js to avoid appending extra newlines to pass-through data.
2026-02-13 02:22:55 -08:00
Affaan Mustafa
f33ed4c49e fix: clamp getAllSessions pagination params, add cleanupAliases success field, add 10 tests
- session-manager: clamp offset/limit to safe non-negative integers to
  prevent negative offset counting from end and NaN returning empty results
- session-aliases: add success field to cleanupAliases return value for
  API contract consistency with setAlias/deleteAlias/renameAlias
2026-02-13 02:16:22 -08:00
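The clamping can be sketched like this (hypothetical `clampPagination` helper): without it, a negative offset makes `Array.prototype.slice` count from the end, and `NaN` yields empty results.

```javascript
function clampPagination(offset, limit, total) {
  const safeOffset = Number.isFinite(offset) ? Math.max(0, Math.trunc(offset)) : 0;
  const safeLimit = Number.isFinite(limit) ? Math.max(0, Math.trunc(limit)) : total;
  return { offset: safeOffset, limit: safeLimit };
}

console.log(clampPagination(-5, NaN, 100)); // both inputs neutralized
```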
Affaan Mustafa
2dbba8877b fix: reject whitespace-only command/field values in CI validators, add 10 tests
validate-hooks.js: whitespace-only command strings now fail validation
validate-agents.js: whitespace-only model/tools values now fail validation
2026-02-13 02:09:22 -08:00
Affaan Mustafa
5398ac793d fix: clamp progressBar to prevent RangeError on overflow, add 10 tests
progressBar() in skill-create-output.js could crash with RangeError when
percent > 100 because repeat() received a negative count. Fixed by
clamping filled to [0, width].

New tests:
- progressBar edge cases: 0%, 100%, and >100% confidence
- Empty patterns/instincts arrays
- post-edit-format: null tool_input, missing file_path, prettier failure
- setup-package-manager: --detect output completeness, current marker
2026-02-13 02:01:57 -08:00
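The clamp can be sketched as follows (bar characters and width are illustrative): `String.prototype.repeat` throws `RangeError` for negative counts, so the filled length must be clamped to `[0, width]` before building the bar.

```javascript
function progressBar(percent, width = 20) {
  const filled = Math.min(width, Math.max(0, Math.round((percent / 100) * width)));
  return '█'.repeat(filled) + '░'.repeat(width - filled);
}

console.log(progressBar(150)); // no RangeError: clamps to a full bar
console.log(progressBar(50));
```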
Affaan Mustafa
0e0319a1c2 fix: clamp suggest-compact counter overflow, add 9 boundary tests
Counter file could contain huge values (e.g. 999999999999) that pass
Number.isFinite() but cause unbounded growth. Added range clamp to
reject values outside [1, 1000000].

New tests cover:
- Counter overflow reset (huge number, negative number)
- COMPACT_THRESHOLD zero fallback
- session-end empty sections (no tools/files omits headers)
- session-end slice boundaries (10 messages, 20 tools, 30 files)
- post-edit-console-warn 5-match limit
- post-edit-console-warn ignores console.warn/error/debug
2026-02-13 01:59:25 -08:00
Affaan Mustafa
c1919bb879 fix: greedy regex in validate-commands captures all refs per line, add 18 tests
The command cross-reference regex /^.*`\/(...)`.*$/gm only captured the
LAST command ref per line due to greedy .* consuming earlier refs.
Replaced with line-by-line processing using non-anchored regex to
capture ALL command references.

New tests:
- 4 validate-commands multi-ref-per-line tests (regression)
- 8 evaluate-session threshold boundary tests (new file)
- 6 session-aliases edge case tests (cleanup, rename, path matching)
2026-02-13 01:52:30 -08:00
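The line-by-line replacement can be sketched like this (function name and ref syntax assumed): an anchored `/^.*`-style pattern lets the greedy `.*` swallow earlier refs, so only the last one per line is captured, while scanning each line with a non-anchored global regex captures them all.

```javascript
function extractCommandRefs(markdown) {
  const refs = [];
  for (const line of markdown.split('\n')) {
    // Non-anchored, global: every `/command` ref on the line is captured.
    for (const m of line.matchAll(/`\/([a-z0-9-]+)`/g)) {
      refs.push(m[1]);
    }
  }
  return refs;
}

console.log(extractCommandRefs('Run `/plan` then `/execute` in order.'));
```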
Affaan Mustafa
6dcb5daa5c fix: sync .opencode/ package version to 1.4.1
The OpenCode sub-package had stale 1.0.0 versions in package.json,
index.ts VERSION export, and package-lock.json while the main package
is at 1.4.1. Updated all three to match.
2026-02-13 01:49:39 -08:00
Affaan Mustafa
e96b522af0 fix: calendar-accurate date validation in parseSessionFilename, add 22 tests
- Fix parseSessionFilename to reject impossible dates (Feb 31, Apr 31,
  Feb 29 non-leap) using Date constructor month/day roundtrip check
- Add 6 session-manager tests for calendar date validation edge cases
- Add 3 session-manager tests for code blocks/special chars in getSessionStats
- Add 10 package-manager tests for PM-specific command formats (getRunCommand
  and getExecCommand for pnpm, yarn, bun, npm)
- Add 3 integration tests for session-end transcript parsing (mixed JSONL
  formats, malformed lines, nested user messages)
2026-02-13 01:42:56 -08:00
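The roundtrip check can be sketched in a few lines (helper name assumed): the `Date` constructor rolls impossible dates forward (Feb 31 becomes Mar 3), so a date is valid only if its components survive the roundtrip.

```javascript
function isValidCalendarDate(year, month, day) {
  const d = new Date(year, month - 1, day);
  return d.getFullYear() === year
    && d.getMonth() === month - 1
    && d.getDate() === day;
}

console.log(isValidCalendarDate(2026, 2, 31)); // false — rolls over to March
console.log(isValidCalendarDate(2024, 2, 29)); // true  — leap year
console.log(isValidCalendarDate(2026, 2, 29)); // false — non-leap year
```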
Affaan Mustafa
34edb59e19 test: add 7 package-manager priority and source detection tests
- Test valid project-config detection (.claude/package-manager.json)
- Test priority order: project-config > package.json > lock-file
- Test package.json > lock-file priority
- Test default fallback to npm
- Test setPreferredPackageManager success case
- Test getCommandPattern for test and build actions
2026-02-13 01:38:29 -08:00
Affaan Mustafa
37309d47b7 fix: box alignment in test runner, update metadata counts, add 18 tests
- Fix run-all.js box alignment (hardcoded spaces 1 char short, now using dynamic padEnd)
- Update .opencode/index.ts metadata (12→13 agents, 24→31 commands, 16→37 skills)
- Add commandExists edge case tests (empty, spaces, path separators, metacharacters)
- Add findFiles edge case tests (? wildcard, mtime sorting, maxAge filtering)
- Add ensureDir race condition and return value tests
- Add runCommand output trimming and failure tests
- Add pre-compact session annotation and compaction log timestamp tests
- Add check-console-log invalid JSON handling test
- Add replaceInFile capture group test
- Add readStdinJson Promise type check
2026-02-13 01:36:42 -08:00
Affaan Mustafa
3f651b7c3c fix: typecheck hook false positives, add 11 session-manager tests
- Fix post-edit-typecheck.js error filtering: use relative/absolute path
  matching instead of basename, preventing false positives when multiple
  files share the same name (e.g., src/utils.ts vs tests/utils.ts)
- Add writeSessionContent tests (create, overwrite, invalid path)
- Add appendSessionContent test (append to existing file)
- Add deleteSession tests (delete existing, non-existent)
- Add sessionExists tests (file, non-existent, directory)
- Add getSessionStats empty content edge case
- Add post-edit-typecheck stdout passthrough test
- Total: 391 → 402 tests, all passing
2026-02-13 01:28:59 -08:00
Affaan Mustafa
e9343c844b fix: include .md files in instinct-cli glob (completes #216)
The observer agent creates instinct files as .md with YAML frontmatter,
but load_all_instincts() only globbed *.yaml and *.yml. Add *.md to the
glob so instinct-cli status discovers all instinct files.
2026-02-13 01:26:37 -08:00
Affaan Mustafa
7b94b51269 fix: add missing ReplaceInFileOptions to utils.d.ts type declaration
The replaceInFile function in utils.js accepts an optional `options`
parameter with `{ all?: boolean }` for replacing all occurrences, but
the .d.ts type declaration was missing this parameter entirely.
2026-02-13 01:24:34 -08:00
Affaan Mustafa
6f95dbe7ba fix: grepFile global regex lastIndex bug, add 12 tests
Fix grepFile() silently skipping matches when called with /g flag regex.
The global flag makes .test() stateful, causing alternating match/miss
on consecutive matching lines. Strip g flag since per-line testing
doesn't need global state.

Add first-ever tests for evaluate-session.js (5 tests: short session,
long session, missing transcript, malformed stdin, env var fallback)
and suggest-compact.js (5 tests: counter increment, threshold trigger,
periodic suggestions, below-threshold silence, invalid threshold).
2026-02-13 01:18:07 -08:00
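The stateful-`.test()` bug above can be reproduced in isolation (function names are illustrative): a `/g` regex keeps `lastIndex` between calls, so per-line testing alternates match/miss on consecutive matching lines; stripping the `g` flag restores correct behavior.

```javascript
function countMatchingLinesBuggy(lines, re) {
  return lines.filter((l) => re.test(l)).length; // lastIndex leaks across calls
}

function countMatchingLinesFixed(lines, re) {
  // Per-line testing doesn't need global state — strip the g flag.
  const safe = new RegExp(re.source, re.flags.replace('g', ''));
  return lines.filter((l) => safe.test(l)).length;
}

const sample = ['TODO: a', 'TODO: b', 'TODO: c', 'TODO: d'];
console.log(countMatchingLinesBuggy(sample, /TODO/g)); // 2 — every other line
console.log(countMatchingLinesFixed(sample, /TODO/g)); // 4
```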
Affaan Mustafa
02120fbf5f chore: add dist, __pycache__, and tasks to .gitignore
Prevents accidental commits of build output, Python bytecode
cache, and Claude Code team task files.
2026-02-13 01:11:37 -08:00
Affaan Mustafa
a4848da38b test: add tsconfig depth limit and cleanupAliases exception tests
- post-edit-typecheck: verify 25-level-deep directory completes without
  hanging (tests the max depth=20 walk-up guard)
- cleanupAliases: document behavior when sessionExists callback throws
  (propagates to caller, which is acceptable)
2026-02-13 01:10:30 -08:00
Affaan Mustafa
307ee05b2d fix: instinct-cli glob and evolve --generate (fixes #216, #217)
- Load both .yaml and .yml files in load_all_instincts() (#216)
  The *.yaml-only glob missed .yml files, causing 'No instincts found'
- Implement evolve --generate to create skill/command/agent files (#217)
  Previously printed a stub message. Now generates SKILL.md, command .md,
  and agent .md files from the clustering analysis into ~/.claude/homunculus/evolved/
2026-02-13 01:09:16 -08:00
Affaan Mustafa
c1b6e0bf11 test: add coverage for Claude Code JSONL format and assistant tool blocks
Tests the new transcript parsing from PR #215:
- entry.message.content format (string and array content)
- tool_use blocks nested in assistant message content arrays
- Verifies file paths and tool names extracted from both formats
2026-02-13 01:07:23 -08:00
Affaan Mustafa
654731f232 fix: add missing validation in renameAlias, add 6 tests
renameAlias was missing length (>128), reserved name, and empty string
validation that setAlias enforced. This inconsistency allowed renaming
aliases to reserved names like 'list' or 'delete'.

Also adds tests for:
- renameAlias empty string, reserved name, and length limit
- validate-skills whitespace-only SKILL.md rejection
- validate-rules whitespace-only file and recursive subdirectory scan
2026-02-13 01:05:59 -08:00
zdoc.app
95f63c3cb0 docs(zh-CN): sync Chinese docs with latest upstream changes (#202)
* docs(zh-CN): sync Chinese docs with latest upstream changes

* docs: improve Chinese translation consistency in go-test.md

* docs(zh-CN): update image paths to use shared assets directory

- Update image references from ./assets/ to ../../assets/
- Remove zh-CN/assets directory to use shared assets

---------

Co-authored-by: neo <neo.dowithless@gmail.com>
2026-02-13 01:04:58 -08:00
Siddhi Khandelwal
49aee612fb docs(opencode): clarify OpenCode-specific usage (#214)
* docs(opencode): clarify OpenCode-specific usage

Signed-off-by: Siddhi Khandelwal <siddhi.200727@gmail.com>

* docs(opencode): close bash code fence in CLI example

Signed-off-by: Siddhi Khandelwal <siddhi.200727@gmail.com>

---------

Signed-off-by: Siddhi Khandelwal <siddhi.200727@gmail.com>
2026-02-13 01:04:36 -08:00
dungan
4843a06b3a fix: Windows compatibility for hook scripts (execFileSync + tmux) (#215)
* fix: Windows compatibility for hook scripts

- post-edit-format.js: add `shell: process.platform === 'win32'` to
  execFileSync options so npx.cmd is resolved via cmd.exe on Windows
- post-edit-typecheck.js: same fix for tsc invocation via npx
- hooks.json: skip tmux-dependent hooks on Windows where tmux is
  unavailable (dev-server blocker and long-running command reminder)

On Windows, execFileSync('npx', ...) without shell:true fails with
ENOENT because Node.js cannot directly execute .cmd files. These
hooks silently fail on all Windows installations.

The tmux hooks unconditionally block dev server commands (exit 2) or
warn about tmux on Windows where tmux is not available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: parse Claude Code JSONL transcript format correctly

The session-end hook expected user messages at entry.content, but
Claude Code's actual JSONL format nests them at entry.message.content.
This caused all session files to be blank templates (0 user messages
despite 136+ actual entries).

- Check entry.message?.content in addition to entry.content
- Extract tool_use blocks from assistant message.content arrays

Verified with Claude Code v2.1.41 JSONL transcripts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: ddungan <sckim@mococo.co.kr>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 01:04:27 -08:00
Pangerkumzuk Longkumer
9db98673d0 Merge branch 'affaan-m:main' into main 2026-02-09 17:12:53 +05:30
Pangerkumzuk Longkumer
fab2e05ae7 Merge branch 'affaan-m:main' into main 2026-02-02 10:53:41 +05:30
Graceme Kamei
8d65c6d429 Merge branch 'affaan-m:main' into main 2026-01-30 19:32:07 +05:30
Panger Lkr
9b2233b5bc Merge branch 'affaan-m:main' into main 2026-01-27 10:15:34 +05:30
Panger Lkr
5a26daf392 Merge pull request #1 from pangerlkr/pangerlkr-patch-1
feat: add cloud infrastructure security skill
2026-01-23 23:14:43 +05:30
Panger Lkr
438d082e30 feat: add cloud infrastructure security skill
Add comprehensive cloud and infrastructure security skill covering:
- IAM & access control (least privilege, MFA)
- Secrets management & rotation
- Network security (VPC, firewalls)
- Logging & monitoring setup
- CI/CD pipeline security
- Cloudflare/CDN security
- Backup & disaster recovery
- Pre-deployment checklist

Complements existing security-review skill with cloud-specific guidance.
2026-01-23 22:50:59 +05:30
107 changed files with 20507 additions and 1175 deletions


@@ -8,7 +8,7 @@ Pre-translated configurations for [Cursor IDE](https://cursor.com), part of the
|----------|-------|-------------|
| Rules | 27 | Coding standards, security, testing, patterns (common + TypeScript/Python/Go) |
| Agents | 13 | Specialized AI agents (planner, architect, code-reviewer, tdd-guide, etc.) |
-| Skills | 37 | Agent skills for backend, frontend, security, TDD, and more |
+| Skills | 43 | Agent skills for backend, frontend, security, TDD, and more |
| Commands | 31 | Slash commands for planning, reviewing, testing, and deployment |
| MCP Config | 1 | Pre-configured MCP servers (GitHub, Supabase, Vercel, Railway, etc.) |


@@ -12,7 +12,7 @@ alwaysApply: true
- Pair programming and code generation
- Worker agents in multi-agent systems
-**Sonnet 4.5** (Best coding model):
+**Sonnet 4.6** (Best coding model):
- Main development work
- Orchestrating multi-agent workflows
- Complex coding tasks


@@ -88,7 +88,12 @@ def load_all_instincts() -> list[dict]:
     for directory in [PERSONAL_DIR, INHERITED_DIR]:
         if not directory.exists():
             continue
-        for file in directory.glob("*.yaml"):
+        yaml_files = sorted(
+            set(directory.glob("*.yaml"))
+            | set(directory.glob("*.yml"))
+            | set(directory.glob("*.md"))
+        )
+        for file in yaml_files:
             try:
                 content = file.read_text()
                 parsed = parse_instinct_file(content)
@@ -433,15 +438,96 @@ def cmd_evolve(args):
     print()
     if args.generate:
-        print("\n[Would generate evolved structures here]")
-        print(" Skills would be saved to:", EVOLVED_DIR / "skills")
-        print(" Commands would be saved to:", EVOLVED_DIR / "commands")
-        print(" Agents would be saved to:", EVOLVED_DIR / "agents")
+        generated = _generate_evolved(skill_candidates, workflow_instincts, agent_candidates)
+        if generated:
+            print(f"\n✅ Generated {len(generated)} evolved structures:")
+            for path in generated:
+                print(f" {path}")
+        else:
+            print("\nNo structures generated (need higher-confidence clusters).")
     print(f"\n{'='*60}\n")
     return 0
+# ─────────────────────────────────────────────
+# Generate Evolved Structures
+# ─────────────────────────────────────────────
+def _generate_evolved(skill_candidates: list, workflow_instincts: list, agent_candidates: list) -> list[str]:
+    """Generate skill/command/agent files from analyzed instinct clusters."""
+    generated = []
+    # Generate skills from top candidates
+    for cand in skill_candidates[:5]:
+        trigger = cand['trigger'].strip()
+        if not trigger:
+            continue
+        name = re.sub(r'[^a-z0-9]+', '-', trigger.lower()).strip('-')[:30]
+        if not name:
+            continue
+        skill_dir = EVOLVED_DIR / "skills" / name
+        skill_dir.mkdir(parents=True, exist_ok=True)
+        content = f"# {name}\n\n"
+        content += f"Evolved from {len(cand['instincts'])} instincts "
+        content += f"(avg confidence: {cand['avg_confidence']:.0%})\n\n"
+        content += f"## When to Apply\n\n"
+        content += f"Trigger: {trigger}\n\n"
+        content += f"## Actions\n\n"
+        for inst in cand['instincts']:
+            inst_content = inst.get('content', '')
+            action_match = re.search(r'## Action\s*\n\s*(.+?)(?:\n\n|\n##|$)', inst_content, re.DOTALL)
+            action = action_match.group(1).strip() if action_match else inst.get('id', 'unnamed')
+            content += f"- {action}\n"
+        (skill_dir / "SKILL.md").write_text(content)
+        generated.append(str(skill_dir / "SKILL.md"))
+    # Generate commands from workflow instincts
+    for inst in workflow_instincts[:5]:
+        trigger = inst.get('trigger', 'unknown')
+        cmd_name = re.sub(r'[^a-z0-9]+', '-', trigger.lower().replace('when ', '').replace('implementing ', ''))
+        cmd_name = cmd_name.strip('-')[:20]
+        if not cmd_name:
+            continue
+        cmd_file = EVOLVED_DIR / "commands" / f"{cmd_name}.md"
+        content = f"# {cmd_name}\n\n"
+        content += f"Evolved from instinct: {inst.get('id', 'unnamed')}\n"
+        content += f"Confidence: {inst.get('confidence', 0.5):.0%}\n\n"
+        content += inst.get('content', '')
+        cmd_file.write_text(content)
+        generated.append(str(cmd_file))
+    # Generate agents from complex clusters
+    for cand in agent_candidates[:3]:
+        trigger = cand['trigger'].strip()
+        agent_name = re.sub(r'[^a-z0-9]+', '-', trigger.lower()).strip('-')[:20]
+        if not agent_name:
+            continue
+        agent_file = EVOLVED_DIR / "agents" / f"{agent_name}.md"
+        domains = ', '.join(cand['domains'])
+        instinct_ids = [i.get('id', 'unnamed') for i in cand['instincts']]
+        content = f"---\nmodel: sonnet\ntools: Read, Grep, Glob\n---\n"
+        content += f"# {agent_name}\n\n"
+        content += f"Evolved from {len(cand['instincts'])} instincts "
+        content += f"(avg confidence: {cand['avg_confidence']:.0%})\n"
+        content += f"Domains: {domains}\n\n"
+        content += f"## Source Instincts\n\n"
+        for iid in instinct_ids:
+            content += f"- {iid}\n"
+        agent_file.write_text(content)
+        generated.append(str(agent_file))
+    return generated
+# ─────────────────────────────────────────────
+# Main
+# ─────────────────────────────────────────────


@@ -0,0 +1,722 @@
---
name: cpp-coding-standards
description: C++ coding standards based on the C++ Core Guidelines (isocpp.github.io). Use when writing, reviewing, or refactoring C++ code to enforce modern, safe, and idiomatic practices.
---
# C++ Coding Standards (C++ Core Guidelines)
Comprehensive coding standards for modern C++ (C++17/20/23) derived from the [C++ Core Guidelines](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines). Enforces type safety, resource safety, immutability, and clarity.
## When to Use
- Writing new C++ code (classes, functions, templates)
- Reviewing or refactoring existing C++ code
- Making architectural decisions in C++ projects
- Enforcing consistent style across a C++ codebase
- Choosing between language features (e.g., `enum` vs `enum class`, raw pointer vs smart pointer)
### When NOT to Use
- Non-C++ projects
- Legacy C codebases that cannot adopt modern C++ features
- Embedded/bare-metal contexts where specific guidelines conflict with hardware constraints (adapt selectively)
## Cross-Cutting Principles
These themes recur across the entire guidelines and form the foundation:
1. **RAII everywhere** (P.8, R.1, E.6, CP.20): Bind resource lifetime to object lifetime
2. **Immutability by default** (P.10, Con.1-5, ES.25): Start with `const`/`constexpr`; mutability is the exception
3. **Type safety** (P.4, I.4, ES.46-49, Enum.3): Use the type system to prevent errors at compile time
4. **Express intent** (P.3, F.1, NL.1-2, T.10): Names, types, and concepts should communicate purpose
5. **Minimize complexity** (F.2-3, ES.5, Per.4-5): Simple code is correct code
6. **Value semantics over pointer semantics** (C.10, R.3-5, F.20, CP.31): Prefer returning by value and scoped objects
## Philosophy & Interfaces (P.*, I.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **P.1** | Express ideas directly in code |
| **P.3** | Express intent |
| **P.4** | Ideally, a program should be statically type safe |
| **P.5** | Prefer compile-time checking to run-time checking |
| **P.8** | Don't leak any resources |
| **P.10** | Prefer immutable data to mutable data |
| **I.1** | Make interfaces explicit |
| **I.2** | Avoid non-const global variables |
| **I.4** | Make interfaces precisely and strongly typed |
| **I.11** | Never transfer ownership by a raw pointer or reference |
| **I.23** | Keep the number of function arguments low |
### DO
```cpp
// P.10 + I.4: Immutable, strongly typed interface
struct Temperature {
double kelvin;
};
Temperature boil(const Temperature& water);
```
### DON'T
```cpp
// Weak interface: unclear ownership, unclear units
double boil(double* temp);
// Non-const global variable
int g_counter = 0; // I.2 violation
```
## Functions (F.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **F.1** | Package meaningful operations as carefully named functions |
| **F.2** | A function should perform a single logical operation |
| **F.3** | Keep functions short and simple |
| **F.4** | If a function might be evaluated at compile time, declare it `constexpr` |
| **F.6** | If your function must not throw, declare it `noexcept` |
| **F.8** | Prefer pure functions |
| **F.16** | For "in" parameters, pass cheaply-copied types by value and others by `const&` |
| **F.20** | For "out" values, prefer return values to output parameters |
| **F.21** | To return multiple "out" values, prefer returning a struct |
| **F.43** | Never return a pointer or reference to a local object |
### Parameter Passing
```cpp
// F.16: Cheap types by value, others by const&
void print(int x); // cheap: by value
void analyze(const std::string& data); // expensive: by const&
void transform(std::string s); // sink: by value (will move)
// F.20 + F.21: Return values, not output parameters
struct ParseResult {
std::string token;
int position;
};
ParseResult parse(std::string_view input); // GOOD: return struct
// BAD: output parameters
void parse(std::string_view input,
std::string& token, int& pos); // avoid this
```
### Pure Functions and constexpr
```cpp
// F.4 + F.8: Pure, constexpr where possible
constexpr int factorial(int n) noexcept {
return (n <= 1) ? 1 : n * factorial(n - 1);
}
static_assert(factorial(5) == 120);
```
### Anti-Patterns
- Returning `T&&` from functions (F.45)
- Using `va_arg` / C-style variadics (F.55)
- Capturing by reference in lambdas passed to other threads (F.53)
- Returning `const T` which inhibits move semantics (F.49)
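F.43 and F.20 from the table above pair naturally; a minimal sketch of the dangling-reference pitfall and its return-by-value fix (function name is illustrative):

```cpp
#include <cassert>
#include <string>

// F.43 violation, shown disabled: the local `s` is destroyed on return,
// so the returned reference would dangle.
// const std::string& bad_label() {
//     std::string s = "tmp";
//     return s;  // dangling reference!
// }

// F.20: return by value instead -- NRVO or a move makes this cheap
std::string good_label() {
    std::string s = "tmp";
    return s;
}
```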
## Classes & Class Hierarchies (C.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **C.2** | Use `class` if invariant exists; `struct` if data members vary independently |
| **C.9** | Minimize exposure of members |
| **C.20** | If you can avoid defining default operations, do (Rule of Zero) |
| **C.21** | If you define or `=delete` any copy/move/destructor, handle them all (Rule of Five) |
| **C.35** | Base class destructor: public virtual or protected non-virtual |
| **C.41** | A constructor should create a fully initialized object |
| **C.46** | Declare single-argument constructors `explicit` |
| **C.67** | A polymorphic class should suppress public copy/move |
| **C.128** | Virtual functions: specify exactly one of `virtual`, `override`, or `final` |
### Rule of Zero
```cpp
// C.20: Let the compiler generate special members
struct Employee {
std::string name;
std::string department;
int id;
// No destructor, copy/move constructors, or assignment operators needed
};
```
### Rule of Five
```cpp
// C.21: If you must manage a resource, define all five
class Buffer {
public:
explicit Buffer(std::size_t size)
: data_(std::make_unique<char[]>(size)), size_(size) {}
~Buffer() = default;
Buffer(const Buffer& other)
: data_(std::make_unique<char[]>(other.size_)), size_(other.size_) {
std::copy_n(other.data_.get(), size_, data_.get());
}
Buffer& operator=(const Buffer& other) {
if (this != &other) {
auto new_data = std::make_unique<char[]>(other.size_);
std::copy_n(other.data_.get(), other.size_, new_data.get());
data_ = std::move(new_data);
size_ = other.size_;
}
return *this;
}
Buffer(Buffer&&) noexcept = default;
Buffer& operator=(Buffer&&) noexcept = default;
private:
std::unique_ptr<char[]> data_;
std::size_t size_;
};
```
### Class Hierarchy
```cpp
// C.35 + C.128: Virtual destructor, use override
class Shape {
public:
virtual ~Shape() = default;
virtual double area() const = 0; // C.121: pure interface
};
class Circle : public Shape {
public:
explicit Circle(double r) : radius_(r) {}
double area() const override { return 3.14159 * radius_ * radius_; }
private:
double radius_;
};
```
### Anti-Patterns
- Calling virtual functions in constructors/destructors (C.82)
- Using `memset`/`memcpy` on non-trivial types (C.90)
- Providing different default arguments for virtual function and overrider (C.140)
- Making data members `const` or references, which suppresses move/copy (C.12)
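The C.82 anti-pattern above is worth seeing concretely; a minimal sketch with hypothetical types, showing that virtual dispatch only behaves polymorphically once construction is complete:

```cpp
#include <cassert>

// C.82: during Base::Base(), the object is still a Base -- a virtual call
// made there would bind to Base::tag(), never the derived override.
struct Base {
    Base() { /* calling tag() here would invoke Base::tag, not Derived::tag */ }
    virtual ~Base() = default;
    virtual int tag() const { return 0; }
};

struct Derived : Base {
    int tag() const override { return 1; }
};
```

After construction finishes, dispatch through a base reference reaches the override as expected.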
## Resource Management (R.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **R.1** | Manage resources automatically using RAII |
| **R.3** | A raw pointer (`T*`) is non-owning |
| **R.5** | Prefer scoped objects; don't heap-allocate unnecessarily |
| **R.10** | Avoid `malloc()`/`free()` |
| **R.11** | Avoid calling `new` and `delete` explicitly |
| **R.20** | Use `unique_ptr` or `shared_ptr` to represent ownership |
| **R.21** | Prefer `unique_ptr` over `shared_ptr` unless sharing ownership |
| **R.22** | Use `make_shared()` to make `shared_ptr`s |
### Smart Pointer Usage
```cpp
// R.11 + R.20 + R.21: RAII with smart pointers
auto widget = std::make_unique<Widget>("config"); // unique ownership
auto cache = std::make_shared<Cache>(1024); // shared ownership
// R.3: Raw pointer = non-owning observer
void render(const Widget* w) { // does NOT own w
if (w) w->draw();
}
render(widget.get());
```
### RAII Pattern
```cpp
// R.1: Resource acquisition is initialization
class FileHandle {
public:
explicit FileHandle(const std::string& path)
: handle_(std::fopen(path.c_str(), "r")) {
if (!handle_) throw std::runtime_error("Failed to open: " + path);
}
~FileHandle() {
if (handle_) std::fclose(handle_);
}
FileHandle(const FileHandle&) = delete;
FileHandle& operator=(const FileHandle&) = delete;
FileHandle(FileHandle&& other) noexcept
: handle_(std::exchange(other.handle_, nullptr)) {}
FileHandle& operator=(FileHandle&& other) noexcept {
if (this != &other) {
if (handle_) std::fclose(handle_);
handle_ = std::exchange(other.handle_, nullptr);
}
return *this;
}
private:
std::FILE* handle_;
};
```
### Anti-Patterns
- Naked `new`/`delete` (R.11)
- `malloc()`/`free()` in C++ code (R.10)
- Multiple resource allocations in a single expression (R.13 -- exception safety hazard)
- `shared_ptr` where `unique_ptr` suffices (R.21)
## Expressions & Statements (ES.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **ES.5** | Keep scopes small |
| **ES.20** | Always initialize an object |
| **ES.23** | Prefer `{}` initializer syntax |
| **ES.25** | Declare objects `const` or `constexpr` unless modification is intended |
| **ES.28** | Use lambdas for complex initialization of `const` variables |
| **ES.45** | Avoid magic constants; use symbolic constants |
| **ES.46** | Avoid narrowing/lossy arithmetic conversions |
| **ES.47** | Use `nullptr` rather than `0` or `NULL` |
| **ES.48** | Avoid casts |
| **ES.50** | Don't cast away `const` |
### Initialization
```cpp
// ES.20 + ES.23 + ES.25: Always initialize, prefer {}, default to const
const int max_retries{3};
const std::string name{"widget"};
const std::vector<int> primes{2, 3, 5, 7, 11};
// ES.28: Lambda for complex const initialization
const auto config = [&] {
Config c;
c.timeout = std::chrono::seconds{30};
c.retries = max_retries;
c.verbose = debug_mode;
return c;
}();
```
### Anti-Patterns
- Uninitialized variables (ES.20)
- Using `0` or `NULL` as pointer (ES.47 -- use `nullptr`)
- C-style casts (ES.48 -- use `static_cast`, `const_cast`, etc.)
- Casting away `const` (ES.50)
- Magic numbers without named constants (ES.45)
- Mixing signed and unsigned arithmetic (ES.100)
- Reusing names in nested scopes (ES.12)
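ES.46 is enforceable mechanically; a minimal sketch (helper name is illustrative) of how `{}` initialization turns silent narrowing into a compile error, and how to make intentional truncation explicit:

```cpp
#include <cassert>

// ES.46: brace initialization rejects narrowing at compile time
int to_int(double d) {
    // int n{d};                 // does not compile: narrows double -> int
    return static_cast<int>(d);  // ES.48: when truncation is intended, say so
}
```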
## Error Handling (E.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **E.1** | Develop an error-handling strategy early in a design |
| **E.2** | Throw an exception to signal that a function can't perform its assigned task |
| **E.6** | Use RAII to prevent leaks |
| **E.12** | Use `noexcept` when throwing is impossible or unacceptable |
| **E.14** | Use purpose-designed user-defined types as exceptions |
| **E.15** | Throw by value, catch by reference |
| **E.16** | Destructors, deallocation, and swap must never fail |
| **E.17** | Don't try to catch every exception in every function |
### Exception Hierarchy
```cpp
// E.14 + E.15: Custom exception types, throw by value, catch by reference
class AppError : public std::runtime_error {
public:
using std::runtime_error::runtime_error;
};
class NetworkError : public AppError {
public:
NetworkError(const std::string& msg, int code)
: AppError(msg), status_code(code) {}
int status_code;
};
void fetch_data(const std::string& url) {
// E.2: Throw to signal failure
throw NetworkError("connection refused", 503);
}
void run() {
try {
fetch_data("https://api.example.com");
} catch (const NetworkError& e) {
log_error(e.what(), e.status_code);
} catch (const AppError& e) {
log_error(e.what());
}
// E.17: Don't catch everything here -- let unexpected errors propagate
}
```
### Anti-Patterns
- Throwing built-in types like `int` or string literals (E.14)
- Catching by value (slicing risk) (E.15)
- Empty catch blocks that silently swallow errors
- Using exceptions for flow control (E.3)
- Error handling based on global state like `errno` (E.28)
## Constants & Immutability (Con.*)
### All Rules
| Rule | Summary |
|------|---------|
| **Con.1** | By default, make objects immutable |
| **Con.2** | By default, make member functions `const` |
| **Con.3** | By default, pass pointers and references to `const` |
| **Con.4** | Use `const` for values that don't change after construction |
| **Con.5** | Use `constexpr` for values computable at compile time |
```cpp
// Con.1 through Con.5: Immutability by default
class Sensor {
public:
explicit Sensor(std::string id) : id_(std::move(id)) {}
// Con.2: const member functions by default
const std::string& id() const { return id_; }
double last_reading() const { return reading_; }
// Only non-const when mutation is required
void record(double value) { reading_ = value; }
private:
const std::string id_; // Con.4: never changes after construction
double reading_{0.0};
};
// Con.3: Pass by const reference
void display(const Sensor& s) {
std::cout << s.id() << ": " << s.last_reading() << '\n';
}
// Con.5: Compile-time constants (per NL.9, not ALL_CAPS -- they're not macros)
constexpr double pi = 3.14159265358979;
constexpr int max_sensors = 256;
```
## Concurrency & Parallelism (CP.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **CP.2** | Avoid data races |
| **CP.3** | Minimize explicit sharing of writable data |
| **CP.4** | Think in terms of tasks, rather than threads |
| **CP.8** | Don't use `volatile` for synchronization |
| **CP.20** | Use RAII, never plain `lock()`/`unlock()` |
| **CP.21** | Use `std::scoped_lock` to acquire multiple mutexes |
| **CP.22** | Never call unknown code while holding a lock |
| **CP.42** | Don't wait without a condition |
| **CP.44** | Remember to name your `lock_guard`s and `unique_lock`s |
| **CP.100** | Don't use lock-free programming unless you absolutely have to |
### Safe Locking
```cpp
// CP.20 + CP.44: RAII locks, always named
class ThreadSafeQueue {
public:
void push(int value) {
std::lock_guard<std::mutex> lock(mutex_); // CP.44: named!
queue_.push(value);
cv_.notify_one();
}
int pop() {
std::unique_lock<std::mutex> lock(mutex_);
// CP.42: Always wait with a condition
cv_.wait(lock, [this] { return !queue_.empty(); });
const int value = queue_.front();
queue_.pop();
return value;
}
private:
std::mutex mutex_; // CP.50: mutex with its data
std::condition_variable cv_;
std::queue<int> queue_;
};
```
### Multiple Mutexes
```cpp
// CP.21: std::scoped_lock for multiple mutexes (deadlock-free)
void transfer(Account& from, Account& to, double amount) {
std::scoped_lock lock(from.mutex_, to.mutex_);
from.balance_ -= amount;
to.balance_ += amount;
}
```
### Anti-Patterns
- `volatile` for synchronization (CP.8 -- it's for hardware I/O only)
- Detaching threads (CP.26 -- lifetime management becomes nearly impossible)
- Unnamed lock guards: `std::lock_guard<std::mutex>(m);` destroys immediately (CP.44)
- Holding locks while calling callbacks (CP.22 -- deadlock risk)
- Lock-free programming without deep expertise (CP.100)
## Templates & Generic Programming (T.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **T.1** | Use templates to raise the level of abstraction |
| **T.2** | Use templates to express algorithms for many argument types |
| **T.10** | Specify concepts for all template arguments |
| **T.11** | Use standard concepts whenever possible |
| **T.13** | Prefer shorthand notation for simple concepts |
| **T.43** | Prefer `using` over `typedef` |
| **T.120** | Use template metaprogramming only when you really need to |
| **T.144** | Don't specialize function templates (overload instead) |
### Concepts (C++20)
```cpp
#include <concepts>
// T.10 + T.11: Constrain templates with standard concepts
template<std::integral T>
T gcd(T a, T b) {
while (b != 0) {
a = std::exchange(b, a % b);
}
return a;
}
// T.13: Shorthand concept syntax
void sort(std::ranges::random_access_range auto& range) {
std::ranges::sort(range);
}
// Custom concept for domain-specific constraints
template<typename T>
concept Serializable = requires(const T& t) {
{ t.serialize() } -> std::convertible_to<std::string>;
};
template<Serializable T>
void save(const T& obj, const std::string& path);
```
### Anti-Patterns
- Unconstrained templates in visible namespaces (T.47)
- Specializing function templates instead of overloading (T.144)
- Template metaprogramming where `constexpr` suffices (T.120)
- `typedef` instead of `using` (T.43)
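The T.43 point can be sketched briefly; alias names here are illustrative. `using` reads left to right and, unlike `typedef`, supports alias templates:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// T.43: prefer `using` over `typedef`
using Row = std::vector<std::string>;
// typedef std::vector<std::string> Row;  // legacy equivalent, avoid

// Alias templates have no typedef equivalent at all
template <typename V>
using NamedMap = std::map<std::string, V>;
```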
## Standard Library (SL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SL.1** | Use libraries wherever possible |
| **SL.2** | Prefer the standard library to other libraries |
| **SL.con.1** | Prefer `std::array` or `std::vector` over C arrays |
| **SL.con.2** | Prefer `std::vector` by default |
| **SL.str.1** | Use `std::string` to own character sequences |
| **SL.str.2** | Use `std::string_view` to refer to character sequences |
| **SL.io.50** | Avoid `endl` (use `'\n'` -- `endl` forces a flush) |
```cpp
// SL.con.1 + SL.con.2: Prefer vector/array over C arrays
const std::array<int, 4> fixed_data{1, 2, 3, 4};
std::vector<std::string> dynamic_data;
// SL.str.1 + SL.str.2: string owns, string_view observes
std::string build_greeting(std::string_view name) {
return "Hello, " + std::string(name) + "!";
}
// SL.io.50: Use '\n' not endl
std::cout << "result: " << value << '\n';
```
## Enumerations (Enum.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Enum.1** | Prefer enumerations over macros |
| **Enum.3** | Prefer `enum class` over plain `enum` |
| **Enum.5** | Don't use ALL_CAPS for enumerators |
| **Enum.6** | Avoid unnamed enumerations |
```cpp
// Enum.3 + Enum.5: Scoped enum, no ALL_CAPS
enum class Color { red, green, blue };
enum class LogLevel { debug, info, warning, error };
// BAD: plain enum leaks names, ALL_CAPS clashes with macros
enum { RED, GREEN, BLUE }; // Enum.3 + Enum.5 + Enum.6 violation
#define MAX_SIZE 100 // Enum.1 violation -- use constexpr
```
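The macro violation above has a direct fix; a minimal sketch of the Enum.1-style replacement (names are illustrative):

```cpp
#include <cassert>

// Enum.1: replace object-like macros with typed, scoped constants
// #define MAX_SIZE 100        // macro: untyped, ignores scope
constexpr int max_size = 100;  // typed, scoped, usable at compile time
static_assert(max_size == 100);
```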
## Source Files & Naming (SF.*, NL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SF.1** | Use `.cpp` for code files and `.h` for interface files |
| **SF.7** | Don't write `using namespace` at global scope in a header |
| **SF.8** | Use `#include` guards for all `.h` files |
| **SF.11** | Header files should be self-contained |
| **NL.5** | Avoid encoding type information in names (no Hungarian notation) |
| **NL.8** | Use a consistent naming style |
| **NL.9** | Use ALL_CAPS for macro names only |
| **NL.10** | Prefer `underscore_style` names |
### Header Guard
```cpp
// SF.8: Include guard (or #pragma once)
#ifndef PROJECT_MODULE_WIDGET_H
#define PROJECT_MODULE_WIDGET_H
// SF.11: Self-contained -- include everything this header needs
#include <string>
#include <vector>
namespace project::module {
class Widget {
public:
explicit Widget(std::string name);
const std::string& name() const;
private:
std::string name_;
};
} // namespace project::module
#endif // PROJECT_MODULE_WIDGET_H
```
### Naming Conventions
```cpp
// NL.8 + NL.10: Consistent underscore_style
namespace my_project {
constexpr int max_buffer_size = 4096; // NL.9: not ALL_CAPS (it's not a macro)
class tcp_connection { // underscore_style class
public:
void send_message(std::string_view msg);
bool is_connected() const;
private:
std::string host_; // trailing underscore for members
int port_;
};
} // namespace my_project
```
### Anti-Patterns
- `using namespace std;` in a header at global scope (SF.7)
- Headers that depend on inclusion order (SF.10, SF.11)
- Hungarian notation like `strName`, `iCount` (NL.5)
- ALL_CAPS for anything other than macros (NL.9)
## Performance (Per.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Per.1** | Don't optimize without reason |
| **Per.2** | Don't optimize prematurely |
| **Per.6** | Don't make claims about performance without measurements |
| **Per.7** | Design to enable optimization |
| **Per.10** | Rely on the static type system |
| **Per.11** | Move computation from run time to compile time |
| **Per.19** | Access memory predictably |
### Guidelines
```cpp
// Per.11: Compile-time computation where possible
constexpr auto lookup_table = [] {
std::array<int, 256> table{};
for (int i = 0; i < 256; ++i) {
table[i] = i * i;
}
return table;
}();
// Per.19: Prefer contiguous data for cache-friendliness
std::vector<Point> points; // GOOD: contiguous
std::vector<std::unique_ptr<Point>> indirect_points; // BAD: pointer chasing
```
### Anti-Patterns
- Optimizing without profiling data (Per.1, Per.6)
- Choosing "clever" low-level code over clear abstractions (Per.4, Per.5)
- Ignoring data layout and cache behavior (Per.19)
## Quick Reference Checklist
Before marking C++ work complete:
- [ ] No raw `new`/`delete` -- use smart pointers or RAII (R.11)
- [ ] Objects initialized at declaration (ES.20)
- [ ] Variables are `const`/`constexpr` by default (Con.1, ES.25)
- [ ] Member functions are `const` where possible (Con.2)
- [ ] `enum class` instead of plain `enum` (Enum.3)
- [ ] `nullptr` instead of `0`/`NULL` (ES.47)
- [ ] No narrowing conversions (ES.46)
- [ ] No C-style casts (ES.48)
- [ ] Single-argument constructors are `explicit` (C.46)
- [ ] Rule of Zero or Rule of Five applied (C.20, C.21)
- [ ] Base class destructors are public virtual or protected non-virtual (C.35)
- [ ] Templates are constrained with concepts (T.10)
- [ ] No `using namespace` in headers at global scope (SF.7)
- [ ] Headers have include guards and are self-contained (SF.8, SF.11)
- [ ] Locks use RAII (`scoped_lock`/`lock_guard`) (CP.20)
- [ ] Exceptions are custom types, thrown by value, caught by reference (E.14, E.15)
- [ ] `'\n'` instead of `std::endl` (SL.io.50)
- [ ] No magic numbers (ES.45)


@@ -25,7 +25,7 @@ async function main() {
// Track tool call count (increment in a temp file)
// Use a session-specific counter file based on session ID from environment
// or parent PID as fallback
-  const sessionId = process.env.CLAUDE_SESSION_ID || String(process.ppid) || 'default';
+  const sessionId = process.env.CLAUDE_SESSION_ID || 'default';
const counterFile = path.join(getTempDir(), `claude-tool-count-${sessionId}`);
const rawThreshold = parseInt(process.env.COMPACT_THRESHOLD || '50', 10);
const threshold = Number.isFinite(rawThreshold) && rawThreshold > 0 && rawThreshold <= 10000
@@ -44,7 +44,11 @@ async function main() {
     const bytesRead = fs.readSync(fd, buf, 0, 64, 0);
     if (bytesRead > 0) {
       const parsed = parseInt(buf.toString('utf8', 0, bytesRead).trim(), 10);
-      count = Number.isFinite(parsed) ? parsed + 1 : 1;
+      // Clamp to reasonable range — corrupted files could contain huge values
+      // that pass Number.isFinite() (e.g., parseInt('9'.repeat(30)) => 1e+29)
+      count = (Number.isFinite(parsed) && parsed > 0 && parsed <= 1000000)
+        ? parsed + 1
+        : 1;
     }
     // Truncate and write new value
     fs.ftruncateSync(fd, 0);
@@ -62,8 +66,8 @@ async function main() {
     log(`[StrategicCompact] ${threshold} tool calls reached - consider /compact if transitioning phases`);
   }
-  // Suggest at regular intervals after threshold
-  if (count > threshold && count % 25 === 0) {
+  // Suggest at regular intervals after threshold (every 25 calls from threshold)
+  if (count > threshold && (count - threshold) % 25 === 0) {
     log(`[StrategicCompact] ${count} tool calls - good checkpoint for /compact if context is stale`);
   }


@@ -0,0 +1,18 @@
steps:
- name: Setup Go environment
uses: actions/setup-go@v6.2.0
with:
# The Go version to download (if necessary) and use. Supports semver spec and ranges. Be sure to enclose this option in single quotation marks.
go-version: # optional
# Path to the go.mod, go.work, .go-version, or .tool-versions file.
go-version-file: # optional
# Set this option to true if you want the action to always check for the latest available version that satisfies the version spec
check-latest: # optional
# Used to pull Go distributions from go-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
token: # optional, default is ${{ github.server_url == 'https://github.com' && github.token || '' }}
# Used to specify whether caching is needed. Set to true, if you'd like to enable caching.
cache: # optional, default is true
# Used to specify the path to a dependency file - go.sum
cache-dependency-path: # optional
# Target architecture for Go to use. Examples: x86, x64. Will use system architecture by default.
architecture: # optional


@@ -1,34 +0,0 @@
name: AgentShield Security Scan
on:
push:
branches: [main]
pull_request:
branches: [main]
# Prevent duplicate runs
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# Minimal permissions
permissions:
contents: read
jobs:
agentshield:
name: AgentShield Scan
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Run AgentShield Security Scan
uses: affaan-m/agentshield@v1
with:
path: '.'
min-severity: 'medium'
format: 'terminal'
fail-on-findings: 'false'

.gitignore (vendored, 10 changed lines)

@@ -21,6 +21,16 @@ Thumbs.db
 # Node
 node_modules/
 # Build output
 dist/
+# Python
+__pycache__/
+*.pyc
+# Task files (Claude Code teams)
+tasks/
+# Personal configs (if any)
+personal/
+private/


@@ -1,9 +1,24 @@
# OpenCode ECC Plugin
> ⚠️ This README is specific to OpenCode usage.
> If you installed ECC via npm (e.g. `npm install opencode-ecc`), refer to the root README instead.
Everything Claude Code (ECC) plugin for OpenCode - agents, commands, hooks, and skills.
## Installation
## Installation Overview
There are two ways to use Everything Claude Code (ECC):
1. **npm package (recommended for most users)**
Install via npm/bun/yarn and use the `ecc-install` CLI to set up rules and agents.
2. **Direct clone / plugin mode**
Clone the repository and run OpenCode directly inside it.
Choose the method that matches your workflow below.
### Option 1: npm Package
```bash
@@ -17,6 +32,11 @@ Add to your `opencode.json`:
"plugin": ["ecc-universal"]
}
```
After installation, the `ecc-install` CLI becomes available:
```bash
npx ecc-install typescript
```
### Option 2: Direct Use


@@ -2,11 +2,11 @@
* Everything Claude Code (ECC) Plugin for OpenCode
*
* This package provides a complete OpenCode plugin with:
* - 12 specialized agents (planner, architect, code-reviewer, etc.)
* - 24 commands (/plan, /tdd, /code-review, etc.)
* - 13 specialized agents (planner, architect, code-reviewer, etc.)
* - 31 commands (/plan, /tdd, /code-review, etc.)
* - Plugin hooks (auto-format, TypeScript check, console.log warning, etc.)
* - Custom tools (run-tests, check-coverage, security-audit)
* - 16 skills (coding-standards, security-review, tdd-workflow, etc.)
* - 37 skills (coding-standards, security-review, tdd-workflow, etc.)
*
* Usage:
*
@@ -39,7 +39,7 @@ export { ECCHooksPlugin, default } from "./plugins/index.js"
export * from "./plugins/index.js"
// Version export
export const VERSION = "1.0.0"
export const VERSION = "1.4.1"
// Plugin metadata
export const metadata = {
@@ -48,9 +48,9 @@ export const metadata = {
description: "Everything Claude Code plugin for OpenCode",
author: "affaan-m",
features: {
agents: 12,
commands: 24,
skills: 16,
agents: 13,
commands: 31,
skills: 37,
hookEvents: [
"file.edited",
"tool.execute.before",

.opencode/package-lock.json

@@ -0,0 +1,83 @@
{
  "name": "ecc-universal",
  "version": "1.4.1",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "ecc-universal",
      "version": "1.4.1",
      "license": "MIT",
      "devDependencies": {
        "@opencode-ai/plugin": "^1.0.0",
        "@types/node": "^20.0.0",
        "typescript": "^5.3.0"
      },
      "engines": {
        "node": ">=18.0.0"
      },
      "peerDependencies": {
        "@opencode-ai/plugin": ">=1.0.0"
      }
    },
    "node_modules/@opencode-ai/plugin": {
      "version": "1.1.53",
      "resolved": "https://registry.npmjs.org/@opencode-ai/plugin/-/plugin-1.1.53.tgz",
      "integrity": "sha512-9ye7Wz2kESgt02AUDaMea4hXxj6XhWwKAG8NwFhrw09Ux54bGaMJFt1eIS8QQGIMaD+Lp11X4QdyEg96etEBJw==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@opencode-ai/sdk": "1.1.53",
        "zod": "4.1.8"
      }
    },
    "node_modules/@opencode-ai/sdk": {
      "version": "1.1.53",
      "resolved": "https://registry.npmjs.org/@opencode-ai/sdk/-/sdk-1.1.53.tgz",
      "integrity": "sha512-RUIVnPOP1CyyU32FrOOYuE7Ge51lOBuhaFp2NSX98ncApT7ffoNetmwzqrhOiJQgZB1KrbCHLYOCK6AZfacxag==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@types/node": {
      "version": "20.19.33",
      "resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.33.tgz",
      "integrity": "sha512-Rs1bVAIdBs5gbTIKza/tgpMuG1k3U/UMJLWecIMxNdJFDMzcM5LOiLVRYh3PilWEYDIeUDv7bpiHPLPsbydGcw==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "undici-types": "~6.21.0"
      }
    },
    "node_modules/typescript": {
      "version": "5.9.3",
      "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz",
      "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
      "dev": true,
      "license": "Apache-2.0",
      "bin": {
        "tsc": "bin/tsc",
        "tsserver": "bin/tsserver"
      },
      "engines": {
        "node": ">=14.17"
      }
    },
    "node_modules/undici-types": {
      "version": "6.21.0",
      "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
      "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/zod": {
      "version": "4.1.8",
      "resolved": "https://registry.npmjs.org/zod/-/zod-4.1.8.tgz",
      "integrity": "sha512-5R1P+WwQqmmMIEACyzSvo4JXHY5WiAFHRMg+zBZKgKS+Q1viRa0C1hmUKtHltoIFKtIdki3pRxkmpP74jnNYHQ==",
      "dev": true,
      "license": "MIT",
      "funding": {
        "url": "https://github.com/sponsors/colinhacks"
      }
    }
  }
}


@@ -1,6 +1,6 @@
{
"name": "ecc-universal",
"version": "1.0.0",
"version": "1.4.1",
"description": "Everything Claude Code (ECC) plugin for OpenCode - agents, commands, hooks, and skills",
"main": "dist/index.js",
"types": "dist/index.d.ts",

CLAUDE.md

@@ -0,0 +1,60 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a **Claude Code plugin** - a collection of production-ready agents, skills, hooks, commands, rules, and MCP configurations. The project provides battle-tested workflows for software development using Claude Code.
## Running Tests
```bash
# Run all tests
node tests/run-all.js
# Run individual test files
node tests/lib/utils.test.js
node tests/lib/package-manager.test.js
node tests/hooks/hooks.test.js
```
## Architecture
The project is organized into several core components:
- **agents/** - Specialized subagents for delegation (planner, code-reviewer, tdd-guide, etc.)
- **skills/** - Workflow definitions and domain knowledge (coding standards, patterns, testing)
- **commands/** - Slash commands invoked by users (/tdd, /plan, /e2e, etc.)
- **hooks/** - Trigger-based automations (session persistence, pre/post-tool hooks)
- **rules/** - Always-follow guidelines (security, coding style, testing requirements)
- **mcp-configs/** - MCP server configurations for external integrations
- **scripts/** - Cross-platform Node.js utilities for hooks and setup
- **tests/** - Test suite for scripts and utilities
## Key Commands
- `/tdd` - Test-driven development workflow
- `/plan` - Implementation planning
- `/e2e` - Generate and run E2E tests
- `/code-review` - Quality review
- `/build-fix` - Fix build errors
- `/learn` - Extract patterns from sessions
- `/skill-create` - Generate skills from git history
## Development Notes
- Package manager detection: npm, pnpm, yarn, bun (configurable via `CLAUDE_PACKAGE_MANAGER` env var or project config)
- Cross-platform: Windows, macOS, Linux support via Node.js scripts
- Agent format: Markdown with YAML frontmatter (name, description, tools, model)
- Skill format: Markdown with clear sections for when to use, how it works, examples
- Hook format: JSON with matcher conditions and command/notification hooks
## Contributing
Follow the formats in CONTRIBUTING.md:
- Agents: Markdown with frontmatter (name, description, tools, model)
- Skills: Clear sections (When to Use, How It Works, Examples)
- Commands: Markdown with description frontmatter
- Hooks: JSON with matcher and hooks array
File naming: lowercase with hyphens (e.g., `python-reviewer.md`, `tdd-workflow.md`)


@@ -13,7 +13,7 @@
![Java](https://img.shields.io/badge/-Java-ED8B00?logo=openjdk&logoColor=white)
![Markdown](https://img.shields.io/badge/-Markdown-000000?logo=markdown&logoColor=white)
> **42K+ stars** | **5K+ forks** | **24 contributors** | **6 languages supported** | **Anthropic Hackathon Winner**
> **50K+ stars** | **6K+ forks** | **30 contributors** | **6 languages supported** | **Anthropic Hackathon Winner**
---
@@ -143,7 +143,7 @@ For manual install instructions see the README in the `rules/` folder.
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 13 agents, 37 skills, and 31 commands.
**That's it!** You now have access to 13 agents, 44 skills, and 32 commands.
---
@@ -222,6 +222,7 @@ everything-claude-code/
| |-- verification-loop/ # Continuous verification (Longform Guide)
| |-- golang-patterns/ # Go idioms and best practices
| |-- golang-testing/ # Go testing patterns, TDD, benchmarks
| |-- cpp-coding-standards/ # C++ coding standards from C++ Core Guidelines (NEW)
| |-- cpp-testing/ # C++ testing with GoogleTest, CMake/CTest (NEW)
| |-- django-patterns/ # Django patterns, models, views (NEW)
| |-- django-security/ # Django security best practices (NEW)
@@ -245,6 +246,12 @@ everything-claude-code/
| |-- deployment-patterns/ # CI/CD, Docker, health checks, rollbacks (NEW)
| |-- docker-patterns/ # Docker Compose, networking, volumes, container security (NEW)
| |-- e2e-testing/ # Playwright E2E patterns and Page Object Model (NEW)
| |-- content-hash-cache-pattern/ # SHA-256 content hash caching for file processing (NEW)
| |-- cost-aware-llm-pipeline/ # LLM cost optimization, model routing, budget tracking (NEW)
| |-- regex-vs-llm-structured-text/ # Decision framework: regex vs LLM for text parsing (NEW)
| |-- swift-actor-persistence/ # Thread-safe Swift data persistence with actors (NEW)
| |-- swift-protocol-di-testing/ # Protocol-based DI for testable Swift code (NEW)
| |-- search-first/ # Research-before-coding workflow (NEW)
|
|-- commands/ # Slash commands for quick execution
| |-- tdd.md # /tdd - Test-driven development
@@ -254,6 +261,7 @@ everything-claude-code/
| |-- build-fix.md # /build-fix - Fix build errors
| |-- refactor-clean.md # /refactor-clean - Dead code removal
| |-- learn.md # /learn - Extract patterns mid-session (Longform Guide)
| |-- learn-eval.md # /learn-eval - Extract, evaluate, and save patterns (NEW)
| |-- checkpoint.md # /checkpoint - Save verification state (Longform Guide)
| |-- verify.md # /verify - Run verification loop (Longform Guide)
| |-- setup-pm.md # /setup-pm - Configure package manager
@@ -486,6 +494,7 @@ This gives you instant access to all commands, agents, skills, and hooks.
> git clone https://github.com/affaan-m/everything-claude-code.git
>
> # Option A: User-level rules (applies to all projects)
> mkdir -p ~/.claude/rules
> cp -r everything-claude-code/rules/common/* ~/.claude/rules/
> cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # pick your stack
> cp -r everything-claude-code/rules/python/* ~/.claude/rules/
@@ -691,11 +700,12 @@ Each component is fully independent.
</details>
<details>
<summary><b>Does this work with Cursor / OpenCode?</b></summary>
<summary><b>Does this work with Cursor / OpenCode / Codex?</b></summary>
Yes. ECC is cross-platform:
- **Cursor**: Pre-translated configs in `.cursor/`. See [Cursor IDE Support](#cursor-ide-support).
- **OpenCode**: Full plugin support in `.opencode/`. See [OpenCode Support](#-opencode-support).
- **Codex**: First-class support with adapter drift guards and SessionStart fallback. See PR [#257](https://github.com/affaan-m/everything-claude-code/pull/257).
- **Claude Code**: Native — this is the primary target.
</details>
@@ -800,8 +810,8 @@ The configuration is automatically detected from `.opencode/opencode.json`.
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | ✅ 13 agents | ✅ 12 agents | **Claude Code leads** |
| Commands | ✅ 31 commands | ✅ 24 commands | **Claude Code leads** |
| Skills | ✅ 37 skills | ✅ 16 skills | **Claude Code leads** |
| Commands | ✅ 32 commands | ✅ 24 commands | **Claude Code leads** |
| Skills | ✅ 44 skills | ✅ 16 skills | **Claude Code leads** |
| Hooks | ✅ 3 phases | ✅ 20+ events | **OpenCode has more!** |
| Rules | ✅ 8 rules | ✅ 8 rules | **Full parity** |
| MCP Servers | ✅ Full | ✅ Full | **Full parity** |
@@ -821,7 +831,7 @@ OpenCode's plugin system is MORE sophisticated than Claude Code with 20+ event t
**Additional OpenCode events**: `file.edited`, `file.watcher.updated`, `message.updated`, `lsp.client.diagnostics`, `tui.toast.show`, and more.
### Available Commands (31)
### Available Commands (32)
| Command | Description |
|---------|-------------|
@@ -855,6 +865,7 @@ OpenCode's plugin system is MORE sophisticated than Claude Code with 20+ event t
| `/instinct-import` | Import instincts |
| `/instinct-export` | Export instincts |
| `/evolve` | Cluster instincts into skills |
| `/learn-eval` | Extract and evaluate patterns before saving |
| `/setup-pm` | Configure package manager |
### Plugin Installation


@@ -95,7 +95,7 @@ cp -r everything-claude-code/rules/* ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**Done!** You now have access to 13 agents, 37 skills, and 31 commands.
**Done!** You now have access to 13 agents, 43 skills, and 31 commands.
---

commands/learn-eval.md

@@ -0,0 +1,91 @@
---
description: Extract reusable patterns from the session, self-evaluate quality before saving, and determine the right save location (Global vs Project).
---
# /learn-eval - Extract, Evaluate, then Save
Extends `/learn` with a quality gate and save-location decision before writing any skill file.
## What to Extract
Look for:
1. **Error Resolution Patterns** — root cause + fix + reusability
2. **Debugging Techniques** — non-obvious steps, tool combinations
3. **Workarounds** — library quirks, API limitations, version-specific fixes
4. **Project-Specific Patterns** — conventions, architecture decisions, integration patterns
## Process
1. Review the session for extractable patterns
2. Identify the most valuable/reusable insight
3. **Determine save location:**
- Ask: "Would this pattern be useful in a different project?"
- **Global** (`~/.claude/skills/learned/`): Generic patterns usable across 2+ projects (bash compatibility, LLM API behavior, debugging techniques, etc.)
- **Project** (`.claude/skills/learned/` in current project): Project-specific knowledge (quirks of a particular config file, project-specific architecture decisions, etc.)
- When in doubt, choose Global (moving Global → Project is easier than the reverse)
4. Draft the skill file using this format:
```markdown
---
name: pattern-name
description: "Under 130 characters"
user-invocable: false
origin: auto-extracted
---
# [Descriptive Pattern Name]
**Extracted:** [Date]
**Context:** [Brief description of when this applies]
## Problem
[What problem this solves - be specific]
## Solution
[The pattern/technique/workaround - with code examples]
## When to Use
[Trigger conditions]
```
5. **Self-evaluate before saving** using this rubric:
| Dimension | 1 | 3 | 5 |
|-----------|---|---|---|
| Specificity | Abstract principles only, no code examples | Representative code example present | Rich examples covering all usage patterns |
| Actionability | Unclear what to do | Main steps are understandable | Immediately actionable, edge cases covered |
| Scope Fit | Too broad or too narrow | Mostly appropriate, some boundary ambiguity | Name, trigger, and content perfectly aligned |
| Non-redundancy | Nearly identical to another skill | Some overlap but unique perspective exists | Completely unique value |
| Coverage | Covers only a fraction of the target task | Main cases covered, common variants missing | Main cases, edge cases, and pitfalls covered |
- Score each dimension 1-5
- If any dimension scores 1-2, improve the draft and re-score until all dimensions are ≥ 3
- Show the user the scores table and the final draft
6. Ask user to confirm:
- Show: proposed save path + scores table + final draft
- Wait for explicit confirmation before writing
7. Save to the determined location
## Output Format for Step 5 (scores table)
| Dimension | Score | Rationale |
|-----------|-------|-----------|
| Specificity | N/5 | ... |
| Actionability | N/5 | ... |
| Scope Fit | N/5 | ... |
| Non-redundancy | N/5 | ... |
| Coverage | N/5 | ... |
| **Total** | **N/25** | |
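The pass/fail gate in step 5 ("improve until every dimension is ≥ 3") is easy to mechanize. A minimal sketch, assuming scores are collected into a name-to-score mapping (the function name and example values here are illustrative, not part of the command itself):

```python
# Quality gate for the learn-eval rubric: every dimension must score >= 3.
RUBRIC = ("Specificity", "Actionability", "Scope Fit", "Non-redundancy", "Coverage")

def passes_gate(scores: dict[str, int], threshold: int = 3) -> bool:
    """Return True only when every rubric dimension meets the threshold."""
    return all(scores[dim] >= threshold for dim in RUBRIC)

draft_scores = {
    "Specificity": 4, "Actionability": 3, "Scope Fit": 5,
    "Non-redundancy": 3, "Coverage": 2,  # Coverage fails the gate
}
print(passes_gate(draft_scores))  # Coverage < 3, so the draft needs another pass
```

A draft that scores 1-2 anywhere loops back to revision; only an all-≥3 draft proceeds to the user-confirmation step.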
## Notes
- Don't extract trivial fixes (typos, simple syntax errors)
- Don't extract one-time issues (specific API outages, etc.)
- Focus on patterns that will save time in future sessions
- Keep skills focused — one pattern per skill
- If Coverage score is low, add related variants before saving


@@ -140,7 +140,7 @@ cp -r everything-claude-code/rules/golang/* ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**Done!** You now have access to 13 agents, 37 skills, and 31 commands.
**Done!** You now have access to 13 agents, 43 skills, and 31 commands.
---
@@ -454,6 +454,7 @@ Duplicate hooks file detected: ./hooks/hooks.json resolves to already-loaded fil
> git clone https://github.com/affaan-m/everything-claude-code.git
>
> # Option A: user-level rules (applies to all projects)
> mkdir -p ~/.claude/rules
> cp -r everything-claude-code/rules/common/* ~/.claude/rules/
> cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # pick your stack
> cp -r everything-claude-code/rules/python/* ~/.claude/rules/


@@ -1,6 +1,18 @@
# Contributing to Everything Claude Code
Thank you for wanting to contribute! This repository aims to be a community resource for Claude Code users.
Thanks for wanting to contribute! This repository is a community resource for Claude Code users.
## Table of Contents
* [What We're Looking For](#我们寻找什么)
* [Quick Start](#快速开始)
* [Contributing Skills](#贡献技能)
* [Contributing Agents](#贡献智能体)
* [Contributing Hooks](#贡献钩子)
* [Contributing Commands](#贡献命令)
* [Pull Request Process](#拉取请求流程)
***
## What We're Looking For
@@ -21,16 +33,6 @@
* Framework patterns
* Testing strategies
* Architecture guides
* Domain-specific knowledge
### Commands
Slash commands that invoke useful workflows:
* Deployment commands
* Testing commands
* Documentation commands
* Code generation commands
### Hooks
@@ -41,124 +43,365 @@
* Validation hooks
* Notification hooks
### Rules
### Commands
Always-follow guidelines:
Slash commands that invoke useful workflows:
* Security rules
* Code style rules
* Testing requirements
* Naming conventions
### MCP Configs
New or improved MCP server configurations:
* Database integrations
* Cloud provider MCPs
* Monitoring tools
* Communication tools
* Deployment commands
* Testing commands
* Code generation commands
***
## How to Contribute
### 1. Fork the repository
## Quick Start
```bash
git clone https://github.com/YOUR_USERNAME/everything-claude-code.git
# 1. Fork and clone
gh repo fork affaan-m/everything-claude-code --clone
cd everything-claude-code
# 2. Create a branch
git checkout -b feat/my-contribution
# 3. Add your contribution (see sections below)
# 4. Test locally
cp -r skills/my-skill ~/.claude/skills/ # for skills
# Then test with Claude Code
# 5. Submit PR
git add . && git commit -m "feat: add my-skill" && git push
```
### 2. Create a branch
***
```bash
git checkout -b add-python-reviewer
## Contributing Skills
Skills are knowledge modules that Claude Code loads based on context.
### Directory Structure
```
skills/
└── your-skill-name/
    └── SKILL.md
```
### 3. Add your contribution
Put your files in the appropriate directory:
* `agents/` for new agents
* `skills/` for skills (a single .md file or a directory)
* `commands/` for slash commands
* `rules/` for rule files
* `hooks/` for hook configurations
* `mcp-configs/` for MCP server configurations
### 4. Follow the formats
**Agents** should include frontmatter:
### SKILL.md Template
```markdown
---
name: agent-name
description: What it does
tools: Read, Grep, Glob, Bash
name: your-skill-name
description: Brief description shown in skill list
---
# Your Skill Title
A brief overview of what this skill covers.
## Core Concepts
Explain the key patterns and guidelines.
## Code Examples
```typescript
// Include practical, tested examples
function example() {
  // well-commented code
}
```
## Best Practices
- Actionable guidelines
- Dos and don'ts
- Common pitfalls to avoid
## When to Use
Describe the scenarios where this skill applies.
```
### Skill Checklist
* \[ ] Focused on a single domain/technology
* \[ ] Includes practical code examples
* \[ ] Under 500 lines
* \[ ] Uses clear section headings
* \[ ] Tested with Claude Code
### Skill Examples
| Skill | Purpose |
|-------|---------|
| `coding-standards/` | TypeScript/JavaScript patterns |
| `frontend-patterns/` | React and Next.js best practices |
| `backend-patterns/` | API and database patterns |
| `security-review/` | Security checklists |
***
## Contributing Agents
Agents are specialized assistants invoked via the Task tool.
### File Location
```
agents/your-agent-name.md
```
### Agent Template
```markdown
---
name: your-agent-name
description: What this agent does and when Claude should invoke it. Be specific!
tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
model: sonnet
---
Instructions here...
You are a [role] expert.
## Your Role
- Primary responsibility
- Secondary responsibility
- What you don't do (boundaries)
## Workflow
### Step 1: Understand
How you approach the task.
### Step 2: Execute
How you carry out the work.
### Step 3: Verify
How you validate results.
## Output Format
What you return to the user.
## Examples
### Example: [scenario]
Input: [what the user provided]
Action: [what you did]
Output: [what you returned]
```
**Skills** should be clear and actionable:
### Agent Fields
```markdown
# Skill Name
| Field | Description | Options |
|-------|-------------|---------|
| `name` | Lowercase, hyphenated | `code-reviewer` |
| `description` | Used to decide when to invoke | Be specific! |
| `tools` | Only what's necessary | `Read, Write, Edit, Bash, Grep, Glob, WebFetch, Task` |
| `model` | Complexity level | `haiku` (simple), `sonnet` (coding), `opus` (complex) |
## When to Use
### Agent Examples
...
| Agent | Purpose |
|-------|---------|
| `tdd-guide.md` | Test-driven development |
| `code-reviewer.md` | Code review |
| `security-reviewer.md` | Security scanning |
| `build-error-resolver.md` | Fixing build errors |
## How It Works
***
...
## Contributing Hooks
## Examples
Hooks are automated behaviors triggered by Claude Code events.
...
### File Location
```
hooks/hooks.json
```
**Commands** should explain what they do:
### Hook Types
```markdown
---
description: Brief description of command
---
| Type | Trigger | Use Case |
|------|---------|----------|
| `PreToolUse` | Before a tool runs | Validate, warn, block |
| `PostToolUse` | After a tool runs | Format, lint, notify |
| `SessionStart` | At session start | Load context |
| `Stop` | At session end | Cleanup, audit |
# Command Name
Detailed instructions...
```
**Hooks** should include a description:
### Hook Format
```json
{
"matcher": "...",
"hooks": [...],
"description": "What this hook does"
"hooks": {
"PreToolUse": [
{
"matcher": "tool == \"Bash\" && tool_input.command matches \"rm -rf /\"",
"hooks": [
{
"type": "command",
"command": "echo '[Hook] BLOCKED: Dangerous command' && exit 1"
}
],
"description": "Block dangerous rm commands"
}
]
}
}
```
### 5. Test your contribution
### Matcher Syntax
Before submitting, make sure your configuration works in Claude Code.
```javascript
// Match specific tools
tool == "Bash"
tool == "Edit"
tool == "Write"
### 6. Submit a PR
// Match input patterns
tool_input.command matches "npm install"
tool_input.file_path matches "\\.tsx?$"
```bash
git add .
git commit -m "Add Python code reviewer agent"
git push origin add-python-reviewer
// Combine conditions
tool == "Bash" && tool_input.command matches "git push"
```
Then submit a PR that includes:
### Hook Examples
* What you added
* Why it's useful
* How you tested it
```json
// Block dev servers outside tmux
{
"matcher": "tool == \"Bash\" && tool_input.command matches \"npm run dev\"",
"hooks": [{"type": "command", "command": "echo 'Use tmux for dev servers' && exit 1"}],
"description": "Ensure dev servers run in tmux"
}
// Auto-format after editing TypeScript
{
"matcher": "tool == \"Edit\" && tool_input.file_path matches \"\\.tsx?$\"",
"hooks": [{"type": "command", "command": "npx prettier --write \"$file_path\""}],
"description": "Format TypeScript files after edit"
}
// Warn before git push
{
"matcher": "tool == \"Bash\" && tool_input.command matches \"git push\"",
"hooks": [{"type": "command", "command": "echo '[Hook] Review changes before pushing'"}],
"description": "Reminder to review before push"
}
```
### Hook Checklist
* \[ ] Matcher is specific (not overly broad)
* \[ ] Includes clear error/info messages
* \[ ] Uses correct exit codes (`exit 1` blocks, `exit 0` allows)
* \[ ] Thoroughly tested
* \[ ] Has a description
***
## Contributing Commands
Commands are actions users invoke via `/command-name`.
### File Location
```
commands/your-command.md
```
### Command Template
```markdown
---
description: Brief description shown in /help
---
# Command Name
## Purpose
What this command does.
## Usage
```
/your-command [args]
```
## Workflow
1. First step
2. Second step
3. Final step
## Output
What the user receives.
```
### Command Examples
| Command | Purpose |
|---------|---------|
| `commit.md` | Create git commits |
| `code-review.md` | Review code changes |
| `tdd.md` | TDD workflow |
| `e2e.md` | E2E testing |
***
## Pull Request Process
### 1. PR Title Format
```
feat(skills): add rust-patterns skill
feat(agents): add api-designer agent
feat(hooks): add auto-format hook
fix(skills): update React patterns
docs: improve contributing guide
```
### 2. PR Description
```markdown
## Summary
What you're adding and why.
## Type
- [ ] Skill
- [ ] Agent
- [ ] Hook
- [ ] Command
## Testing
How you tested this.
## Checklist
- [ ] Follows the format guidelines
- [ ] Tested with Claude Code
- [ ] No sensitive information (API keys, paths)
- [ ] Clear description
```
### 3. Review Process
1. Maintainers review within 48 hours
2. Address feedback if requested
3. Once approved, merged to main
***
@@ -166,34 +409,34 @@ git push origin add-python-reviewer
### Do
* Keep configs focused and modular
* Keep contributions focused and modular
* Include clear descriptions
* Test before submitting
* Follow existing patterns
* Document any dependencies
* Document dependencies
### Don't
* Include sensitive data (API keys, tokens, paths)
* Add overly complex or niche configs
* Submit untested configs
* Create duplicate functionality
* Add configs that require a specific paid service with no alternative
* Submit untested contributions
* Duplicate existing functionality
***
## File Naming
* Use lowercase with hyphens: `python-reviewer.md`
* Be descriptive: `tdd-workflow.md`, not `workflow.md`
* Make agent/skill names match their filenames
* Use lowercase and hyphens: `python-reviewer.md`
* Be descriptive: `tdd-workflow.md` rather than `workflow.md`
* Names match filenames
***
## Questions?
Open an issue or reach out on X: [@affaanmustafa](https://x.com/affaanmustafa)
* **Issues:** [github.com/affaan-m/everything-claude-code/issues](https://github.com/affaan-m/everything-claude-code/issues)
* **X/Twitter:** [@affaanmustafa](https://x.com/affaanmustafa)
***
Thank you for contributing! Let's build a great resource together.
Thank you for contributing! Let's build an excellent resource together.


@@ -1,21 +1,27 @@
**Languages:** English | [繁體中文](docs/zh-TW/README.md) | [简体中文](docs/zh-CN/README.md)
**Languages:** English | [繁體中文](../zh-TW/README.md) | [简体中文](README.md)
# Everything Claude Code
[![Stars](https://img.shields.io/github/stars/affaan-m/everything-claude-code?style=flat)](https://github.com/affaan-m/everything-claude-code/stargazers)
[![Forks](https://img.shields.io/github/forks/affaan-m/everything-claude-code?style=flat)](https://github.com/affaan-m/everything-claude-code/network/members)
[![Contributors](https://img.shields.io/github/contributors/affaan-m/everything-claude-code?style=flat)](https://github.com/affaan-m/everything-claude-code/graphs/contributors)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
![Shell](https://img.shields.io/badge/-Shell-4EAA25?logo=gnu-bash&logoColor=white)
![TypeScript](https://img.shields.io/badge/-TypeScript-3178C6?logo=typescript&logoColor=white)
![Python](https://img.shields.io/badge/-Python-3776AB?logo=python&logoColor=white)
![Go](https://img.shields.io/badge/-Go-00ADD8?logo=go&logoColor=white)
![Java](https://img.shields.io/badge/-Java-ED8B00?logo=openjdk&logoColor=white)
![Markdown](https://img.shields.io/badge/-Markdown-000000?logo=markdown&logoColor=white)
> **42K+ stars** | **5K+ forks** | **24 contributors** | **6 languages supported**
***
<div align="center">
**🌐 Language / 语言 / 語言**
[**English**](README.md) | [简体中文](README.zh-CN.md) | [繁體中文](docs/zh-TW/README.md)
[**English**](../../README.md) | [简体中文](../../README.zh-CN.md) | [繁體中文](../../docs/zh-TW/README.md)
</div>
@@ -61,6 +67,38 @@
***
## What's New
### v1.4.1 — Bug Fixes (February 2026)
* **Fixed instinct-import content loss** — `parse_instinct_file()` silently dropped everything after the frontmatter (the Action, Evidence, and Examples sections) during `/instinct-import`. Fixed by community contributor @ericcai0814 ([#148](https://github.com/affaan-m/everything-claude-code/issues/148), [#161](https://github.com/affaan-m/everything-claude-code/pull/161))
### v1.4.0 — Multilingual Rules, Install Wizard & PM2 (February 2026)
* **Interactive install wizard** — the new `configure-ecc` skill provides guided setup with merge/overwrite detection
* **PM2 & multi-agent orchestration** — 6 new commands (`/pm2`, `/multi-plan`, `/multi-execute`, `/multi-backend`, `/multi-frontend`, `/multi-workflow`) for managing complex multi-service workflows
* **Multilingual rules architecture** — rules reorganized from flat files into `common/` + `typescript/` + `python/` + `golang/` directories. Install only the languages you need
* **Chinese (zh-CN) translation** — complete translation of all agents, commands, skills, and rules (80+ files)
* **GitHub Sponsors support** — sponsor the project via GitHub Sponsors
* **Enhanced CONTRIBUTING.md** — detailed PR templates for every contribution type
### v1.3.0 — OpenCode Plugin Support (February 2026)
* **Full OpenCode integration** — 12 agents, 24 commands, 16 skills, with hook support (20+ event types) via OpenCode's plugin system
* **3 native custom tools** — run-tests, check-coverage, security-audit
* **LLM docs** — `llms.txt` for comprehensive OpenCode documentation
### v1.2.0 — Unified Commands and Skills (February 2026)
* **Python/Django support** — Django patterns, security, TDD, and verification skills
* **Java Spring Boot skills** — patterns, security, TDD, and verification for Spring Boot
* **Session management** — `/sessions` command to view session history
* **Continuous learning v2** — instinct-based learning with confidence scoring, import/export, and evolution
For the full changelog, see [Releases](https://github.com/affaan-m/everything-claude-code/releases).
***
## 🚀 Quick Start
Get up and running in 2 minutes:
@@ -83,8 +121,13 @@
# Clone the repo first
git clone https://github.com/affaan-m/everything-claude-code.git
# Copy rules (applies to all projects)
cp -r everything-claude-code/rules/* ~/.claude/rules/
# Install common rules (required)
cp -r everything-claude-code/rules/common/* ~/.claude/rules/
# Install language-specific rules (pick your stack)
cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/
cp -r everything-claude-code/rules/python/* ~/.claude/rules/
cp -r everything-claude-code/rules/golang/* ~/.claude/rules/
```
### Step 3: Start Using
@@ -97,7 +140,7 @@ cp -r everything-claude-code/rules/* ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 15+ agents, 30+ skills, and 20+ commands.
**That's it!** You now have access to 15+ agents, 30+ skills, and 30+ commands.
***
@@ -156,23 +199,38 @@ everything-claude-code/
| |-- e2e-runner.md # Playwright E2E testing
| |-- refactor-cleaner.md # Dead code cleanup
| |-- doc-updater.md # Documentation sync
| |-- go-reviewer.md # Go code review (NEW)
| |-- go-build-resolver.md # Go build error fixes (NEW)
| |-- go-reviewer.md # Go code review
| |-- go-build-resolver.md # Go build error fixes
| |-- python-reviewer.md # Python code review (NEW)
| |-- database-reviewer.md # Database/Supabase review (NEW)
|
|-- skills/ # Workflow definitions and domain knowledge
| |-- coding-standards/ # Per-language best practices
| |-- backend-patterns/ # API, database, caching patterns
| |-- frontend-patterns/ # React, Next.js patterns
| |-- continuous-learning/ # Auto-extract patterns from sessions (Longform Guide)
| |-- continuous-learning-v2/ # Instinct-based learning with confidence scoring
| |-- continuous-learning-v2/ # Instinct-based learning, confidence scoring
| |-- iterative-retrieval/ # Progressive context refinement for subagents
| |-- strategic-compact/ # Manual compaction suggestions (Longform Guide)
| |-- tdd-workflow/ # TDD methodology
| |-- security-review/ # Security checklists
| |-- eval-harness/ # Verification-loop evaluation (Longform Guide)
| |-- verification-loop/ # Continuous verification (Longform Guide)
| |-- golang-patterns/ # Go idioms and best practices (NEW)
| |-- golang-testing/ # Go testing patterns, TDD, benchmarks (NEW)
| |-- golang-patterns/ # Go idioms and best practices
| |-- golang-testing/ # Go testing patterns, TDD, benchmarks
| |-- cpp-testing/ # C++ testing with GoogleTest, CMake/CTest (NEW)
| |-- django-patterns/ # Django patterns, models, views (NEW)
| |-- django-security/ # Django security best practices (NEW)
| |-- django-tdd/ # Django TDD workflow (NEW)
| |-- django-verification/ # Django verification loop (NEW)
| |-- python-patterns/ # Python idioms and best practices (NEW)
| |-- python-testing/ # Python testing with pytest (NEW)
| |-- springboot-patterns/ # Java Spring Boot patterns (NEW)
| |-- springboot-security/ # Spring Boot security (NEW)
| |-- springboot-tdd/ # Spring Boot TDD (NEW)
| |-- springboot-verification/ # Spring Boot verification flow (NEW)
| |-- configure-ecc/ # Interactive install wizard (NEW)
| |-- security-scan/ # AgentShield security audit integration (NEW)
|
|-- commands/ # Slash commands for quick execution
| |-- tdd.md # /tdd - Test-driven development
@@ -192,15 +250,28 @@ everything-claude-code/
| |-- instinct-status.md # /instinct-status - View learned instincts (NEW)
| |-- instinct-import.md # /instinct-import - Import instincts (NEW)
| |-- instinct-export.md # /instinct-export - Export instincts (NEW)
| |-- evolve.md # /evolve - Cluster instincts into skills (NEW)
| |-- evolve.md # /evolve - Cluster instincts into skills
| |-- pm2.md # /pm2 - PM2 service lifecycle management (NEW)
| |-- multi-plan.md # /multi-plan - Multi-agent task breakdown (NEW)
| |-- multi-execute.md # /multi-execute - Orchestrated multi-agent workflows (NEW)
| |-- multi-backend.md # /multi-backend - Backend multi-service orchestration (NEW)
| |-- multi-frontend.md # /multi-frontend - Frontend multi-service orchestration (NEW)
| |-- multi-workflow.md # /multi-workflow - General multi-service workflows (NEW)
|
|-- rules/ # Must-follow rules (copy to ~/.claude/rules/)
| |-- security.md # Mandatory security checks
| |-- coding-style.md # Immutability, file organization
| |-- testing.md # TDD, 80% coverage requirement
| |-- git-workflow.md # Commit format and PR flow
| |-- agents.md # When to delegate to subagents
| |-- performance.md # Model selection and context management
| |-- README.md # Structure overview and install guide
| |-- common/ # Language-agnostic universal principles
| | |-- coding-style.md # Immutability and file organization
| | |-- git-workflow.md # Commit format and PR flow
| | |-- testing.md # TDD, 80% coverage requirement
| | |-- performance.md # Model selection and context management
| | |-- patterns.md # Design patterns and project skeletons
| | |-- hooks.md # Hook architecture and TodoWrite
| | |-- agents.md # When to delegate to subagents
| | |-- security.md # Mandatory security checks
| |-- typescript/ # TypeScript / JavaScript specific
| |-- python/ # Python specific
| |-- golang/ # Go specific
|
|-- hooks/ # Trigger-based automations
| |-- hooks.json # All hook configs (PreToolUse, PostToolUse, Stop, etc.)
@@ -209,7 +280,7 @@ everything-claude-code/
|
|-- scripts/ # Cross-platform Node.js scripts (NEW)
| |-- lib/ # Shared utilities
| | |-- utils.js # Cross-platform file / path / system utilities
| | |-- utils.js # Cross-platform file/path/system utilities
| | |-- package-manager.js # Package manager detection and selection
| |-- hooks/ # Hook implementations
| | |-- session-start.js # Load context at session start
@@ -227,7 +298,7 @@ everything-claude-code/
|-- contexts/ # Dynamic system-prompt injection contexts (Longform Guide)
| |-- dev.md # Development-mode context
| |-- review.md # Code-review-mode context
| |-- research.md # Research / exploration mode context
| |-- research.md # Research/exploration mode context
|
|-- examples/ # Example configs and sessions
| |-- CLAUDE.md # Example project-level configuration
@@ -277,6 +348,30 @@ everything-claude-code/
* **Instinct collections** - for continuous-learning-v2
* **Pattern extraction** - learns from your commit history
### AgentShield — Security Auditor
Scans your Claude Code configuration for vulnerabilities, misconfigurations, and injection risks.
```bash
# Quick scan (no install needed)
npx ecc-agentshield scan
# Auto-fix safe issues
npx ecc-agentshield scan --fix
# Deep analysis with Opus 4.6
npx ecc-agentshield scan --opus --stream
# Generate secure config from scratch
npx ecc-agentshield init
```
Checks CLAUDE.md, settings.json, MCP servers, hooks, and agent definitions. Produces a security grade (A-F) with actionable findings.
Run it with `/security-scan` inside Claude Code, or add it to CI via the [GitHub Action](https://github.com/affaan-m/agentshield).
[GitHub](https://github.com/affaan-m/agentshield) | [npm](https://www.npmjs.com/package/ecc-agentshield)
### 🧠 Continuous Learning v2
An instinct-based learning system that automatically learns your patterns:
@@ -361,11 +456,16 @@ Duplicate hooks file detected: ./hooks/hooks.json resolves to already-loaded fil
> git clone https://github.com/affaan-m/everything-claude-code.git
>
> # Option A: user-level rules (applies to all projects)
> cp -r everything-claude-code/rules/* ~/.claude/rules/
> mkdir -p ~/.claude/rules
> cp -r everything-claude-code/rules/common/* ~/.claude/rules/
> cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # pick your stack
> cp -r everything-claude-code/rules/python/* ~/.claude/rules/
> cp -r everything-claude-code/rules/golang/* ~/.claude/rules/
>
> # 选项 B项目级规则仅适用于当前项目
> mkdir -p .claude/rules
> cp -r everything-claude-code/rules/common/* .claude/rules/
> cp -r everything-claude-code/rules/typescript/* .claude/rules/ # 选择您的技术栈
> ```
***
# Copy agents to your Claude config
cp everything-claude-code/agents/*.md ~/.claude/agents/
# Copy rules
# Copy rules (common + language-specific)
cp -r everything-claude-code/rules/common/* ~/.claude/rules/
cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # pick your stack
cp -r everything-claude-code/rules/python/* ~/.claude/rules/
cp -r everything-claude-code/rules/golang/* ~/.claude/rules/
# Copy commands
cp everything-claude-code/commands/*.md ~/.claude/commands/
### 规则
规则是始终遵循的指导原则,组织成 `common/`(与语言无关)+ 语言特定目录。
```
rules/
common/ # Universal principles (always install)
typescript/ # TS/JS specific patterns and tools
python/ # Python specific patterns and tools
golang/ # Go specific patterns and tools
```
有关安装和结构详情,请参阅 [`rules/README.md`](rules/README.md)。
***
## 🧪 运行测试
### 贡献想法
* 特定语言技能 (Rust, C#, Swift, Kotlin) — Go, Python, Java 已包含
* 特定框架配置 (Rails, Laravel, FastAPI, NestJS) — Django, Spring Boot 已包含
* DevOps 智能体 (Kubernetes, Terraform, AWS, Docker)
* 测试策略 (不同框架,视觉回归)
* 领域特定知识 (ML, 数据工程, 移动端)
***
## Cursor IDE 支持
ecc-universal 包含为 [Cursor IDE](https://cursor.com) 预翻译的配置。`.cursor/` 目录包含适用于 Cursor 格式的规则、智能体、技能、命令和 MCP 配置。
### 快速开始 (Cursor)
```bash
# Install the package
npm install ecc-universal
# Install for your language(s)
./install.sh --target cursor typescript
./install.sh --target cursor python golang
```
### 已翻译内容
| 组件 | Claude Code → Cursor | 对等性 |
|-----------|---------------------|--------|
| 规则 | 添加了 YAML frontmatter路径扁平化 | 完全 |
| 智能体 | 模型 ID 已扩展,工具 → 只读标志 | 完全 |
| 技能 | 无需更改 (标准相同) | 相同 |
| 命令 | 路径引用已更新multi-\* 已存根 | 部分 |
| MCP 配置 | 环境变量插值语法已更新 | 完全 |
| 钩子 | Cursor 中无等效项 | 参见替代方案 |
详情请参阅 [.cursor/README.md](.cursor/README.md),完整迁移指南请参阅 [.cursor/MIGRATION.md](.cursor/MIGRATION.md)。
***
## 🔌 OpenCode 支持
ECC 提供 **完整的 OpenCode 支持**,包括插件和钩子。
### 快速开始
```bash
# Install OpenCode
npm install -g opencode
# Run in the repository root
opencode
```
配置会自动从 `.opencode/opencode.json` 检测。
### 功能对等
| 特性 | Claude Code | OpenCode | 状态 |
|---------|-------------|----------|--------|
| 智能体 | ✅ 14 agents | ✅ 12 agents | **Claude Code 领先** |
| 命令 | ✅ 30 commands | ✅ 24 commands | **Claude Code 领先** |
| 技能 | ✅ 28 skills | ✅ 16 skills | **Claude Code 领先** |
| 钩子 | ✅ 3 phases | ✅ 20+ events | **OpenCode 更多!** |
| 规则 | ✅ 8 rules | ✅ 8 rules | **完全一致** |
| MCP Servers | ✅ Full | ✅ Full | **完全一致** |
| 自定义工具 | ✅ Via hooks | ✅ Native support | **OpenCode 更好** |
### 通过插件实现的钩子支持
OpenCode 的插件系统比 Claude Code 更复杂,有 20 多种事件类型:
| Claude Code 钩子 | OpenCode 插件事件 |
|-----------------|----------------------|
| PreToolUse | `tool.execute.before` |
| PostToolUse | `tool.execute.after` |
| Stop | `session.idle` |
| SessionStart | `session.created` |
| SessionEnd | `session.deleted` |
**额外的 OpenCode 事件**`file.edited``file.watcher.updated``message.updated``lsp.client.diagnostics``tui.toast.show` 等等。
### 可用命令 (24)
| 命令 | 描述 |
|---------|-------------|
| `/plan` | 创建实施计划 |
| `/tdd` | 强制执行 TDD 工作流 |
| `/code-review` | 审查代码变更 |
| `/security` | 运行安全审查 |
| `/build-fix` | 修复构建错误 |
| `/e2e` | 生成端到端测试 |
| `/refactor-clean` | 移除死代码 |
| `/orchestrate` | 多代理工作流 |
| `/learn` | 从会话中提取模式 |
| `/checkpoint` | 保存验证状态 |
| `/verify` | 运行验证循环 |
| `/eval` | 根据标准进行评估 |
| `/update-docs` | 更新文档 |
| `/update-codemaps` | 更新代码地图 |
| `/test-coverage` | 分析覆盖率 |
| `/go-review` | Go 代码审查 |
| `/go-test` | Go TDD 工作流 |
| `/go-build` | 修复 Go 构建错误 |
| `/skill-create` | 从 git 生成技能 |
| `/instinct-status` | 查看习得的本能 |
| `/instinct-import` | 导入本能 |
| `/instinct-export` | 导出本能 |
| `/evolve` | 将本能聚类为技能 |
| `/setup-pm` | 配置包管理器 |
### 插件安装
**选项 1直接使用**
```bash
cd everything-claude-code
opencode
```
**选项 2作为 npm 包安装**
```bash
npm install ecc-universal
```
然后添加到您的 `opencode.json`
```json
{
"plugin": ["ecc-universal"]
}
```
### 文档
* **迁移指南**`.opencode/MIGRATION.md`
* **OpenCode 插件 README**`.opencode/README.md`
* **整合的规则**`.opencode/instructions/INSTRUCTIONS.md`
* **LLM 文档**`llms.txt`(完整的 OpenCode 文档,供 LLM 使用)
***
## 🔗 链接
* **速查指南 (从此开始):** [Claude Code 万事速查指南](https://x.com/affaanmustafa/status/2012378465664745795)
* **详细指南 (进阶):** [Claude Code 万事详细指南](https://x.com/affaanmustafa/status/2014040193557471352)
* **关注:** [@affaanmustafa](https://x.com/affaanmustafa)
* **zenith.chat:** [zenith.chat](https://zenith.chat)
* **技能目录:** [awesome-agent-skills](https://github.com/JackyST0/awesome-agent-skills)
***
docs/zh-CN/SPONSORS.md
# 赞助者
感谢所有赞助本项目的各位!你们的支持让 ECC 生态系统持续成长。
## 企业赞助者
*成为 [企业赞助者](https://github.com/sponsors/affaan-m),将您的名字展示在此处*
## 商业赞助者
*成为 [商业赞助者](https://github.com/sponsors/affaan-m),将您的名字展示在此处*
## 团队赞助者
*成为 [团队赞助者](https://github.com/sponsors/affaan-m),将您的名字展示在此处*
## 个人赞助者
*成为 [赞助者](https://github.com/sponsors/affaan-m),将您的名字列在此处*
***
## 为什么要赞助?
您的赞助将帮助我们:
* **更快地交付** — 更多时间投入到工具和功能的开发上
* **保持免费** — 高级功能为所有人的免费层级提供资金支持
* **更好的支持** — 赞助者获得优先响应
* **影响路线图** — Pro+ 赞助者可以对功能进行投票
## 赞助等级
| 等级 | 价格 | 权益 |
|------|-------|----------|
| 支持者 | $5/月 | 名字出现在 README 中,早期访问 |
| 建造者 | $10/月 | 高级工具访问权限 |
| 专业版 | $25/月 | 优先支持,办公时间咨询 |
| 团队 | $100/月 | 5 个席位,团队配置 |
| 商业 | $500/月 | 25 个席位,咨询额度 |
| 企业 | $2K/月 | 无限席位,定制工具 |
[**Become a Sponsor →**](https://github.com/sponsors/affaan-m)
***
*自动更新。最后同步2026年2月*
***
---
description: 为 Go 强制执行 TDD 工作流程。首先编写表驱动测试,然后实现。使用 go test -cover 验证 80% 以上的覆盖率。
---
# Go TDD 命令
此命令使用惯用的 Go 测试模式,为 Go 代码强制执行测试驱动开发方法。
## 此命令的作用
1. **定义类型/接口**:首先搭建函数签名
2. **编写表驱动测试**创建全面的测试用例RED 阶段)
3. **运行测试**:验证测试因正确原因而失败
4. **实现代码**编写最少的代码以通过测试GREEN 阶段)
5. **重构**:改进代码,同时保持测试通过
6. **检查覆盖率**:确保 80% 以上的覆盖率
## 何时使用
在以下情况下使用 `/go-test`
* 实现新的 Go 函数时
* 为现有代码添加测试覆盖率时
* 修复 bug 时(先编写失败的测试)
* 构建关键业务逻辑时
* 学习 Go 中的 TDD 工作流程时
## TDD 循环
```
RED → Write failing table-driven test
GREEN → Implement minimal code to pass
REFACTOR → Improve code, tests stay green
REPEAT → Next test case
```
## 示例会话
````
User: /go-test I need a function to validate email addresses
Agent:
# TDD Session: Email Validator
## Step 1: Define Interface
```go
// validator/email.go
package validator
// ValidateEmail checks if the given string is a valid email address.
// Returns nil if valid, error describing the issue otherwise.
func ValidateEmail(email string) error {
panic("not implemented")
}
```
## Step 2: Write Table-Driven Tests (RED)
```go
// validator/email_test.go
package validator
import (
"testing"
)
func TestValidateEmail(t *testing.T) {
tests := []struct {
name string
email string
wantErr bool
}{
// Valid emails
{"simple email", "user@example.com", false},
{"with subdomain", "user@mail.example.com", false},
{"with plus", "user+tag@example.com", false},
{"with dots", "first.last@example.com", false},
// Invalid emails
{"empty string", "", true},
{"no at sign", "userexample.com", true},
{"no domain", "user@", true},
{"no local part", "@example.com", true},
{"double at", "user@@example.com", true},
{"spaces", "user @example.com", true},
{"no tld", "user@example", true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := ValidateEmail(tt.email)
if tt.wantErr && err == nil {
t.Errorf("ValidateEmail(%q) = nil; want error", tt.email)
}
if !tt.wantErr && err != nil {
t.Errorf("ValidateEmail(%q) = %v; want nil", tt.email, err)
}
})
}
}
```
## Step 3: Run Tests - Verify FAIL
```bash
$ go test ./validator/...
--- FAIL: TestValidateEmail (0.00s)
--- FAIL: TestValidateEmail/simple_email (0.00s)
panic: not implemented
FAIL
```
✓ Tests fail as expected (panic).
## Step 4: Implement Minimal Code (GREEN)
```go
// validator/email.go
package validator
import (
"errors"
"regexp"
)
var emailRegex = regexp.MustCompile(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`)
var (
ErrEmailEmpty = errors.New("email cannot be empty")
ErrEmailInvalid = errors.New("email format is invalid")
)
func ValidateEmail(email string) error {
if email == "" {
return ErrEmailEmpty
}
if !emailRegex.MatchString(email) {
return ErrEmailInvalid
}
return nil
}
```
## Step 5: Run Tests - Verify PASS
```bash
$ go test ./validator/...
PASS
ok project/validator 0.003s
```
✓ All tests passing!
## Step 6: Check Coverage
```bash
$ go test -cover ./validator/...
PASS
coverage: 100.0% of statements
ok project/validator 0.003s
```
✓ Coverage: 100%
## TDD Complete!
````
## 测试模式
### 表驱动测试
```go
tests := []struct {
name string
input InputType
want OutputType
wantErr bool
}{
{"case 1", input1, want1, false},
{"case 2", input2, want2, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := Function(tt.input)
// assertions
})
}
```
### 并行测试
```go
for _, tt := range tests {
tt := tt // Capture
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
// test body
})
}
```
### 测试辅助函数
```go
func setupTestDB(t *testing.T) *sql.DB {
t.Helper()
db := createDB()
t.Cleanup(func() { db.Close() })
return db
}
```
## 覆盖率命令
```bash
# Basic coverage
go test -cover ./...
# Coverage profile
go test -coverprofile=coverage.out ./...
# View in browser
go tool cover -html=coverage.out
# Coverage by function
go tool cover -func=coverage.out
# With race detection
go test -race -cover ./...
```
## 覆盖率目标
| 代码类型 | 目标 |
|-----------|--------|
| 关键业务逻辑 | 100% |
| 公共 API | 90%+ |
| 通用代码 | 80%+ |
| 生成的代码 | 排除 |
## TDD 最佳实践
**应该做:**
* 先编写测试,再编写任何实现
* 每次更改后运行测试
* 使用表驱动测试以获得全面的覆盖率
* 测试行为,而非实现细节
* 包含边界情况空值、nil、最大值
**不应该做:**
* 在编写测试之前编写实现
* 跳过 RED 阶段
* 直接测试私有函数
* 在测试中使用 `time.Sleep`
* 忽略不稳定的测试
## 相关命令
* `/go-build` - 修复构建错误
* `/go-review` - 在实现后审查代码
* `/verify` - 运行完整的验证循环
## 相关
* 技能:`skills/golang-testing/`
* 技能:`skills/tdd-workflow/`
***
# 后端 - 后端导向开发
后端导向的工作流程(研究 → 构思 → 规划 → 执行 → 优化 → 评审),由 Codex 主导。
## 使用方法
```bash
/backend <backend task description>
```
## 上下文
* 后端任务:$ARGUMENTS
* Codex 主导Gemini 作为辅助参考
* 适用场景API 设计、算法实现、数据库优化、业务逻辑
## 你的角色
你是 **后端协调者**,为服务器端任务协调多模型协作(研究 → 构思 → 规划 → 执行 → 优化 → 评审)。
**协作模型**
* **Codex** 后端逻辑、算法(**后端权威,可信赖**
* **Gemini** 前端视角(**后端意见仅供参考**
* **Claude (自身)** 协调、规划、执行、交付
***
## 多模型调用规范
**调用语法**
```
# New session call
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend codex - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
run_in_background: false,
timeout: 3600000,
description: "Brief description"
})
# Resume session call
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend codex resume <SESSION_ID> - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
run_in_background: false,
timeout: 3600000,
description: "Brief description"
})
```
**角色提示词**
| 阶段 | Codex |
|-------|-------|
| 分析 | `~/.claude/.ccg/prompts/codex/analyzer.md` |
| 规划 | `~/.claude/.ccg/prompts/codex/architect.md` |
| 评审 | `~/.claude/.ccg/prompts/codex/reviewer.md` |
**会话复用**:每次调用返回 `SESSION_ID: xxx`,在后续阶段使用 `resume xxx`。在第 2 阶段保存 `CODEX_SESSION`,在第 3 和第 5 阶段使用 `resume`
***
## 沟通准则
1. 在回复开头使用模式标签 `[Mode: X]`,初始值为 `[Mode: Research]`
2. 遵循严格序列:`Research → Ideation → Plan → Execute → Optimize → Review`
3. 需要时(例如确认/选择/批准)使用 `AskUserQuestion` 工具进行用户交互
***
## 核心工作流程
### 阶段 0提示词增强可选
`[Mode: Prepare]` - 如果 ace-tool MCP 可用,调用 `mcp__ace-tool__enhance_prompt`**用增强后的结果替换原始的 $ARGUMENTS用于后续的 Codex 调用**
### 阶段 1研究
`[Mode: Research]` - 理解需求并收集上下文
1. **代码检索**(如果 ace-tool MCP 可用):调用 `mcp__ace-tool__search_context` 以检索现有的 API、数据模型、服务架构
2. 需求完整性评分0-10>=7 继续,<7 停止并补充
### 阶段 2构思
`[Mode: Ideation]` - Codex 主导的分析
**必须调用 Codex**(遵循上述调用规范):
* ROLE\_FILE`~/.claude/.ccg/prompts/codex/analyzer.md`
* 需求:增强后的需求(或未增强时的 $ARGUMENTS
* 上下文:来自阶段 1 的项目上下文
* 输出:技术可行性分析、推荐解决方案(至少 2 个)、风险评估
**保存 SESSION\_ID**`CODEX_SESSION`)以供后续阶段复用。
输出解决方案(至少 2 个),等待用户选择。
### 阶段 3规划
`[Mode: Plan]` - Codex 主导的规划
**必须调用 Codex**(使用 `resume <CODEX_SESSION>` 以复用会话):
* ROLE\_FILE`~/.claude/.ccg/prompts/codex/architect.md`
* 需求:用户选择的解决方案
* 上下文:阶段 2 的分析结果
* 输出:文件结构、函数/类设计、依赖关系
Claude 综合规划,在用户批准后保存到 `.claude/plan/task-name.md`
### 阶段 4实施
`[Mode: Execute]` - 代码开发
* 严格遵循已批准的规划
* 遵循现有项目的代码规范
* 确保错误处理、安全性、性能优化
### 阶段 5优化
`[Mode: Optimize]` - Codex 主导的评审
**必须调用 Codex**(遵循上述调用规范):
* ROLE\_FILE`~/.claude/.ccg/prompts/codex/reviewer.md`
* 需求:评审以下后端代码变更
* 上下文git diff 或代码内容
* 输出安全性、性能、错误处理、API 合规性问题列表
整合评审反馈,在用户确认后执行优化。
### 阶段 6质量评审
`[Mode: Review]` - 最终评估
* 对照规划检查完成情况
* 运行测试以验证功能
* 报告问题和建议
***
## 关键规则
1. **Codex 的后端意见是可信赖的**
2. **Gemini 的后端意见仅供参考**
3. 外部模型**对文件系统零写入权限**
4. Claude 处理所有代码写入和文件操作
***
# 执行 - 多模型协同执行
多模型协同执行 - 从计划获取原型 → Claude 重构并实施 → 多模型审计与交付。
$ARGUMENTS
***
## 核心协议
* **语言协议**:与工具/模型交互时使用**英语**,与用户沟通时使用用户的语言
* **代码主权**:外部模型**零文件系统写入权限**,所有修改由 Claude 执行
* **脏原型重构**:将 Codex/Gemini 统一差异视为“脏原型”,必须重构为生产级代码
* **止损机制**:当前阶段输出未经验证前,不得进入下一阶段
* **前提条件**:仅在用户对 `/ccg:plan` 的输出明确回复“Y”后执行如果缺失必须先确认
***
## 多模型调用规范
**调用语法**(并行:使用 `run_in_background: true`
```
# Resume session call (recommended) - Implementation Prototype
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}resume <SESSION_ID> - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <task description>
Context: <plan content + target files>
</TASK>
OUTPUT: Unified Diff Patch ONLY. Strictly prohibit any actual modifications.
EOF",
run_in_background: true,
timeout: 3600000,
description: "Brief description"
})
# New session call - Implementation Prototype
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}- \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <task description>
Context: <plan content + target files>
</TASK>
OUTPUT: Unified Diff Patch ONLY. Strictly prohibit any actual modifications.
EOF",
run_in_background: true,
timeout: 3600000,
description: "Brief description"
})
```
**审计调用语法**(代码审查 / 审计):
```
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}resume <SESSION_ID> - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Scope: Audit the final code changes.
Inputs:
- The applied patch (git diff / final unified diff)
- The touched files (relevant excerpts if needed)
Constraints:
- Do NOT modify any files.
- Do NOT output tool commands that assume filesystem access.
</TASK>
OUTPUT:
1) A prioritized list of issues (severity, file, rationale)
2) Concrete fixes; if code changes are needed, include a Unified Diff Patch in a fenced code block.
EOF",
run_in_background: true,
timeout: 3600000,
description: "Brief description"
})
```
**模型参数说明**
* `{{GEMINI_MODEL_FLAG}}`:当使用 `--backend gemini` 时,替换为 `--gemini-model gemini-3-pro-preview`(注意尾随空格);对于 codex 使用空字符串
**角色提示**
| 阶段 | Codex | Gemini |
|-------|-------|--------|
| 实施 | `~/.claude/.ccg/prompts/codex/architect.md` | `~/.claude/.ccg/prompts/gemini/frontend.md` |
| 审查 | `~/.claude/.ccg/prompts/codex/reviewer.md` | `~/.claude/.ccg/prompts/gemini/reviewer.md` |
**会话重用**:如果 `/ccg:plan` 提供了 SESSION\_ID使用 `resume <SESSION_ID>` 来重用上下文。
**等待后台任务**(最大超时 600000ms = 10 分钟):
```
TaskOutput({ task_id: "<task_id>", block: true, timeout: 600000 })
```
**重要**
* 必须指定 `timeout: 600000`,否则默认 30 秒会导致过早超时
* 如果 10 分钟后仍未完成,继续使用 `TaskOutput` 轮询,**切勿终止进程**
* 如果因超时而跳过等待,**必须调用 `AskUserQuestion` 询问用户是继续等待还是终止任务**
***
## 执行工作流
**执行任务**$ARGUMENTS
### 阶段 0读取计划
`[Mode: Prepare]`
1. **识别输入类型**
* 计划文件路径(例如 `.claude/plan/xxx.md`
* 直接任务描述
2. **读取计划内容**
* 如果提供了计划文件路径,读取并解析
* 提取任务类型、实施步骤、关键文件、SESSION\_ID
3. **执行前确认**
* 如果输入是“直接任务描述”或计划缺少 `SESSION_ID` / 关键文件:先与用户确认
* 如果无法确认用户已对计划回复“Y”在继续前必须再次确认
4. **任务类型路由**
| 任务类型 | 检测 | 路由 |
|-----------|-----------|-------|
| **前端** | 页面、组件、UI、样式、布局 | Gemini |
| **后端** | API、接口、数据库、逻辑、算法 | Codex |
| **全栈** | 包含前端和后端 | Codex ∥ Gemini 并行 |
***
### 阶段 1快速上下文检索
`[Mode: Retrieval]`
**必须使用 MCP 工具进行快速上下文检索,切勿手动逐个读取文件**
基于计划中的“关键文件”列表,调用 `mcp__ace-tool__search_context`
```
mcp__ace-tool__search_context({
query: "<semantic query based on plan content, including key files, modules, function names>",
project_root_path: "$PWD"
})
```
**检索策略**
* 从计划的“关键文件”表中提取目标路径
* 构建语义查询覆盖:入口文件、依赖模块、相关类型定义
* 如果结果不足,添加 1-2 次递归检索
* **切勿**使用 Bash + find/ls 手动探索项目结构
**检索后**
* 组织检索到的代码片段
* 确认实施所需的完整上下文
* 进入阶段 3
***
### 阶段 3原型获取
`[Mode: Prototype]`
**基于任务类型路由**
#### 路由 A前端/UI/样式 → Gemini
**限制**:上下文 < 32k 令牌
1. 调用 Gemini使用 `~/.claude/.ccg/prompts/gemini/frontend.md`
2. 输入:计划内容 + 检索到的上下文 + 目标文件
3. 输出:`Unified Diff Patch ONLY. Strictly prohibit any actual modifications.`
4. **Gemini 是前端设计权威,其 CSS/React/Vue 原型是最终的视觉基线**
5. **警告**:忽略 Gemini 的后端逻辑建议
6. 如果计划包含 `GEMINI_SESSION`:优先使用 `resume <GEMINI_SESSION>`
#### 路由 B后端/逻辑/算法 → Codex
1. 调用 Codex使用 `~/.claude/.ccg/prompts/codex/architect.md`
2. 输入:计划内容 + 检索到的上下文 + 目标文件
3. 输出:`Unified Diff Patch ONLY. Strictly prohibit any actual modifications.`
4. **Codex 是后端逻辑权威,利用其逻辑推理和调试能力**
5. 如果计划包含 `CODEX_SESSION`:优先使用 `resume <CODEX_SESSION>`
#### 路由 C全栈 → 并行调用
1. **并行调用**`run_in_background: true`
* Gemini处理前端部分
* Codex处理后端部分
2. 使用 `TaskOutput` 等待两个模型的完整结果
3. 每个模型使用计划中相应的 `SESSION_ID` 作为 `resume`(如果缺失则创建新会话)
**遵循上面 `IMPORTANT` 中的 `Multi-Model Call Specification` 指令**
***
### 阶段 4代码实施
`[Mode: Implement]`
**Claude 作为代码主权执行以下步骤**
1. **读取差异**:解析 Codex/Gemini 返回的统一差异补丁
2. **心智沙盒**
* 模拟将差异应用到目标文件
* 检查逻辑一致性
* 识别潜在冲突或副作用
3. **重构与清理**
* 将“脏原型”重构为**高度可读、可维护、企业级代码**
* 移除冗余代码
* 确保符合项目现有代码标准
* **除非必要,不要生成注释/文档**,代码应具有自解释性
4. **最小范围**
* 更改仅限于需求范围
* **强制审查**副作用
* 进行针对性修正
5. **应用更改**
* 使用编辑/写入工具执行实际修改
* **仅修改必要代码**,绝不影响用户的其他现有功能
6. **自验证**(强烈推荐):
* 运行项目现有的 lint / 类型检查 / 测试(优先考虑最小相关范围)
* 如果失败:先修复回归问题,然后进入阶段 5
***
### 阶段 5审计与交付
`[Mode: Audit]`
#### 5.1 自动审计
**更改生效后,必须立即并行调用** Codex 和 Gemini 进行代码审查:
1. **Codex 审查**`run_in_background: true`
* ROLE\_FILE`~/.claude/.ccg/prompts/codex/reviewer.md`
* 输入:更改的差异 + 目标文件
* 重点:安全性、性能、错误处理、逻辑正确性
2. **Gemini 审查**`run_in_background: true`
* ROLE\_FILE`~/.claude/.ccg/prompts/gemini/reviewer.md`
* 输入:更改的差异 + 目标文件
* 重点:可访问性、设计一致性、用户体验
使用 `TaskOutput` 等待两个模型的完整审查结果。优先重用阶段 3 的会话(`resume <SESSION_ID>`)以确保上下文一致性。
#### 5.2 整合与修复
1. 综合 Codex + Gemini 的审查反馈
2. 按信任规则权衡:后端遵循 Codex前端遵循 Gemini
3. 执行必要的修复
4. 根据需要重复阶段 5.1(直到风险可接受)
#### 5.3 交付确认
审计通过后,向用户报告:
```markdown
## 执行完成
### 变更摘要
| 文件 | 操作 | 描述 |
|------|-----------|-------------|
| path/to/file.ts | 已修改 | 描述 |
### 审计结果
- Codex: <通过/发现 N 个问题>
- Gemini: <通过/发现 N 个问题>
### 建议
1. [ ] <建议的测试步骤>
2. [ ] <建议的验证步骤>
```
***
## 关键规则
1. **代码主权** 所有文件修改由 Claude 执行,外部模型零写入权限
2. **脏原型重构** Codex/Gemini 输出视为草稿,必须重构
3. **信任规则** 后端遵循 Codex前端遵循 Gemini
4. **最小更改** 仅修改必要代码,无副作用
5. **强制审计** 更改后必须执行多模型代码审查
***
## 使用方法
```bash
# Execute plan file
/ccg:execute .claude/plan/feature-name.md
# Execute task directly (for plans already discussed in context)
/ccg:execute implement user authentication based on previous plan
```
***
## 与 /ccg:plan 的关系
1. `/ccg:plan` 生成计划 + SESSION\_ID
2. 用户用“Y”确认
3. `/ccg:execute` 读取计划,重用 SESSION\_ID执行实施
***
# 前端 - 前端聚焦开发
前端聚焦的工作流(研究 → 构思 → 规划 → 执行 → 优化 → 评审),由 Gemini 主导。
## 使用方法
```bash
/frontend <UI task description>
```
## 上下文
* 前端任务: $ARGUMENTS
* Gemini 主导Codex 作为辅助参考
* 适用场景: 组件设计、响应式布局、UI 动画、样式优化
## 您的角色
您是 **前端协调器**,为 UI/UX 任务协调多模型协作(研究 → 构思 → 规划 → 执行 → 优化 → 评审)。
**协作模型**:
* **Gemini** 前端 UI/UX**前端权威,可信赖**
* **Codex** 后端视角(**前端意见仅供参考**
* **Claude自身** 协调、规划、执行、交付
***
## 多模型调用规范
**调用语法**:
```
# New session call
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend gemini --gemini-model gemini-3-pro-preview - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
run_in_background: false,
timeout: 3600000,
description: "Brief description"
})
# Resume session call
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend gemini --gemini-model gemini-3-pro-preview resume <SESSION_ID> - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
run_in_background: false,
timeout: 3600000,
description: "Brief description"
})
```
**角色提示词**:
| 阶段 | Gemini |
|-------|--------|
| 分析 | `~/.claude/.ccg/prompts/gemini/analyzer.md` |
| 规划 | `~/.claude/.ccg/prompts/gemini/architect.md` |
| 评审 | `~/.claude/.ccg/prompts/gemini/reviewer.md` |
**会话重用**: 每次调用返回 `SESSION_ID: xxx`,在后续阶段使用 `resume xxx`。在阶段 2 保存 `GEMINI_SESSION`,在阶段 3 和 5 使用 `resume`
***
## 沟通指南
1. 以模式标签 `[Mode: X]` 开始响应,初始为 `[Mode: Research]`
2. 遵循严格顺序: `Research → Ideation → Plan → Execute → Optimize → Review`
3. 需要时(例如确认/选择/批准)使用 `AskUserQuestion` 工具进行用户交互
***
## 核心工作流
### 阶段 0: 提示词增强(可选)
`[Mode: Prepare]` - 如果 ace-tool MCP 可用,调用 `mcp__ace-tool__enhance_prompt`**将原始的 $ARGUMENTS 替换为增强后的结果,用于后续的 Gemini 调用**
### 阶段 1: 研究
`[Mode: Research]` - 理解需求并收集上下文
1. **代码检索**(如果 ace-tool MCP 可用): 调用 `mcp__ace-tool__search_context` 来检索现有的组件、样式、设计系统
2. 需求完整性评分 (0-10): >=7 继续,<7 停止并补充
### 阶段 2: 构思
`[Mode: Ideation]` - Gemini 主导的分析
**必须调用 Gemini**(遵循上述调用规范):
* ROLE\_FILE: `~/.claude/.ccg/prompts/gemini/analyzer.md`
* 需求: 增强后的需求(或未经增强的 $ARGUMENTS
* 上下文: 来自阶段 1 的项目上下文
* 输出: UI 可行性分析、推荐解决方案(至少 2 个、UX 评估
**保存 SESSION\_ID**`GEMINI_SESSION`)以供后续阶段重用。
输出解决方案(至少 2 个),等待用户选择。
### 阶段 3: 规划
`[Mode: Plan]` - Gemini 主导的规划
**必须调用 Gemini**(使用 `resume <GEMINI_SESSION>` 来重用会话):
* ROLE\_FILE: `~/.claude/.ccg/prompts/gemini/architect.md`
* 需求: 用户选择的解决方案
* 上下文: 阶段 2 的分析结果
* 输出: 组件结构、UI 流程、样式方案
Claude 综合规划,在用户批准后保存到 `.claude/plan/task-name.md`
### 阶段 4: 实现
`[Mode: Execute]` - 代码开发
* 严格遵循批准的规划
* 遵循现有项目设计系统和代码标准
* 确保响应式设计、可访问性
### 阶段 5: 优化
`[Mode: Optimize]` - Gemini 主导的评审
**必须调用 Gemini**(遵循上述调用规范):
* ROLE\_FILE: `~/.claude/.ccg/prompts/gemini/reviewer.md`
* 需求: 评审以下前端代码变更
* 上下文: git diff 或代码内容
* 输出: 可访问性、响应式设计、性能、设计一致性等问题列表
整合评审反馈,在用户确认后执行优化。
### 阶段 6: 质量评审
`[Mode: Review]` - 最终评估
* 对照规划检查完成情况
* 验证响应式设计和可访问性
* 报告问题与建议
***
## 关键规则
1. **Gemini 的前端意见是可信赖的**
2. **Codex 的前端意见仅供参考**
3. 外部模型**没有文件系统写入权限**
4. Claude 处理所有代码写入和文件操作
***
# 计划 - 多模型协同规划
多模型协同规划 - 上下文检索 + 双模型分析 → 生成分步实施计划。
$ARGUMENTS
***
## 核心协议
* **语言协议**:与工具/模型交互时使用 **英语**,与用户沟通时使用其语言
* **强制并行**Codex/Gemini 调用 **必须** 使用 `run_in_background: true`(包括单模型调用,以避免阻塞主线程)
* **代码主权**:外部模型 **零文件系统写入权限**,所有修改由 Claude 执行
* **止损机制**:在当前阶段输出验证完成前,不进入下一阶段
* **仅限规划**:此命令允许读取上下文并写入 `.claude/plan/*` 计划文件,但 **绝不修改生产代码**
***
## 多模型调用规范
**调用语法**(并行:使用 `run_in_background: true`
```
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}- \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement>
Context: <retrieved project context>
</TASK>
OUTPUT: Step-by-step implementation plan with pseudo-code. DO NOT modify any files.
EOF",
run_in_background: true,
timeout: 3600000,
description: "Brief description"
})
```
**模型参数说明**
* `{{GEMINI_MODEL_FLAG}}`: 当使用 `--backend gemini` 时,替换为 `--gemini-model gemini-3-pro-preview`(注意尾随空格);对于 codex 使用空字符串
**角色提示**
| 阶段 | Codex | Gemini |
|-------|-------|--------|
| 分析 | `~/.claude/.ccg/prompts/codex/analyzer.md` | `~/.claude/.ccg/prompts/gemini/analyzer.md` |
| 规划 | `~/.claude/.ccg/prompts/codex/architect.md` | `~/.claude/.ccg/prompts/gemini/architect.md` |
**会话复用**:每次调用返回 `SESSION_ID: xxx`(通常由包装器输出),**必须保存** 供后续 `/ccg:execute` 使用。
**等待后台任务**(最大超时 600000ms = 10 分钟):
```
TaskOutput({ task_id: "<task_id>", block: true, timeout: 600000 })
```
**重要提示**
* 必须指定 `timeout: 600000`,否则默认 30 秒会导致过早超时
* 如果 10 分钟后仍未完成,继续使用 `TaskOutput` 轮询,**绝不终止进程**
* 如果因超时而跳过等待,**必须调用 `AskUserQuestion` 询问用户是继续等待还是终止任务**
***
## 执行流程
**规划任务**$ARGUMENTS
### 阶段 1完整上下文检索
`[Mode: Research]`
#### 1.1 提示增强(必须先执行)
**必须调用 `mcp__ace-tool__enhance_prompt` 工具**
```
mcp__ace-tool__enhance_prompt({
prompt: "$ARGUMENTS",
conversation_history: "<last 5-10 conversation turns>",
project_root_path: "$PWD"
})
```
等待增强后的提示,**将所有后续阶段的原始 $ARGUMENTS 替换为增强结果**。
#### 1.2 上下文检索
**调用 `mcp__ace-tool__search_context` 工具**
```
mcp__ace-tool__search_context({
query: "<semantic query based on enhanced requirement>",
project_root_path: "$PWD"
})
```
* 使用自然语言构建语义查询Where/What/How
* **绝不基于假设回答**
* 如果 MCP 不可用:回退到 Glob + Grep 进行文件发现和关键符号定位
#### 1.3 完整性检查
* 必须获取相关类、函数、变量的 **完整定义和签名**
* 如果上下文不足,触发 **递归检索**
* 输出优先级:入口文件 + 行号 + 关键符号名称;仅在必要时添加最小代码片段以消除歧义
#### 1.4 需求对齐
* 如果需求仍有歧义,**必须** 输出引导性问题给用户
* 直到需求边界清晰(无遗漏,无冗余)
### 阶段 2多模型协同分析
`[Mode: Analysis]`
#### 2.1 分发输入
**并行调用** Codex 和 Gemini`run_in_background: true`
**原始需求**(不预设观点)分发给两个模型:
1. **Codex 后端分析**
* ROLE\_FILE`~/.claude/.ccg/prompts/codex/analyzer.md`
* 重点:技术可行性、架构影响、性能考虑、潜在风险
* 输出:多视角解决方案 + 优缺点分析
2. **Gemini 前端分析**
* ROLE\_FILE`~/.claude/.ccg/prompts/gemini/analyzer.md`
* 重点UI/UX 影响、用户体验、视觉设计
* 输出:多视角解决方案 + 优缺点分析
使用 `TaskOutput` 等待两个模型的完整结果。**保存 SESSION\_ID**`CODEX_SESSION``GEMINI_SESSION`)。
#### 2.2 交叉验证
整合视角并迭代优化:
1. **识别共识**(强信号)
2. **识别分歧**(需要权衡)
3. **互补优势**:后端逻辑遵循 Codex前端设计遵循 Gemini
4. **逻辑推理**:消除解决方案中的逻辑漏洞
#### 2.3(可选但推荐)双模型计划草案
为减少 Claude 综合计划中的遗漏风险,可以并行让两个模型输出“计划草案”(仍然 **不允许** 修改文件):
1. **Codex 计划草案**(后端权威):
* ROLE\_FILE`~/.claude/.ccg/prompts/codex/architect.md`
* 输出:分步计划 + 伪代码(重点:数据流/边缘情况/错误处理/测试策略)
2. **Gemini 计划草案**(前端权威):
* ROLE\_FILE`~/.claude/.ccg/prompts/gemini/architect.md`
* 输出:分步计划 + 伪代码(重点:信息架构/交互/可访问性/视觉一致性)
使用 `TaskOutput` 等待两个模型的完整结果,记录它们建议的关键差异。
#### 2.4 生成实施计划Claude 最终版本)
综合两个分析,生成 **分步实施计划**
```markdown
## 实施计划:<任务名称>
### 任务类型
- [ ] 前端 (→ Gemini)
- [ ] 后端 (→ Codex)
- [ ] 全栈 (→ 并行)
### 技术解决方案
<基于 Codex + Gemini 分析得出的最优解决方案>
### 实施步骤
1. <步骤 1> - 预期交付物
2. <步骤 2> - 预期交付物
...
### 关键文件
| 文件 | 操作 | 描述 |
|------|-----------|-------------|
| path/to/file.ts:L10-L50 | 修改 | 描述 |
### 风险与缓解措施
| 风险 | 缓解措施 |
|------|------------|
### SESSION_ID (供 /ccg:execute 使用)
- CODEX_SESSION: <session_id>
- GEMINI_SESSION: <session_id>
```
### 阶段 2 结束:计划交付(非执行)
**`/ccg:plan` 的职责到此结束,必须执行以下操作**
1. 向用户呈现完整的实施计划(包括伪代码)
2. 将计划保存到 `.claude/plan/<feature-name>.md`(从需求中提取功能名称,例如 `user-auth``payment-module`
3.**粗体文本** 输出提示(必须使用实际保存的文件路径):
***
**计划已生成并保存至 `.claude/plan/actual-feature-name.md`**
**请审阅以上计划。您可以:**
* **修改计划**:告诉我需要调整的内容,我会更新计划
* **执行计划**:复制以下命令到新会话
```
/ccg:execute .claude/plan/actual-feature-name.md
```
***
**注意**:上面的 `actual-feature-name.md` 必须替换为实际保存的文件名!
4. **立即终止当前响应**(在此停止。不再进行工具调用。)
**绝对禁止**
* 询问用户“是/否”然后自动执行(执行是 `/ccg:execute` 的职责)
* 任何对生产代码的写入操作
* 自动调用 `/ccg:execute` 或任何实施操作
* 当用户未明确请求修改时继续触发模型调用
***
## 计划保存
规划完成后,将计划保存至:
* **首次规划**`.claude/plan/<feature-name>.md`
* **迭代版本**`.claude/plan/<feature-name>-v2.md``.claude/plan/<feature-name>-v3.md`...
计划文件写入应在向用户呈现计划前完成。
***
## 计划修改流程
如果用户请求修改计划:
1. 根据用户反馈调整计划内容
2. 更新 `.claude/plan/<feature-name>.md` 文件
3. 重新呈现修改后的计划
4. 提示用户再次审阅或执行
***
## 后续步骤
用户批准后,**手动** 执行:
```bash
/ccg:execute .claude/plan/<feature-name>.md
```
***
## 关键规则
1. **仅规划,不实施** 此命令不执行任何代码更改
2. **无是/否提示** 仅呈现计划,让用户决定后续步骤
3. **信任规则** 后端遵循 Codex前端遵循 Gemini
4. 外部模型 **零文件系统写入权限**
5. **SESSION\_ID 交接** 计划末尾必须包含 `CODEX_SESSION` / `GEMINI_SESSION`(供 `/ccg:execute resume <SESSION_ID>` 使用)
***
# 工作流程 - 多模型协同开发
多模型协同开发工作流程(研究 → 构思 → 规划 → 执行 → 优化 → 审查),带有智能路由:前端 → Gemini后端 → Codex。
结构化开发工作流程包含质量门控、MCP 服务和多模型协作。
## 使用方法
```bash
/workflow <task description>
```
## 上下文
* 待开发任务:$ARGUMENTS
* 结构化的 6 阶段工作流程,包含质量门控
* 多模型协作Codex后端+ Gemini前端+ Claude编排
* MCP 服务集成ace-tool以增强能力
## 你的角色
你是**编排者**,协调一个多模型协作系统(研究 → 构思 → 规划 → 执行 → 优化 → 审查)。以简洁、专业的方式与有经验的开发者沟通。
**协作模型**
* **ace-tool MCP** 代码检索 + 提示词增强
* **Codex** 后端逻辑、算法、调试(**后端权威,可信赖**
* **Gemini** 前端 UI/UX、视觉设计**前端专家,后端意见仅供参考**
* **Claude自身** 编排、规划、执行、交付
***
## 多模型调用规范
**调用语法**(并行:`run_in_background: true`,串行:`false`
```
# New session call
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}- \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
run_in_background: true,
timeout: 3600000,
description: "Brief description"
})
# Resume session call
Bash({
command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}resume <SESSION_ID> - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
run_in_background: true,
timeout: 3600000,
description: "Brief description"
})
```
**模型参数说明**
* `{{GEMINI_MODEL_FLAG}}`: 当使用 `--backend gemini` 时,替换为 `--gemini-model gemini-3-pro-preview`(注意末尾空格);对于 codex 使用空字符串
**角色提示词**
| 阶段 | Codex | Gemini |
|-------|-------|--------|
| 分析 | `~/.claude/.ccg/prompts/codex/analyzer.md` | `~/.claude/.ccg/prompts/gemini/analyzer.md` |
| Planning | `~/.claude/.ccg/prompts/codex/architect.md` | `~/.claude/.ccg/prompts/gemini/architect.md` |
| Review | `~/.claude/.ccg/prompts/codex/reviewer.md` | `~/.claude/.ccg/prompts/gemini/reviewer.md` |
**Session reuse**: every call returns `SESSION_ID: xxx`; use the `resume xxx` subcommand in later phases (note: `resume`, not `--resume`).
**Parallel calls**: launch with `run_in_background: true` and wait for results with `TaskOutput`. **You must wait for all models to return before entering the next phase.**
**Waiting for background tasks** (use the maximum timeout of 600000 ms = 10 minutes):
```
TaskOutput({ task_id: "<task_id>", block: true, timeout: 600000 })
```
**IMPORTANT**
* Always specify `timeout: 600000`; otherwise the default 30 seconds causes premature timeouts.
* If the task is still unfinished after 10 minutes, keep polling with `TaskOutput` — **never kill the process**.
* If waiting was skipped due to a timeout, **you must call `AskUserQuestion` to ask the user whether to keep waiting or terminate the task. Never terminate on your own.**
***
## Communication Guidelines
1. Begin every reply with a mode tag `[Mode: X]`; the initial mode is `[Mode: Research]`
2. Follow the strict order: `Research → Ideation → Plan → Execute → Optimize → Review`
3. Request user confirmation after each phase completes.
4. Hard stop when the score is < 7 or the user does not approve.
5. Use the `AskUserQuestion` tool for user interaction when needed (e.g. confirmation/selection/approval).
***
## Execution Workflow
**Task description**: $ARGUMENTS
### Phase 1: Research & Analysis
`[Mode: Research]` - Understand the requirements and gather context:
1. **Prompt enhancement**: call `mcp__ace-tool__enhance_prompt`; **replace the raw $ARGUMENTS with the enhanced result in all subsequent calls to Codex/Gemini**
2. **Context retrieval**: call `mcp__ace-tool__search_context`
3. **Requirement completeness score** (0-10)
   * Goal clarity (0-3), expected outcomes (0-3), scope boundaries (0-2), constraints (0-2)
   * ≥7: continue | <7: stop and ask clarifying questions
### Phase 2: Solution Ideation
`[Mode: Ideation]` - Parallel multi-model analysis:
**Parallel calls** (`run_in_background: true`):
* Codex: analyzer prompt; outputs technical feasibility, solutions, risks
* Gemini: analyzer prompt; outputs UI feasibility, solutions, UX assessment
Wait for results with `TaskOutput`. **Save the SESSION_IDs** (`CODEX_SESSION`, `GEMINI_SESSION`).
**Follow the `IMPORTANT` notes in the `Multi-Model Call Specification` above.**
Synthesize both analyses, output a comparison of solutions (at least 2 options), and wait for the user's choice.
### Phase 3: Detailed Planning
`[Mode: Plan]` - Multi-model collaborative planning:
**Parallel calls** (resume sessions with `resume <SESSION_ID>`):
* Codex: architect prompt + `resume $CODEX_SESSION`; outputs the backend architecture
* Gemini: architect prompt + `resume $GEMINI_SESSION`; outputs the frontend architecture
Wait for results with `TaskOutput`.
**Follow the `IMPORTANT` notes in the `Multi-Model Call Specification` above.**
**Claude synthesis**: adopt Codex's backend plan + Gemini's frontend plan; after user approval, save to `.claude/plan/task-name.md`.
### Phase 4: Implementation
`[Mode: Execute]` - Code development:
* Strictly follow the approved plan
* Follow the project's existing code standards
* Request feedback at key milestones
### Phase 5: Code Optimization
`[Mode: Optimize]` - Parallel multi-model review:
**Parallel calls**:
* Codex: reviewer prompt; focuses on security, performance, error handling
* Gemini: reviewer prompt; focuses on accessibility, design consistency
Wait for results with `TaskOutput`. Consolidate the review feedback and apply optimizations after user confirmation.
**Follow the `IMPORTANT` notes in the `Multi-Model Call Specification` above.**
### Phase 6: Quality Review
`[Mode: Review]` - Final assessment:
* Check completion against the plan
* Run tests to verify functionality
* Report issues and recommendations
* Request final user confirmation
***
## Key Rules
1. The phase order must not be skipped (unless the user explicitly says so)
2. External models have **zero write access to the file system**; all modifications are performed by Claude
3. **Hard stop** when the score is < 7 or the user does not approve

docs/zh-CN/commands/pm2.md (new file)
# PM2 Initialization
Automatically analyzes the project and generates PM2 service commands.
**Command**: `$ARGUMENTS`
***
## Workflow
1. Check for PM2 (if missing, install via `npm install -g pm2`)
2. Scan the project to identify services (frontend/backend/database)
3. Generate the config file and the per-command files
***
## Service Detection
| Type | Detection | Default Port |
|------|-----------|--------------|
| Vite | vite.config.\* | 5173 |
| Next.js | next.config.\* | 3000 |
| Nuxt | nuxt.config.\* | 3000 |
| CRA | react-scripts in package.json | 3000 |
| Express/Node | server/backend/api directory + package.json | 3000 |
| FastAPI/Flask | requirements.txt / pyproject.toml | 8000 |
| Go | go.mod / main.go | 8080 |
**Port detection priority**: user-specified > .env files > config files > script args > default port
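The priority order above can be sketched as a small resolver. This is an illustrative simplification (user-specified, then `.env`, then default); the function name, `.env` parsing, and omission of the config-file/script-arg steps are assumptions, not part of the generated tooling.

```python
import os
import re

def resolve_port(user_port=None, env_path=".env", default=3000):
    """Resolve a service port: user-specified > .env file > default (sketch)."""
    if user_port is not None:
        return int(user_port)  # highest priority: explicit user choice
    if os.path.exists(env_path):
        with open(env_path) as f:
            for line in f:
                m = re.match(r"\s*PORT\s*=\s*(\d+)", line)
                if m:
                    return int(m.group(1))  # next: PORT= entry in .env
    return default  # lowest priority: framework default

print(resolve_port(user_port=5173))  # user-specified wins
```

A real implementation would insert the config-file and script-argument lookups between the `.env` check and the default.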
***
## Generated Files
```
project/
├── ecosystem.config.cjs # PM2 config
├── {backend}/start.cjs # Python wrapper (if applicable)
└── .claude/
├── commands/
│ ├── pm2-all.md # Start all + monit
│ ├── pm2-all-stop.md # Stop all
│ ├── pm2-all-restart.md # Restart all
│ ├── pm2-{port}.md # Start single + logs
│ ├── pm2-{port}-stop.md # Stop single
│ ├── pm2-{port}-restart.md # Restart single
│ ├── pm2-logs.md # View all logs
│ └── pm2-status.md # View status
└── scripts/
├── pm2-logs-{port}.ps1 # Single service logs
└── pm2-monit.ps1 # PM2 monitor
```
***
## Windows Configuration (Important)
### ecosystem.config.cjs
**Must use the `.cjs` extension**:
```javascript
module.exports = {
apps: [
// Node.js (Vite/Next/Nuxt)
{
name: 'project-3000',
cwd: './packages/web',
script: 'node_modules/vite/bin/vite.js',
args: '--port 3000',
interpreter: 'C:/Program Files/nodejs/node.exe',
env: { NODE_ENV: 'development' }
},
// Python
{
name: 'project-8000',
cwd: './backend',
script: 'start.cjs',
interpreter: 'C:/Program Files/nodejs/node.exe',
env: { PYTHONUNBUFFERED: '1' }
}
]
}
```
**Framework script paths:**
| Framework | script | args |
|-----------|--------|------|
| Vite | `node_modules/vite/bin/vite.js` | `--port {port}` |
| Next.js | `node_modules/next/dist/bin/next` | `dev -p {port}` |
| Nuxt | `node_modules/nuxt/bin/nuxt.mjs` | `dev --port {port}` |
| Express | `src/index.js` or `server.js` | - |
### Python wrapper script (start.cjs)
```javascript
const { spawn } = require('child_process');
const proc = spawn('python', ['-m', 'uvicorn', 'app.main:app', '--host', '0.0.0.0', '--port', '8000', '--reload'], {
cwd: __dirname, stdio: 'inherit', windowsHide: true
});
proc.on('close', (code) => process.exit(code));
```
***
## Command File Templates (Minimal Content)
### pm2-all.md (start all + monitor)
````markdown
Start all services and open the PM2 monitor.
```bash
cd "{PROJECT_ROOT}" && pm2 start ecosystem.config.cjs && start wt.exe -d "{PROJECT_ROOT}" pwsh -NoExit -c "pm2 monit"
```
````
### pm2-all-stop.md
````markdown
Stop all services.
```bash
cd "{PROJECT_ROOT}" && pm2 stop all
```
````
### pm2-all-restart.md
````markdown
Restart all services.
```bash
cd "{PROJECT_ROOT}" && pm2 restart all
```
````
### pm2-{port}.md (start single + logs)
````markdown
Start {name} ({port}) and open its logs.
```bash
cd "{PROJECT_ROOT}" && pm2 start ecosystem.config.cjs --only {name} && start wt.exe -d "{PROJECT_ROOT}" pwsh -NoExit -c "pm2 logs {name}"
```
````
### pm2-{port}-stop.md
````markdown
Stop {name} ({port}).
```bash
cd "{PROJECT_ROOT}" && pm2 stop {name}
```
````
### pm2-{port}-restart.md
````markdown
Restart {name} ({port}).
```bash
cd "{PROJECT_ROOT}" && pm2 restart {name}
```
````
### pm2-logs.md
````markdown
View all PM2 logs.
```bash
cd "{PROJECT_ROOT}" && pm2 logs
```
````
### pm2-status.md
````markdown
View PM2 status.
```bash
cd "{PROJECT_ROOT}" && pm2 status
```
````
### PowerShell script (pm2-logs-{port}.ps1)
```powershell
Set-Location "{PROJECT_ROOT}"
pm2 logs {name}
```
### PowerShell script (pm2-monit.ps1)
```powershell
Set-Location "{PROJECT_ROOT}"
pm2 monit
```
***
## Key Rules
1. **Config file**: `ecosystem.config.cjs` (not .js)
2. **Node.js**: point the script directly at the bin path + interpreter
3. **Python**: Node.js wrapper script + `windowsHide: true`
4. **Open a new window**: `start wt.exe -d "{path}" pwsh -NoExit -c "command"`
5. **Minimal content**: each command file is just a 1-2 line description + a bash code block
6. **Direct execution**: no AI parsing needed; the bash command runs as-is
***
## Execution
Based on `$ARGUMENTS`, perform the initialization:
1. Scan the project for services
2. Generate `ecosystem.config.cjs`
3. Generate `{backend}/start.cjs` for Python services (if applicable)
4. Generate the command files in `.claude/commands/`
5. Generate the script files in `.claude/scripts/`
6. **Update the project CLAUDE.md** with PM2 info (see below)
7. **Show a completion summary** with terminal commands
***
## Post-Init: Update CLAUDE.md
After generating the files, append a PM2 section to the project's `CLAUDE.md` (create it if it doesn't exist):
````markdown
## PM2 Services
| Port | Name | Type |
|------|------|------|
| {port} | {name} | {type} |
**Terminal commands:**
```bash
pm2 start ecosystem.config.cjs # First time
pm2 start all # After first time
pm2 stop all / pm2 restart all
pm2 start {name} / pm2 stop {name}
pm2 logs / pm2 status / pm2 monit
pm2 save # Save process list
pm2 resurrect # Restore saved list
```
````
**Rules for updating CLAUDE.md:**
* If a PM2 section already exists, replace it
* If not, append to the end
* Keep the content lean and essential
***
## Post-Init: Show Summary
After all files are generated, output:
```
## PM2 Init Complete
**Services:**
| Port | Name | Type |
|------|------|------|
| {port} | {name} | {type} |
**Claude Commands:** /pm2-all, /pm2-all-stop, /pm2-{port}, /pm2-{port}-stop, /pm2-logs, /pm2-status
**Terminal Commands:**
## First time (with config file)
pm2 start ecosystem.config.cjs && pm2 save
## After first time (simplified)
pm2 start all # Start all
pm2 stop all # Stop all
pm2 restart all # Restart all
pm2 start {name} # Start single
pm2 stop {name} # Stop single
pm2 logs # View logs
pm2 monit # Monitor panel
pm2 resurrect # Restore saved processes
**Tip:** Run `pm2 save` after first start to enable simplified commands.
```

# Rules
## Structure
Rules are organized as one **common** layer plus **language-specific** directories:
```
rules/
├── common/              # Language-agnostic principles (always install)
│   ├── coding-style.md
│   ├── git-workflow.md
│   ├── testing.md
│   ├── performance.md
│   ├── patterns.md
│   ├── hooks.md
│   ├── agents.md
│   └── security.md
├── typescript/          # TypeScript/JavaScript specific
├── python/              # Python specific
└── golang/              # Go specific
```
* **common/** holds universal principles — no language-specific code examples.
* **Language directories** extend the common rules with framework-specific patterns, tooling, and code examples. Each file references its common counterpart.
## Installation
### Option 1: Install script (recommended)
```bash
# Install common + one or more language-specific rule sets
./install.sh typescript
./install.sh python
./install.sh golang
# Install multiple languages at once
./install.sh typescript python
```
### Option 2: Manual installation
> **Important:** copy the directories whole — do not flatten them with `/*`.
> The common and language-specific directories contain files with the same names.
> Flattening them into one directory makes the language-specific files overwrite the common rules and breaks the relative `../common/` references the language-specific files use.
```bash
# Install common rules (required for all projects)
cp -r rules/common ~/.claude/rules/common
# Install language-specific rules based on your project's tech stack
cp -r rules/typescript ~/.claude/rules/typescript
cp -r rules/python ~/.claude/rules/python
cp -r rules/golang ~/.claude/rules/golang
# Note: configure according to your actual project requirements; the lines above are for reference only.
```
## Rules vs. Skills
* **Rules** define broadly applicable standards, conventions, and checklists (e.g. "80% test coverage", "no hardcoded secrets").
* **Skills** (the `skills/` directory) provide deep, actionable reference material for specific tasks (e.g. `python-patterns`, `golang-testing`).
Language-specific rule files reference related skills where appropriate. Rules tell you *what* to do; skills tell you *how* to do it.
## Adding a New Language
To add support for a new language (e.g. `rust/`):
1. Create a `rules/rust/` directory
2. Add files that extend the common rules:
   * `coding-style.md` — formatters, idioms, error-handling patterns
   * `testing.md` — test framework, coverage tooling, test organization
   * `patterns.md` — language-specific design patterns
   * `hooks.md` — PostToolUse hooks for formatters, linters, type checkers
   * `security.md` — secrets management, security scanning tools
3. Each file should begin with:
```
> This file extends [common/xxx.md](../common/xxx.md) with <language>-specific content.
```
4. Reference existing skills where available, or create new ones under `skills/`.

# Coding Style
## Immutability (Critical)
Always create new objects; never mutate:
```javascript
// WRONG: Mutation
function updateUser(user, name) {
user.name = name // MUTATION!
return user
}
// CORRECT: Immutability
function updateUser(user, name) {
return {
...user,
name
}
}
```
## File Organization
Many small files > few large files:
* High cohesion, low coupling
* Typically 200-400 lines, 800 max
* Extract utilities out of large components
* Organize by feature/domain, not by type
## Error Handling
Always handle errors comprehensively:
```typescript
try {
const result = await riskyOperation()
return result
} catch (error) {
console.error('Operation failed:', error)
throw new Error('Detailed user-friendly message')
}
```
## Input Validation
Always validate user input:
```typescript
import { z } from 'zod'
const schema = z.object({
email: z.string().email(),
age: z.number().int().min(0).max(150)
})
const validated = schema.parse(input)
```
## Code Quality Checklist
Before marking work complete:
* [ ] Code is readable and well-named
* [ ] Functions are short (<50 lines)
* [ ] Files are focused (<800 lines)
* [ ] No deep nesting (>4 levels)
* [ ] Proper error handling
* [ ] No console.log statements
* [ ] No hardcoded values
* [ ] No mutation (use immutable patterns)

For independent operations, **always** use parallel task execution:
```markdown
# GOOD: Parallel execution
Launch 3 agents in parallel:
1. Agent 1: Security analysis of auth.ts
2. Agent 2: Performance review of cache system
3. Agent 3: Type checking of utils.ts

# BAD: Sequential when unnecessary
First agent 1, then agent 2, then agent 3
```
## Multi-Perspective Analysis

# Coding Style
## Immutability (Critical)
Always create new objects; never mutate existing ones:
```
// Pseudocode
WRONG: modify(original, field, value) → changes original in-place
CORRECT: update(original, field, value) → returns new copy with change
```
Rationale: immutable data prevents hidden side effects, makes debugging easier, and enables safe concurrency.
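These common rules stay language-agnostic by design, so purely as an illustration of the pseudocode above, here is the update-vs-mutate distinction sketched in Python (names and fields are hypothetical):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: attempts to mutate raise an error
class User:
    name: str
    email: str

def rename(user: User, name: str) -> User:
    # CORRECT: return a new copy with the change; the original is untouched
    return replace(user, name=name)

alice = User("alice", "alice@example.com")
renamed = rename(alice, "alicia")
print(alice.name, renamed.name)  # alice alicia
```

Because `alice` is never modified, any other code holding a reference to it sees a stable value.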
## File Organization
Many small files > few large files:
* High cohesion, low coupling
* Typically 200-400 lines, 800 max
* Extract utilities out of large modules
* Organize by feature/domain, not by type
## Error Handling
Always handle errors comprehensively:
* Handle errors explicitly at every layer
* Provide user-friendly error messages in user-facing code
* Log detailed error context server-side
* Never swallow errors silently
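As a language-neutral illustration (sketched in Python here; the function and message wording are hypothetical), handling an error explicitly at a boundary might look like:

```python
import logging

logger = logging.getLogger(__name__)

def load_profile(fetch, user_id):
    """Wrap a risky call: log full context server-side, surface a friendly message."""
    try:
        return fetch(user_id)
    except Exception as exc:
        # Detailed context stays in the server logs...
        logger.error("profile fetch failed for user %s: %s", user_id, exc)
        # ...while the caller gets a clear, user-friendly error instead of silence.
        raise RuntimeError("Could not load your profile; please try again.") from exc
```

The error is neither swallowed nor leaked raw: it is logged with context and re-raised in a form the UI layer can show.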
## Input Validation
Always validate at system boundaries:
* Validate all user input before processing
* Use schema-based validation where available
* Fail fast with clear error messages
* Never trust external data (API responses, user input, file contents)
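A minimal boundary-validation sketch, using only the standard library (Python chosen for illustration only; the field names and limits are hypothetical, and a schema library would normally replace the hand-written checks):

```python
def validate_signup(payload: dict) -> dict:
    """Fail fast at the boundary; never trust external data (illustrative sketch)."""
    email = payload.get("email")
    age = payload.get("age")
    if not isinstance(email, str) or "@" not in email:
        raise ValueError("email must be a valid address")
    if not isinstance(age, int) or not (0 <= age <= 150):
        raise ValueError("age must be an integer between 0 and 150")
    # Return only the validated fields; drop anything unexpected.
    return {"email": email, "age": age}
```

Everything downstream can then assume well-formed data, because nothing crosses the boundary unvalidated.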
## Code Quality Checklist
Before marking work complete:
* [ ] Code is readable and well-named
* [ ] Functions are short (<50 lines)
* [ ] Files are focused (<800 lines)
* [ ] No deep nesting (>4 levels)
* [ ] Proper error handling
* [ ] No hardcoded values (use constants or config)
* [ ] No mutation (use immutable patterns)

* **PostToolUse**: after tool execution (auto-formatting, linting)
* **Stop**: at session end (final verification)
## Current Hooks (in ~/.claude/settings.json)
### PreToolUse
* **tmux reminder**: suggests tmux for long-running commands (npm, pnpm, yarn, cargo, etc.)
* **git push review**: opens in Zed for review before pushing
* **Doc blocker**: blocks creation of unnecessary .md/.txt files
### PostToolUse
* **PR creation**: logs the PR URL and GitHub Actions status
* **Prettier**: auto-formats JS/TS files after edits
* **TypeScript check**: runs tsc after editing .ts/.tsx files
* **console.log warning**: warns about console.log in edited files
### Stop
* **console.log audit**: checks all modified files for console.log before the session ends
## Auto-Accept Permissions
Use with caution:

# Common Patterns
## Skeleton Projects
When implementing a new feature:
1. Search for battle-tested skeleton projects
2. Evaluate the options with parallel agents:
   * Security assessment
   * Scalability analysis
   * Relevance scoring
   * Implementation planning
3. Clone the best match as a base
4. Iterate within the proven structure
## Design Patterns
### Repository Pattern
Encapsulate data access behind a consistent interface:
* Define standard operations (findAll, findById, create, update, delete)
* Concrete implementations handle the storage details (database, API, files, etc.)
* Business logic depends on the abstract interface, not the storage mechanism
* Makes swapping data sources easy and simplifies testing with mocks
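Since these common patterns are deliberately language-agnostic, the following Python sketch is illustration only (class and method names are hypothetical): business logic depends on the interface, and an in-memory fake stands in for real storage during tests.

```python
from typing import Optional, Protocol

class UserRepository(Protocol):
    """The abstract interface business logic depends on."""
    def find_by_id(self, user_id: str) -> Optional[dict]: ...
    def create(self, user: dict) -> dict: ...

class InMemoryUserRepository:
    """Storage detail hidden behind the interface; swap for a DB-backed version later."""
    def __init__(self):
        self._rows = {}

    def find_by_id(self, user_id):
        return self._rows.get(user_id)

    def create(self, user):
        self._rows[user["id"]] = user
        return user

def greet(repo: UserRepository, user_id: str) -> str:
    # Business logic: no idea (or care) how users are stored.
    user = repo.find_by_id(user_id)
    return f"hello {user['name']}" if user else "unknown user"
```

Swapping `InMemoryUserRepository` for a database-backed implementation requires no change to `greet`.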
### API Response Format
Use a consistent envelope for all API responses:
* Include a success/status indicator
* Include the data payload (nullable on error)
* Include an error message field (nullable on success)
* Include metadata for paginated responses (total, page, limit)
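A minimal envelope helper covering those four bullets, sketched in Python for illustration; the field names (`success`, `data`, `error`, `meta`) are one common choice, not a format these rules prescribe.

```python
def envelope(data=None, error=None, meta=None):
    """Consistent API response envelope: status flag, payload, error, optional metadata."""
    resp = {"success": error is None, "data": data, "error": error}
    if meta is not None:
        resp["meta"] = meta  # e.g. pagination: total, page, limit
    return resp

ok = envelope(data=[{"id": 1}], meta={"total": 1, "page": 1, "limit": 20})
fail = envelope(error="user not found")
```

Every endpoint returning the same shape means clients can handle success, error, and pagination with one code path.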

* 文档更新
* 简单的错误修复
## Extended Thinking + Plan Mode
Extended thinking is enabled by default, reserving up to 31,999 tokens for internal reasoning.
Control extended thinking via:
* **Toggle**: Option+T (macOS) / Alt+T (Windows/Linux)
* **Config**: set `alwaysThinkingEnabled` in `~/.claude/settings.json`
* **Budget cap**: `export MAX_THINKING_TOKENS=10000`
* **Verbose mode**: Ctrl+O to view the thinking output
For complex tasks that need deep reasoning:
1. Make sure extended thinking is enabled (on by default)
2. Enable **Plan Mode** for a structured approach
3. Run multiple rounds of critique for thorough analysis
4. Use split-role subagents for diverse perspectives
## Build Troubleshooting

## Secrets Management
* Never hardcode secrets in source code
* Always use environment variables or a secrets manager
* Validate that required secrets are present at startup
* Rotate any secret that may have been exposed
## Security Response Protocol

1. **Unit tests** - individual functions, utilities, components
2. **Integration tests** - API endpoints, database operations
3. **End-to-end tests** - critical user flows (framework chosen per language)
## Test-Driven Development
## Agent Support
* **tdd-guide** - use proactively for new features; enforces test-first

# Go Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with Go-specific content.
## Formatting
* **gofmt** and **goimports** are mandatory — no style debates
## Design Principles
* Accept interfaces, return structs
* Keep interfaces small (1-3 methods)
## Error Handling
Always wrap errors with context:
```go
if err != nil {
return fmt.Errorf("failed to create user: %w", err)
}
```
## Reference
See the `golang-patterns` skill for comprehensive Go idioms and patterns.

# Go Hooks
> This file extends [common/hooks.md](../common/hooks.md) with Go-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
* **gofmt/goimports**: auto-format `.go` files after edits
* **go vet**: run static analysis after editing `.go` files
* **staticcheck**: run extended static checks on modified packages

# Go Patterns
> This file extends [common/patterns.md](../common/patterns.md) with Go-specific content.
## Functional Options
```go
type Option func(*Server)
func WithPort(port int) Option {
return func(s *Server) { s.port = port }
}
func NewServer(opts ...Option) *Server {
s := &Server{port: 8080}
for _, opt := range opts {
opt(s)
}
return s
}
```
## Small Interfaces
Define interfaces where they are used, not where they are implemented.
## Dependency Injection
Inject dependencies through constructors:
```go
func NewUserService(repo UserRepository, logger Logger) *UserService {
return &UserService{repo: repo, logger: logger}
}
```
## Reference
For comprehensive Go patterns (including concurrency, error handling, and package organization), see the `golang-patterns` skill.

# Go Security
> This file extends [common/security.md](../common/security.md) with Go-specific content.
## Secrets Management
```go
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
log.Fatal("OPENAI_API_KEY not configured")
}
```
## Security Scanning
* Use **gosec** for static security analysis:
```bash
gosec ./...
```
## Context & Timeouts
Always use `context.Context` for timeout control:
```go
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
```

# Go Testing
> This file extends [common/testing.md](../common/testing.md) with Go-specific content.
## Framework
Use the standard `go test` with **table-driven tests**.
## Race Detection
Always run with the `-race` flag:
```bash
go test -race ./...
```
## Coverage
```bash
go test -cover ./...
```
## Reference
See the `golang-testing` skill for detailed Go testing patterns and helpers.

# Python Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with Python-specific content.
## Standards
* Follow **PEP 8**
* Use **type annotations** on all function signatures
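A small example of the standard above — PEP 8 naming plus annotations on every signature (the functions themselves are hypothetical):

```python
from typing import Optional

def normalize_email(raw: str) -> str:
    """Lowercase and strip an email address."""
    return raw.strip().lower()

def find_domain(email: str) -> Optional[str]:
    """Return the domain part, or None if the address has no '@'."""
    _, sep, domain = email.partition("@")
    return domain if sep else None

print(find_domain(normalize_email("  Alice@Example.COM ")))  # example.com
```

With the signatures annotated, mypy/pyright (see the hooks file) can verify callers without running the code.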
## Immutability
Prefer immutable data structures:
```python
from dataclasses import dataclass
@dataclass(frozen=True)
class User:
name: str
email: str
from typing import NamedTuple
class Point(NamedTuple):
x: float
y: float
```
## Formatting
* Use **black** for code formatting
* Use **isort** for import sorting
* Use **ruff** for linting
## Reference
See the `python-patterns` skill for comprehensive Python idioms and patterns.

# Python Hooks
> This file extends [common/hooks.md](../common/hooks.md) with Python-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
* **black/ruff**: auto-format `.py` files after edits
* **mypy/pyright**: run type checking after editing `.py` files
## Warnings
* Warn about `print()` statements in edited files (use the `logging` module instead)
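The checker behind such a warning could be as small as the following sketch. The hook wiring itself lives in `~/.claude/settings.json`; this function is illustrative, not the shipped implementation.

```python
import re

def find_print_calls(source: str) -> list:
    """Return 1-based line numbers of print(...) calls; each is a candidate
    for replacement with the logging module. Illustrative sketch."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if line.strip().startswith("#"):
            continue  # skip comment-only lines
        if re.search(r"\bprint\s*\(", line):
            hits.append(lineno)
    return hits

sample = "import logging\nprint('debug')\n# print in a comment\nlogging.info('ok')\n"
print(find_print_calls(sample))  # [2]
```

A real hook would also need to skip strings and other false positives — an AST-based scan is the robust version of this idea.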

# Python Patterns
> This file extends [common/patterns.md](../common/patterns.md) with Python-specific content.
## Protocols (Duck Typing)
```python
from typing import Protocol
class Repository(Protocol):
def find_by_id(self, id: str) -> dict | None: ...
def save(self, entity: dict) -> dict: ...
```
## Dataclasses as DTOs
```python
from dataclasses import dataclass
@dataclass
class CreateUserRequest:
name: str
email: str
age: int | None = None
```
## Context Managers & Generators
* Use context managers (`with` statements) for resource management
* Use generators for lazy evaluation and memory-efficient iteration
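Both bullets in one small example — a context manager that guarantees cleanup, plus a generator whose values are produced lazily (the resource dict is a stand-in for a real handle):

```python
from contextlib import contextmanager

@contextmanager
def open_resource(name: str):
    resource = {"name": name, "open": True}
    try:
        yield resource
    finally:
        resource["open"] = False  # cleanup always runs, even on exceptions

def squares(limit: int):
    """Lazy generator: values are produced one at a time, never as a full list."""
    for n in range(limit):
        yield n * n

with open_resource("db") as r:
    # Only 3 values are ever computed, even though the limit is huge.
    first_three = [s for s, _ in zip(squares(10**9), range(3))]
print(first_three, r["open"])  # [0, 1, 4] False
```

The generator never materializes a billion squares, and the `finally` block closes the resource no matter how the `with` body exits.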
## Reference
See the `python-patterns` skill for comprehensive patterns, including decorators, concurrency, and package organization.

# Python Security
> This file extends [common/security.md](../common/security.md) with Python-specific content.
## Secrets Management
```python
import os
from dotenv import load_dotenv
load_dotenv()
api_key = os.environ["OPENAI_API_KEY"] # Raises KeyError if missing
```
## Security Scanning
* Use **bandit** for static security analysis:
```bash
bandit -r src/
```
## Reference
See the `django-security` skill for Django-specific security guidance (if applicable).

# Python Testing
> This file extends [common/testing.md](../common/testing.md) with Python-specific content.
## Framework
Use **pytest** as the test framework.
## Coverage
```bash
pytest --cov=src --cov-report=term-missing
```
## Test Organization
Use `pytest.mark` to categorize tests:
```python
import pytest
@pytest.mark.unit
def test_calculate_total():
...
@pytest.mark.integration
def test_database_connection():
...
```
## Reference
See the `python-testing` skill for detailed pytest patterns and fixtures.

# TypeScript/JavaScript Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with TypeScript/JavaScript-specific content.
## Immutability
Use the spread operator for immutable updates:
```typescript
// WRONG: Mutation
function updateUser(user, name) {
user.name = name // MUTATION!
return user
}
// CORRECT: Immutability
function updateUser(user, name) {
return {
...user,
name
}
}
```
## Error Handling
Use async/await with try-catch:
```typescript
try {
const result = await riskyOperation()
return result
} catch (error) {
console.error('Operation failed:', error)
throw new Error('Detailed user-friendly message')
}
```
## Input Validation
Use Zod for schema-based validation:
```typescript
import { z } from 'zod'
const schema = z.object({
email: z.string().email(),
age: z.number().int().min(0).max(150)
})
const validated = schema.parse(input)
```
## Console.log
* No `console.log` statements allowed in production code
* Use a proper logging library instead
* See the hooks for automated detection

# TypeScript/JavaScript Hooks
> This file extends [common/hooks.md](../common/hooks.md) with TypeScript/JavaScript-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
* **Prettier**: auto-format JS/TS files after edits
* **TypeScript check**: run `tsc` after editing `.ts`/`.tsx` files
* **console.log warning**: warn about `console.log` in edited files
## Stop Hooks
* **console.log audit**: check all modified files for `console.log` before the session ends

# TypeScript/JavaScript Patterns
> This file extends [common/patterns.md](../common/patterns.md) with TypeScript/JavaScript-specific content.
## API Response Format

# TypeScript/JavaScript Security
> This file extends [common/security.md](../common/security.md) with TypeScript/JavaScript-specific content.
## Secrets Management
```typescript
// NEVER: Hardcoded secrets
const apiKey = "sk-proj-xxxxx"
// ALWAYS: Environment variables
const apiKey = process.env.OPENAI_API_KEY
if (!apiKey) {
throw new Error('OPENAI_API_KEY not configured')
}
```
## Agent Support
* Use the **security-reviewer** skill for comprehensive security audits

# TypeScript/JavaScript Testing
> This file extends [common/testing.md](../common/testing.md) with TypeScript/JavaScript-specific content.
## E2E Testing
Use **Playwright** as the E2E framework for critical user flows.
## Agent Support
* **e2e-runner** - Playwright E2E testing expert

---
name: configure-ecc
description: Interactive installer for Everything Claude Code — guides the user through selecting and installing skills and rules into user-level or project-level directories, validates paths, and optionally optimizes the installed files.
---
# Configure Everything Claude Code (ECC)
An interactive, step-by-step installation wizard for the Everything Claude Code project. Uses `AskUserQuestion` to guide the user through selective installation of skills and rules, then validates correctness and offers optimization.
## When to Activate
* The user says "configure ecc", "install ecc", "setup everything claude code", or similar
* The user wants to selectively install skills or rules from this project
* The user wants to validate or repair an existing ECC installation
* The user wants to optimize installed skills or rules for their project
## Prerequisites
This skill must be accessible to Claude Code before activation. Two ways to bootstrap:
1. **Via plugin**: `/plugin install everything-claude-code` — the plugin loads this skill automatically
2. **Manually**: copy just this skill to `~/.claude/skills/configure-ecc/SKILL.md`, then activate by saying "configure ecc"
***
## Step 0: Clone the ECC Repository
Before any installation, clone the latest ECC source into `/tmp`:
```bash
rm -rf /tmp/everything-claude-code
git clone https://github.com/affaan-m/everything-claude-code.git /tmp/everything-claude-code
```
`ECC_ROOT=/tmp/everything-claude-code` 设置为所有后续复制操作的源。
如果克隆失败(网络问题等),使用 `AskUserQuestion` 要求用户提供现有 ECC 克隆的本地路径。
***
## 步骤 1选择安装级别
使用 `AskUserQuestion` 询问用户安装位置:
```
Question: "Where should ECC components be installed?"
Options:
- "User-level (~/.claude/)" — "Applies to all your Claude Code projects"
- "Project-level (.claude/)" — "Applies only to the current project"
- "Both" — "Common/shared items user-level, project-specific items project-level"
```
Store the choice as `INSTALL_LEVEL`. Set the target directories:
* User-level: `TARGET=~/.claude`
* Project-level: `TARGET=.claude` (relative to the current project root)
* Both: `TARGET_USER=~/.claude` and `TARGET_PROJECT=.claude`
Create the target directories if they don't exist:
```bash
mkdir -p $TARGET/skills $TARGET/rules
```
***
## Step 2: Select and Install Skills
### 2a: Choose skill categories
There are 27 skills in 4 categories. Use `AskUserQuestion` with `multiSelect: true`:
```
Question: "Which skill categories do you want to install?"
Options:
- "Framework & Language" — "Django, Spring Boot, Go, Python, Java, Frontend, Backend patterns"
- "Database" — "PostgreSQL, ClickHouse, JPA/Hibernate patterns"
- "Workflow & Quality" — "TDD, verification, learning, security review, compaction"
- "All skills" — "Install every available skill"
```
### 2b: Confirm individual skills
For each selected category, print the full skill list below and ask the user to confirm or deselect specific skills. If a list exceeds 4 items, print it as text and use `AskUserQuestion` with an "Install all listed" option plus an "Other" option where the user can paste specific names.
**类别框架与语言16 项技能)**
| 技能 | 描述 |
|-------|-------------|
| `backend-patterns` | Node.js/Express/Next.js 的后端架构、API 设计、服务器端最佳实践 |
| `coding-standards` | TypeScript、JavaScript、React、Node.js 的通用编码标准 |
| `django-patterns` | Django 架构、使用 DRF 的 REST API、ORM、缓存、信号、中间件 |
| `django-security` | Django 安全身份验证、CSRF、SQL 注入、XSS 防护 |
| `django-tdd` | 使用 pytest-django、factory\_boy、模拟、覆盖率的 Django 测试 |
| `django-verification` | Django 验证循环:迁移、代码检查、测试、安全扫描 |
| `frontend-patterns` | React、Next.js、状态管理、性能、UI 模式 |
| `golang-patterns` | 地道的 Go 模式、健壮 Go 应用程序的约定 |
| `golang-testing` | Go 测试:表格驱动测试、子测试、基准测试、模糊测试 |
| `java-coding-standards` | Spring Boot 的 Java 编码标准命名、不可变性、Optional、流 |
| `python-patterns` | Pythonic 惯用法、PEP 8、类型提示、最佳实践 |
| `python-testing` | 使用 pytest、TDD、夹具、模拟、参数化的 Python 测试 |
| `springboot-patterns` | Spring Boot 架构、REST API、分层服务、缓存、异步 |
| `springboot-security` | Spring Security身份验证/授权、验证、CSRF、密钥、速率限制 |
| `springboot-tdd` | 使用 JUnit 5、Mockito、MockMvc、Testcontainers 的 Spring Boot TDD |
| `springboot-verification` | Spring Boot 验证:构建、静态分析、测试、安全扫描 |
**类别数据库3 项技能)**
| 技能 | 描述 |
|-------|-------------|
| `clickhouse-io` | ClickHouse 模式、查询优化、分析、数据工程 |
| `jpa-patterns` | JPA/Hibernate 实体设计、关系、查询优化、事务 |
| `postgres-patterns` | PostgreSQL 查询优化、模式设计、索引、安全 |
**类别工作流与质量8 项技能)**
| 技能 | 描述 |
|-------|-------------|
| `continuous-learning` | 从会话中自动提取可重用模式作为习得技能 |
| `continuous-learning-v2` | 基于本能的学习,带有置信度评分,演变为技能/命令/代理 |
| `eval-harness` | 用于评估驱动开发 (EDD) 的正式评估框架 |
| `iterative-retrieval` | 用于子代理上下文问题的渐进式上下文优化 |
| `security-review` | 安全检查清单身份验证、输入、密钥、API、支付功能 |
| `strategic-compact` | 在逻辑间隔处建议手动上下文压缩 |
| `tdd-workflow` | 强制要求 TDD覆盖率 80% 以上:单元测试、集成测试、端到端测试 |
| `verification-loop` | 验证和质量循环模式 |
**独立技能**
| 技能 | 描述 |
|-------|-------------|
| `project-guidelines-example` | 用于创建项目特定技能的模板 |
### 2c: Perform the installation
For each selected skill, copy the entire skill directory:
```bash
cp -r $ECC_ROOT/skills/<skill-name> $TARGET/skills/
```
Note: `continuous-learning` and `continuous-learning-v2` have extra files (config.json, hooks, scripts) — be sure to copy the whole directory, not just SKILL.md.
***
## Step 3: Select and Install Rules
Use `AskUserQuestion` with `multiSelect: true`:
```
Question: "Which rule sets do you want to install?"
Options:
- "Common rules (Recommended)" — "Language-agnostic principles: coding style, git workflow, testing, security, etc. (8 files)"
- "TypeScript/JavaScript" — "TS/JS patterns, hooks, testing with Playwright (5 files)"
- "Python" — "Python patterns, pytest, black/ruff formatting (5 files)"
- "Go" — "Go patterns, table-driven tests, gofmt/staticcheck (5 files)"
```
Perform the installation:
```bash
# Common rules (flat copy into rules/)
cp -r $ECC_ROOT/rules/common/* $TARGET/rules/
# Language-specific rules (flat copy into rules/)
cp -r $ECC_ROOT/rules/typescript/* $TARGET/rules/ # if selected
cp -r $ECC_ROOT/rules/python/* $TARGET/rules/ # if selected
cp -r $ECC_ROOT/rules/golang/* $TARGET/rules/ # if selected
```
**Important**: if the user selected any language-specific rules but **not** the common rules, warn them:
> "Language-specific rules extend the common rules. Skipping the common rules may leave coverage incomplete. Install the common rules as well?"
***
## Step 4: Post-Install Verification
After installation, perform these automated checks:
### 4a: Verify files exist
List all installed files and confirm they exist at the target:
```bash
ls -la $TARGET/skills/
ls -la $TARGET/rules/
```
### 4b: Check path references
Scan all installed `.md` files for path references:
```bash
grep -rn "~/.claude/" $TARGET/skills/ $TARGET/rules/
grep -rn "../common/" $TARGET/rules/
grep -rn "skills/" $TARGET/skills/
```
**For project-level installs**, flag any references to `~/.claude/` paths:
* A skill referencing `~/.claude/settings.json` — usually fine (settings are always user-level)
* A skill referencing `~/.claude/skills/` or `~/.claude/rules/` — may break if installed project-level only
* A skill referencing another skill by name — check that the referenced skill is also installed
### 4c: Check cross-references between skills
Some skills reference other skills. Verify these dependencies:
* `django-tdd` may reference `django-patterns`
* `springboot-tdd` may reference `springboot-patterns`
* `continuous-learning-v2` references the `~/.claude/homunculus/` directory
* `python-testing` may reference `python-patterns`
* `golang-testing` may reference `golang-patterns`
* Language-specific rules reference their `common/` counterparts
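The dependency check described above can be automated with a short script. This is an illustrative sketch: the directory layout (`<skills_dir>/<name>/SKILL.md`) matches the installation steps, but the `skills/<name>` reference pattern is an assumed convention, not a guaranteed format.

```python
import os
import re

def missing_skill_refs(skills_dir):
    """Scan installed SKILL.md files for `skills/<name>` references to skills
    that are not installed; return (file, missing-skill) pairs."""
    installed = set(os.listdir(skills_dir)) if os.path.isdir(skills_dir) else set()
    problems = []
    for skill in sorted(installed):
        path = os.path.join(skills_dir, skill, "SKILL.md")
        if not os.path.isfile(path):
            continue  # directories without SKILL.md are reported by check 4a
        with open(path) as f:
            for ref in re.findall(r"skills/([a-z0-9-]+)", f.read()):
                if ref not in installed:
                    problems.append((path, ref))
    return problems
```

Each returned pair maps directly onto the report format in step 4d: the file, the broken reference, and the implied fix (install the missing skill).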
### 4d: Report issues
For each issue found, report:
1. **File**: the file containing the problematic reference
2. **Line**: the line number
3. **Problem**: what's wrong (e.g. "references ~/.claude/skills/python-patterns but python-patterns is not installed")
4. **Suggested fix**: what to do (e.g. "install the python-patterns skill" or "update the path to .claude/skills/")
***
## Step 5: Optimize Installed Files (Optional)
Use `AskUserQuestion`:
```
Question: "Would you like to optimize the installed files for your project?"
Options:
- "Optimize skills" — "Remove irrelevant sections, adjust paths, tailor to your tech stack"
- "Optimize rules" — "Adjust coverage targets, add project-specific patterns, customize tool configs"
- "Optimize both" — "Full optimization of all installed files"
- "Skip" — "Keep everything as-is"
```
### If optimizing skills:
1. Read each installed SKILL.md
2. Ask the user what their project's tech stack is (if not already known)
3. For each skill, suggest removing irrelevant sections
4. Edit the SKILL.md files in place at the install target (**not** the source repo)
5. Fix any path issues found in Step 4
### If optimizing rules:
1. Read each installed rule .md file
2. Ask for the user's preferences:
   * Test coverage target (default 80%)
   * Preferred formatters
   * Git workflow conventions
   * Security requirements
3. Edit the rule files in place at the install target
**Critical**: only modify files at the install target (`$TARGET/`); **never** modify the source ECC repo (`$ECC_ROOT/`).
***
## Step 6: Installation Summary
Clean up the cloned repo from `/tmp`:
```bash
rm -rf /tmp/everything-claude-code
```
Then print a summary report:
```
## ECC Installation Complete
### Installation Target
- Level: [user-level / project-level / both]
- Path: [target path]
### Skills Installed ([count])
- skill-1, skill-2, skill-3, ...
### Rules Installed ([count])
- common (8 files)
- typescript (5 files)
- ...
### Verification Results
- [count] issues found, [count] fixed
- [list any remaining issues]
### Optimizations Applied
- [list changes made, or "None"]
```
***
## 故障排除
### "Claude Code 未获取技能"
* 验证技能目录包含一个 `SKILL.md` 文件(不仅仅是松散的 .md 文件)
* 对于用户级别:检查 `~/.claude/skills/<skill-name>/SKILL.md` 是否存在
* 对于项目级别:检查 `.claude/skills/<skill-name>/SKILL.md` 是否存在
### "规则不工作"
* 规则是平面文件,不在子目录中:`$TARGET/rules/coding-style.md`(正确)对比 `$TARGET/rules/common/coding-style.md`(对于平面安装不正确)
* 安装规则后重启 Claude Code
### "项目级别安装后出现路径引用错误"
* 有些技能假设 `~/.claude/` 路径。运行步骤 4 验证来查找并修复这些问题。
* 对于 `continuous-learning-v2``~/.claude/homunculus/` 目录始终是用户级别的 — 这是预期的,不是错误。

---
name: cpp-testing
description: Use only when writing/updating/fixing C++ tests, configuring GoogleTest/CTest, diagnosing failing or flaky tests, or adding coverage/sanitizers.
---
# C++ Testing (Agent Skill)
An agent-oriented testing workflow for modern C++ (C++17/20) using GoogleTest/GoogleMock and CMake/CTest.
## When to Use
* Writing new C++ tests or fixing existing ones
* Designing unit/integration test coverage for C++ components
* Adding test coverage, CI gates, or regression protection
* Configuring CMake/CTest workflows for consistent execution
* Investigating test failures or flaky behavior
* Enabling sanitizers for memory/race diagnostics
### When Not to Use
* Implementing new product features without touching tests
* Large refactors unrelated to test coverage or failures
* Performance tuning with no test regressions to verify
* Non-C++ projects or non-testing tasks
## Core Concepts
* **TDD loop**: red → green → refactor (write the test first, make the minimal fix, then clean up).
* **Isolation**: prefer dependency injection and fakes over global state.
* **Test layout**: `tests/unit`, `tests/integration`, `tests/testdata`.
* **Mocks vs. fakes**: mocks for interactions, fakes for stateful behavior.
* **CTest discovery**: use `gtest_discover_tests()` for stable test discovery.
* **CI signal**: run a subset first, then the full suite with `--output-on-failure`.
## TDD Workflow
Follow the RED → GREEN → REFACTOR loop:
1. **RED**: write a failing test that captures the new behavior
2. **GREEN**: implement the minimal change to make it pass
3. **REFACTOR**: clean up while the tests stay green
```cpp
// tests/add_test.cpp
#include <gtest/gtest.h>
int Add(int a, int b); // Provided by production code.
TEST(AddTest, AddsTwoNumbers) { // RED
EXPECT_EQ(Add(2, 3), 5);
}
// src/add.cpp
int Add(int a, int b) { // GREEN
return a + b;
}
// REFACTOR: simplify/rename once tests pass
```
## Code Examples
### Basic unit test (gtest)
```cpp
// tests/calculator_test.cpp
#include <gtest/gtest.h>
int Add(int a, int b); // Provided by production code.
TEST(CalculatorTest, AddsTwoNumbers) {
EXPECT_EQ(Add(2, 3), 5);
}
```
### Fixtures (gtest)
```cpp
// tests/user_store_test.cpp
// Pseudocode stub: replace UserStore/User with project types.
#include <gtest/gtest.h>
#include <memory>
#include <optional>
#include <string>
struct User { std::string name; };
class UserStore {
public:
explicit UserStore(std::string /*path*/) {}
void Seed(std::initializer_list<User> /*users*/) {}
std::optional<User> Find(const std::string &/*name*/) { return User{"alice"}; }
};
class UserStoreTest : public ::testing::Test {
protected:
void SetUp() override {
store = std::make_unique<UserStore>(":memory:");
store->Seed({{"alice"}, {"bob"}});
}
std::unique_ptr<UserStore> store;
};
TEST_F(UserStoreTest, FindsExistingUser) {
auto user = store->Find("alice");
ASSERT_TRUE(user.has_value());
EXPECT_EQ(user->name, "alice");
}
```
### Mock (gmock)
```cpp
// tests/notifier_test.cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>
class Notifier {
public:
virtual ~Notifier() = default;
virtual void Send(const std::string &message) = 0;
};
class MockNotifier : public Notifier {
public:
MOCK_METHOD(void, Send, (const std::string &message), (override));
};
class Service {
public:
explicit Service(Notifier &notifier) : notifier_(notifier) {}
void Publish(const std::string &message) { notifier_.Send(message); }
private:
Notifier &notifier_;
};
TEST(ServiceTest, SendsNotifications) {
MockNotifier notifier;
Service service(notifier);
EXPECT_CALL(notifier, Send("hello")).Times(1);
service.Publish("hello");
}
```
### CMake/CTest Quickstart
```cmake
# CMakeLists.txt (excerpt)
cmake_minimum_required(VERSION 3.20)
project(example LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
include(FetchContent)
# Prefer project-locked versions. If using a tag, use a pinned version per project policy.
set(GTEST_VERSION v1.17.0) # Adjust to project policy.
FetchContent_Declare(
googletest
URL https://github.com/google/googletest/archive/refs/tags/${GTEST_VERSION}.zip
)
FetchContent_MakeAvailable(googletest)
add_executable(example_tests
tests/calculator_test.cpp
src/calculator.cpp
)
target_link_libraries(example_tests GTest::gtest GTest::gmock GTest::gtest_main)
enable_testing()
include(GoogleTest)
gtest_discover_tests(example_tests)
```
```bash
cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build -j
ctest --test-dir build --output-on-failure
```
## Running Tests
```bash
ctest --test-dir build --output-on-failure
ctest --test-dir build -R ClampTest
ctest --test-dir build -R "UserStoreTest.*" --output-on-failure
```
```bash
./build/example_tests --gtest_filter=ClampTest.*
./build/example_tests --gtest_filter=UserStoreTest.FindsExistingUser
```
## Debugging Failures
1. Re-run the single failing test with a gtest filter.
2. Add scoped logging around the failing assertion.
3. Re-run with sanitizers enabled.
4. After the root-cause fix, expand back to the full suite.
## Coverage
Prefer target-level settings over global flags.
```cmake
option(ENABLE_COVERAGE "Enable coverage flags" OFF)
if(ENABLE_COVERAGE)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
target_compile_options(example_tests PRIVATE --coverage)
target_link_options(example_tests PRIVATE --coverage)
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
target_compile_options(example_tests PRIVATE -fprofile-instr-generate -fcoverage-mapping)
target_link_options(example_tests PRIVATE -fprofile-instr-generate)
endif()
endif()
```
GCC + gcov + lcov
```bash
cmake -S . -B build-cov -DENABLE_COVERAGE=ON
cmake --build build-cov -j
ctest --test-dir build-cov
lcov --capture --directory build-cov --output-file coverage.info
lcov --remove coverage.info '/usr/*' --output-file coverage.info
genhtml coverage.info --output-directory coverage
```
Clang + llvm-cov
```bash
cmake -S . -B build-llvm -DENABLE_COVERAGE=ON -DCMAKE_CXX_COMPILER=clang++
cmake --build build-llvm -j
LLVM_PROFILE_FILE="build-llvm/default.profraw" ctest --test-dir build-llvm
llvm-profdata merge -sparse build-llvm/default.profraw -o build-llvm/default.profdata
llvm-cov report build-llvm/example_tests -instr-profile=build-llvm/default.profdata
```
## Sanitizers
```cmake
option(ENABLE_ASAN "Enable AddressSanitizer" OFF)
option(ENABLE_UBSAN "Enable UndefinedBehaviorSanitizer" OFF)
option(ENABLE_TSAN "Enable ThreadSanitizer" OFF)
if(ENABLE_ASAN)
add_compile_options(-fsanitize=address -fno-omit-frame-pointer)
add_link_options(-fsanitize=address)
endif()
if(ENABLE_UBSAN)
add_compile_options(-fsanitize=undefined -fno-omit-frame-pointer)
add_link_options(-fsanitize=undefined)
endif()
if(ENABLE_TSAN)
add_compile_options(-fsanitize=thread)
add_link_options(-fsanitize=thread)
endif()
```
## Flaky-Test Guards
* Never use `sleep` for synchronization; use condition variables or latches.
* Create a unique temp directory per test and always clean it up.
* Avoid real time, network, or filesystem dependencies in unit tests.
* Use deterministic seeds for randomized inputs.
## Best Practices
### Do
* Keep tests deterministic and isolated
* Prefer dependency injection over globals
* Use `ASSERT_*` for preconditions, `EXPECT_*` for multiple checks
* Separate unit from integration tests with CTest labels or directories
* Run sanitizers in CI for memory and race detection
### Don't
* Don't depend on real time or the network in unit tests
* Don't use sleeps for synchronization when condition variables will do
* Don't over-mock simple value objects
* Don't use brittle string matching on non-critical logs
### Common Pitfalls
* **Fixed temp paths** → generate a unique temp directory per test and clean it up.
* **Depending on wall-clock time** → inject a clock or use a mock time source.
* **Flaky concurrency tests** → use condition variables/latches and bounded waits.
* **Hidden global state** → reset globals in fixtures or remove them.
* **Over-mocking** → prefer fakes for stateful behavior; mock only interactions.
* **Missing sanitizer runs** → add ASan/UBSan/TSan builds in CI.
* **Coverage only on debug builds** → ensure coverage targets use consistent flags.
## Optional Appendix: Fuzzing / Property-Based Testing
Use only if the project already supports LLVM/libFuzzer or a property-testing library.
* **libFuzzer**: best for pure functions with minimal I/O.
* **RapidCheck**: property-based testing for verifying invariants.
A minimal libFuzzer harness (pseudocode: replace ParseConfig):
```cpp
#include <cstddef>
#include <cstdint>
#include <string>
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
std::string input(reinterpret_cast<const char *>(data), size);
// ParseConfig(input); // project function
return 0;
}
```
## Alternatives to GoogleTest
* **Catch2**: header-only, expressive matchers
* **doctest**: lightweight, minimal compile overhead


@@ -0,0 +1,165 @@
---
name: nutrient-document-processing
description: Process, convert, OCR, extract, redact, sign, and fill documents with the Nutrient DWS API. Supports PDF, DOCX, XLSX, PPTX, HTML, and image files.
---
# Document Processing
Process documents with the [Nutrient DWS Processor API](https://www.nutrient.io/api/). Convert between formats, extract text and tables, OCR scanned documents, redact PII, add watermarks, apply digital signatures, and fill PDF forms.
## Setup
Get a free API key at **[nutrient.io](https://dashboard.nutrient.io/sign_up/?product=processor)**:
```bash
export NUTRIENT_API_KEY="pdf_live_..."
```
All requests are sent as multipart POSTs to `https://api.nutrient.io/build` with an `instructions` JSON field.
## Operations
### Convert Documents
```bash
# DOCX to PDF
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.docx=@document.docx" \
-F 'instructions={"parts":[{"file":"document.docx"}]}' \
-o output.pdf
# PDF to DOCX
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"output":{"type":"docx"}}' \
-o output.docx
# HTML to PDF
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "index.html=@index.html" \
-F 'instructions={"parts":[{"html":"index.html"}]}' \
-o output.pdf
```
Supported input formats: PDF, DOCX, XLSX, PPTX, DOC, XLS, PPT, PPS, PPSX, ODT, RTF, HTML, JPG, PNG, TIFF, HEIC, GIF, WebP, SVG, TGA, EPS.
### Extract Text and Data
```bash
# Extract plain text
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"output":{"type":"text"}}' \
-o output.txt
# Extract tables as Excel
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"output":{"type":"xlsx"}}' \
-o tables.xlsx
```
### OCR Scanned Documents
```bash
# OCR to searchable PDF (supports 100+ languages)
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "scanned.pdf=@scanned.pdf" \
-F 'instructions={"parts":[{"file":"scanned.pdf"}],"actions":[{"type":"ocr","language":"english"}]}' \
-o searchable.pdf
```
Language support: 100+ languages via ISO 639-2 codes (e.g., `eng`, `deu`, `fra`, `spa`, `jpn`, `kor`, `chi_sim`, `chi_tra`, `ara`, `hin`, `rus`). Full language names such as `english` and `german` also work. See the [full OCR language table](https://www.nutrient.io/guides/document-engine/ocr/language-support/) for all supported codes.
### Redact Sensitive Information
```bash
# Pattern-based (SSN, email)
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"actions":[{"type":"redaction","strategy":"preset","strategyOptions":{"preset":"social-security-number"}},{"type":"redaction","strategy":"preset","strategyOptions":{"preset":"email-address"}}]}' \
-o redacted.pdf
# Regex-based
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"actions":[{"type":"redaction","strategy":"regex","strategyOptions":{"regex":"\\b[A-Z]{2}\\d{6}\\b"}}]}' \
-o redacted.pdf
```
Presets: `social-security-number`, `email-address`, `credit-card-number`, `international-phone-number`, `north-american-phone-number`, `date`, `time`, `url`, `ipv4`, `ipv6`, `mac-address`, `us-zip-code`, `vin`
### Add Watermarks
```bash
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"actions":[{"type":"watermark","text":"CONFIDENTIAL","fontSize":72,"opacity":0.3,"rotation":-45}]}' \
-o watermarked.pdf
```
### Digital Signatures
```bash
# Self-signed CMS signature
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "document.pdf=@document.pdf" \
-F 'instructions={"parts":[{"file":"document.pdf"}],"actions":[{"type":"sign","signatureType":"cms"}]}' \
-o signed.pdf
```
### Fill PDF Forms
```bash
curl -X POST https://api.nutrient.io/build \
-H "Authorization: Bearer $NUTRIENT_API_KEY" \
-F "form.pdf=@form.pdf" \
-F 'instructions={"parts":[{"file":"form.pdf"}],"actions":[{"type":"fillForm","formFields":{"name":"Jane Smith","email":"jane@example.com","date":"2026-02-06"}}]}' \
-o filled.pdf
```
## MCP Server (Alternative)
For native tool integration, use the MCP server instead of curl:
```json
{
"mcpServers": {
"nutrient-dws": {
"command": "npx",
"args": ["-y", "@nutrient-sdk/dws-mcp-server"],
"env": {
"NUTRIENT_DWS_API_KEY": "YOUR_API_KEY",
"SANDBOX_PATH": "/path/to/working/directory"
}
}
}
}
```
## Use Cases
* Convert documents between formats (PDF, DOCX, XLSX, PPTX, HTML, images)
* Extract text, tables, or key-value pairs from PDFs
* OCR scanned documents or images
* Redact PII before sharing documents
* Watermark drafts or confidential documents
* Digitally sign contracts or agreements
* Fill PDF forms programmatically
## Links
* [API Playground](https://dashboard.nutrient.io/processor-api/playground/)
* [Full API documentation](https://www.nutrient.io/guides/dws-processor/)
* [Agent skill repository](https://github.com/PSPDFKit-labs/nutrient-agent-skill)
* [MCP server on npm](https://www.npmjs.com/package/@nutrient-sdk/dws-mcp-server)


@@ -0,0 +1,171 @@
---
name: security-scan
description: Scan your Claude Code configuration (.claude/ directory) with AgentShield to detect security vulnerabilities, misconfigurations, and injection risks. Checks CLAUDE.md, settings.json, MCP servers, hooks, and agent definitions.
---
# Security Scan Skill
Audit your Claude Code configuration for security issues with [AgentShield](https://github.com/affaan-m/agentshield).
## When to Activate
* When setting up a new Claude Code project
* After modifying `.claude/settings.json`, `CLAUDE.md`, or MCP configs
* Before committing configuration changes
* When joining a codebase with an existing Claude Code configuration
* As part of periodic security hygiene
## What It Scans
| File | Checks |
|------|--------|
| `CLAUDE.md` | Hardcoded secrets, auto-run instructions, prompt-injection patterns |
| `settings.json` | Overly permissive allowlists, missing denylists, dangerous bypass flags |
| `mcp.json` | Risky MCP servers, hardcoded env-var secrets, npx supply-chain risk |
| `hooks/` | Command injection via `${file}` interpolation, data exfiltration, silent error suppression |
| `agents/*.md` | Unrestricted tool access, prompt-injection attack surface, missing model spec |
## Prerequisites
AgentShield must be installed. Check, and install if needed:
```bash
# Check if installed
npx ecc-agentshield --version
# Install globally (recommended)
npm install -g ecc-agentshield
# Or run directly via npx (no install needed)
npx ecc-agentshield scan .
```
## Usage
### Basic Scan
Run against the current project's `.claude/` directory:
```bash
# Scan current project
npx ecc-agentshield scan
# Scan a specific path
npx ecc-agentshield scan --path /path/to/.claude
# Scan with minimum severity filter
npx ecc-agentshield scan --min-severity medium
```
### Output Formats
```bash
# Terminal output (default) — colored report with grade
npx ecc-agentshield scan
# JSON — for CI/CD integration
npx ecc-agentshield scan --format json
# Markdown — for documentation
npx ecc-agentshield scan --format markdown
# HTML — self-contained dark-theme report
npx ecc-agentshield scan --format html > security-report.html
```
### Auto-Fix
Automatically apply safe fixes (only issues flagged as auto-fixable):
```bash
npx ecc-agentshield scan --fix
```
This will:
* Replace hardcoded secrets with environment-variable references
* Tighten wildcard permissions to scoped alternatives
* Never touch manual-fix-only recommendations
### Opus 4.6 Deep Analysis
Run an adversarial three-agent pipeline for deeper analysis:
```bash
# Requires ANTHROPIC_API_KEY
export ANTHROPIC_API_KEY=your-key
npx ecc-agentshield scan --opus --stream
```
This runs:
1. **Attacker (red team)** — finds attack vectors
2. **Defender (blue team)** — proposes hardening measures
3. **Auditor (final verdict)** — synthesizes both sides
### Initialize a Secure Config
Scaffold a new, secure `.claude/` configuration from scratch:
```bash
npx ecc-agentshield init
```
Creates:
* `settings.json` with scoped permissions and a denylist
* `CLAUDE.md` following security best practices
* An `mcp.json` placeholder
### GitHub Action
Add to your CI pipeline:
```yaml
- uses: affaan-m/agentshield@v1
with:
path: '.'
min-severity: 'medium'
fail-on-findings: true
```
## Severity Grades
| Grade | Score | Meaning |
|-------|-------|---------|
| A | 90-100 | Secure configuration |
| B | 75-89 | Minor issues |
| C | 60-74 | Needs attention |
| D | 40-59 | Significant risk |
| F | 0-39 | Critical vulnerabilities |
## Interpreting Results
### Critical findings (fix immediately)
* Hardcoded API keys or tokens in config files
* `Bash(*)` in the allowlist (unrestricted shell access)
* Command injection via `${file}` interpolation in hooks
* MCP servers that run a shell
### High findings (fix before production)
* Auto-run instructions in CLAUDE.md (prompt-injection vector)
* Missing denylist in the permission config
* Agents with unnecessary Bash access
### Medium findings (recommended)
* Silent error suppression in hooks (`2>/dev/null`, `|| true`)
* Missing PreToolUse security hooks
* `npx -y` auto-install in MCP server configs
### Info findings (awareness)
* MCP servers missing descriptions
* Restrictive instructions correctly flagged as good practice
## Links
* **GitHub**: [github.com/affaan-m/agentshield](https://github.com/affaan-m/agentshield)
* **npm**: [npmjs.com/package/ecc-agentshield](https://www.npmjs.com/package/ecc-agentshield)


@@ -0,0 +1,358 @@
# The Complete Longform Guide to Everything Claude Code
![Header: The Longform Guide to Everything Claude Code](../../assets/images/longform/01-header.png)
***
> **Prerequisite**: This guide builds on [the shortform guide](./the-shortform-guide.md). If you haven't set up skills, hooks, subagents, MCP, and plugins yet, read that first.
![Reference to Shorthand Guide](../../assets/images/longform/02-shortform-reference.png)
*The shortform guide - read it first*
In the shortform guide I covered the foundational setup: skills and commands, hooks, subagents, MCP, plugins, and the configuration patterns that form the backbone of an effective Claude Code workflow. That was the setup guide and the base architecture.
This longform guide goes deeper into the techniques that separate productive sessions from wasted ones. If you haven't read the shortform guide, go back and get your configuration in place first. Everything below assumes skills, agents, hooks, and MCP are configured and working.
The themes here: token economics, memory persistence, verification patterns, parallelization strategies, and the compounding returns of building reusable workflows. These are the patterns, distilled from 10+ months of daily use, that determine whether you suffer context rot within the first hour or sustain productive sessions for hours.
Everything covered in both the shortform and longform guides is on GitHub: `github.com/affaan-m/everything-claude-code`
***
## Tips & Tricks
### Some MCPs are replaceable, and that frees up your context window
For MCPs like version control (GitHub), databases (Supabase), and deployment (Vercel, Railway), most of those platforms already have robust CLIs; the MCP is essentially a wrapper around them. An MCP is a convenient wrapper, but it comes at a cost.
To get MCP-like functionality from a CLI without actually using an MCP (and without the shrunken context window that comes with it), package the functionality into skills and commands. Extract the tools the MCP exposes that make things easy and turn them into commands.
Example: instead of always loading the GitHub MCP, create a `/gh-pr` command that wraps `gh pr create` with your preferred options. Instead of letting the Supabase MCP eat context, create skills that use the Supabase CLI directly.
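As a rough sketch of that approach (the file name, contents, and `CLAUDE_CMD_DIR` override are illustrative, not from the repo):

```shell
# Create a lightweight /gh-pr command that wraps the GitHub CLI,
# so the GitHub MCP never needs to be loaded.
CMD_DIR="${CLAUDE_CMD_DIR:-$HOME/.claude/commands}"
mkdir -p "$CMD_DIR"
cat > "$CMD_DIR/gh-pr.md" <<'EOF'
---
description: Open a PR with my preferred defaults
---
Run `gh pr create --fill --assignee @me`, then print the PR URL.
If the branch has no upstream, push it first with `git push -u origin HEAD`.
EOF
```

The command file is plain markdown, so it costs context only when invoked, unlike an always-loaded MCP toolset.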
With lazy loading, the context-window problem is mostly solved. But token usage and cost are not solved the same way: the CLI + skills approach is still a token optimization.
***
## The Important Stuff
### Context & Memory Management
The best way to share memory across sessions is a skill or command that summarizes and checkpoints progress, saving to a `.tmp` file in the `.claude` folder and appending to it until the session ends. The next day, it can use that file as context and pick up where it left off. Create a new file per session so you don't pollute new work with stale context.
![Session Storage File Tree](../../assets/images/longform/03-session-storage.png)
*Session storage example -> https://github.com/affaan-m/everything-claude-code/tree/main/examples/sessions*
Claude creates a file summarizing the current state. Review it, ask for edits if needed, then start fresh. For a new conversation, just provide the file path. This is especially useful when you hit the context limit and need to continue complex work. These files should capture:
* What worked (verifiable with evidence)
* What was tried and didn't work
* What hasn't been tried yet, and what remains to be done
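A minimal version of that checkpoint file can be scaffolded like this (the path and headings are a suggestion, not a repo convention):

```shell
# One session file per day; Claude appends to it during the session.
SESS_DIR="${CLAUDE_SESSION_DIR:-$HOME/.claude/sessions}"
mkdir -p "$SESS_DIR"
FILE="$SESS_DIR/$(date +%Y-%m-%d).tmp.md"
if [ ! -f "$FILE" ]; then
cat > "$FILE" <<'EOF'
# Session notes
## What worked (with evidence)
## What was tried and failed
## Not yet tried / remaining
EOF
fi
echo "Session file: $FILE"
```

Starting the next session with just this file path restores the relevant state without replaying the old conversation.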
**Clear context strategically:**
Once you've made a plan and cleared context (the default option in Claude Code's plan mode), you can work from the plan. This is useful once you've accumulated a lot of exploratory context that's no longer relevant to execution. For strategic compaction, disable auto-compact. Compact manually at logical intervals, or create a skill that does it for you.
**Advanced: dynamic system prompt injection**
One pattern I've learned: instead of putting everything in CLAUDE.md (user scope) or `.claude/rules/` (project scope), where it loads every session, inject context dynamically with a CLI flag.
```bash
claude --system-prompt "$(cat memory.md)"
```
This gives you finer control over which context loads when. System prompt content carries more authority than user messages, which in turn carry more authority than tool results.
**Practical setup:**
```bash
# Daily development
alias claude-dev='claude --system-prompt "$(cat ~/.claude/contexts/dev.md)"'
# PR review mode
alias claude-review='claude --system-prompt "$(cat ~/.claude/contexts/review.md)"'
# Research/exploration mode
alias claude-research='claude --system-prompt "$(cat ~/.claude/contexts/research.md)"'
```
**Advanced: memory persistence hooks**
There are hooks most people don't know about that help with memory management:
* **PreCompact hook**: save important state to a file before context compaction happens
* **Stop hook (session end)**: persist learnings to a file when the session ends
* **SessionStart hook**: automatically load previous context when a new session starts
I've built these; they live in the repo at `github.com/affaan-m/everything-claude-code/tree/main/hooks/memory-persistence`
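As a sketch of the third idea (the session-file path is illustrative, and the exact hook schema may differ from the repo's implementation), a SessionStart hook that replays the previous session summary could look like:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cat ~/.claude/sessions/latest.tmp.md 2>/dev/null"
          }
        ]
      }
    ]
  }
}
```

The hook's stdout is surfaced into the new session, so yesterday's state loads without you pasting anything.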
***
### Continuous Learning / Memory
If you find yourself repeating a prompt, and Claude hits the same problem or gives an answer you've heard before — those patterns must be appended to a skill.
**Problem:** wasted tokens, wasted context, wasted time.
**Solution:** when Claude Code discovers something non-trivial — a debugging technique, a workaround, some project-specific pattern — it saves that knowledge as a new skill. The next time a similar problem comes up, the skill loads automatically.
I built a continuous-learning skill that does this: `github.com/affaan-m/everything-claude-code/tree/main/skills/continuous-learning`
**Why a Stop hook (and not UserPromptSubmit)?**
The key design decision is using a **Stop hook** rather than UserPromptSubmit. UserPromptSubmit runs on every message, adding latency to every prompt. Stop runs once at session end — lightweight, and it doesn't slow you down mid-session.
***
### Token Optimization
**Main strategy: subagent architecture**
Optimize your tooling and subagent architecture to delegate each task to the cheapest model that is good enough for it.
**Model selection quick reference:**
![Model Selection Table](../../assets/images/longform/04-model-selection.png)
*A hypothetical subagent setup for common tasks, and the reasoning behind each choice*
| Task type | Model | Why |
| ------------------------- | ------ | ------------------------------------------ |
| Exploration/search | Haiku | Fast, cheap, good enough for finding files |
| Simple edits | Haiku | Single-file changes with clear instructions |
| Multi-file implementation | Sonnet | Best balance for coding |
| Complex architecture | Opus | Deep reasoning required |
| PR reviews | Sonnet | Understands context, catches nuance |
| Security analysis | Opus | Can't afford to miss vulnerabilities |
| Writing documentation | Haiku | Simple structure |
| Debugging complex bugs | Opus | Needs the whole system in its head |
Default to Sonnet for 90% of coding tasks. Escalate to Opus when the first attempt fails, the task spans 5+ files, or it involves architectural decisions or security-critical code.
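Applied to a subagent definition, the table above might translate into frontmatter like this (a sketch: field names follow the subagent examples elsewhere in these guides, and the short model alias is an assumption):

```markdown
---
name: explorer
description: Fast codebase exploration and file search
model: haiku
tools: Read, Grep, Glob
---
Search the codebase for the requested symbol or behavior and return
a short summary of the relevant files. Do not edit anything.
```

Pinning the cheap model in the agent file means the orchestrator can delegate searches without ever spending Opus or Sonnet tokens on them.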
**Pricing reference:**
![Claude Model Pricing](../../assets/images/longform/05-pricing-table.png)
*Source: https://platform.claude.com/docs/en/about-claude/pricing*
**Tool-specific optimization:**
Replace grep with mgrep — roughly 50% fewer tokens on average than classic grep or ripgrep:
![mgrep Benchmark](../../assets/images/longform/06-mgrep-benchmark.png)
*On our 50-task benchmark, mgrep + Claude Code used ~2x fewer tokens than grep-based workflows, with similar or better judgment quality. Source: https://github.com/mixedbread-ai/mgrep*
**The modular-codebase benefit:**
A more modular codebase, with main files in the hundreds of lines rather than thousands, keeps token costs down and makes it more likely a task lands correctly on the first attempt.
***
### Verification Loops & Evals
**Benchmarking workflow:**
Compare asking for the same thing with and without a skill, and inspect the difference in output:
Fork the conversation, initialize a fresh worktree in one of the forks without the skill, then pull the diff at the end and see what got recorded.
**Eval pattern types:**
* **Checkpoint-based evals**: set explicit checkpoints, verify against defined criteria, fix before moving on
* **Continuous evals**: run every N minutes or after major changes; full test suite + linting
**Key metrics:**
```
pass@k: At least ONE of k attempts succeeds
k=1: 70% k=3: 91% k=5: 97%
pass^k: ALL k attempts must succeed
k=1: 70% k=3: 34% k=5: 17%
```
Use **pass@k** when you just need it to work once. Use **pass^k** when consistency is critical.
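Under the simplifying assumption that each attempt succeeds independently with probability \(p\), the two metrics have closed forms (the pass^k figures above match \(p = 0.7\) exactly; the pass@k figures appear to be empirical rather than this formula):

```latex
\text{pass@}k = 1 - (1 - p)^k, \qquad \text{pass}^k = p^k
```

For example, with \(p = 0.7\): \(\text{pass}^3 = 0.7^3 \approx 34\%\) and \(\text{pass}^5 = 0.7^5 \approx 17\%\), matching the table, which is why consistency-critical workflows demand a much more reliable single attempt.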
***
## Parallelization
When forking conversations in a multi-terminal Claude setup, make sure the scope of the fork and of the original conversation are well defined. For code changes, aim for minimal overlap.
**My preferred pattern:**
Main chat for code changes; forks for asking questions about the codebase and its current state, or for researching external services.
**On arbitrary terminal counts:**
![Boris on Parallel Terminals](../../assets/images/longform/07-boris-parallel.png)
*Boris (Anthropic) on running multiple Claude instances*
Boris has advice on parallelization. He has suggested running 5 Claude instances locally and 5 upstream. I advise against setting an arbitrary terminal count; add terminals out of genuine necessity.
Your goal should be: **how much work can you get done with the minimum viable degree of parallelization.**
**Git worktrees for parallel instances**
```bash
# Create worktrees for parallel work
git worktree add ../project-feature-a feature-a
git worktree add ../project-feature-b feature-b
git worktree add ../project-refactor refactor-branch
# Each worktree gets its own Claude instance
cd ../project-feature-a && claude
```
**If** you're going to scale up instance counts **and** you have multiple Claude instances working on overlapping code, you must use git worktrees, with a very explicit plan per instance. Name all your chats with `/rename <name here>`.
![Two Terminal Setup](../../assets/images/longform/08-two-terminals.png)
*Initial setup: left terminal for coding, right terminal for questions - using /rename and /fork*
**The cascade approach:**
When running multiple Claude Code instances, organize with a "cascade" pattern:
* Open new tasks in new tabs to the right
* Sweep left to right, oldest to newest
* Focus on at most 3-4 tasks at a time
***
## Groundwork
**The two-instance kickoff pattern:**
For my own workflow management, I like to start from an empty repo with 2 Claude instances open.
**Instance 1 (scaffolding agent):**
* Does scaffolding and groundwork
* Creates the project structure
* Sets up the config (CLAUDE.md, rules, agents)
**Instance 2 (deep research agent):**
* Connects to all your services, does web search
* Creates a detailed PRD
* Creates architecture Mermaid diagrams
* Compiles references with actual documentation snippets
**The llms.txt pattern:**
Where available, you can find an `llms.txt` on many documentation sites by going to `/llms.txt` once you're on their docs page. That gives you a clean, LLM-optimized version of the docs.
**Philosophy: build reusable patterns**
From @omarsar0: "Early on, I spent time building reusable workflows/patterns. Tedious to build, but as models and agent frameworks improve this compounds incredibly."
**Worth investing in:**
* Subagents
* Skills
* Commands
* Planning patterns
* MCP tools
* Context engineering patterns
## Agent & Subagent Best Practices
**The subagent context problem:**
Subagents exist to save context by returning summaries instead of dumping everything. But the orchestrator has semantic context the subagent lacks. The subagent only knows the literal query, not the **purpose** behind the request.
**Iterative retrieval pattern:**
1. The orchestrator evaluates each subagent's return
2. It asks follow-up questions before accepting
3. The subagent goes back to sources, fetches answers, returns
4. Loop until sufficient (3 cycles max)
**Key: pass goal context, not just the query.**
**Orchestrator with sequential phases:**
```markdown
Phase 1: Research (explore agent) → research-summary.md
Phase 2: Plan (planner agent) → plan.md
Phase 3: Implement (tdd-guide agent) → code changes
Phase 4: Review (code-reviewer agent) → review-comments.md
Phase 5: Validate (build-error-resolver if needed) → done, or loop back
```
**Key rules:**
1. Each agent gets one clear input and produces one clear output
2. Outputs become inputs to the next phase
3. Never skip phases
4. Use `/clear` between agents
5. Store intermediate outputs in files
***
## Fun Stuff / Non-Critical, Just-for-Fun Tips
### Custom status line
You can set it up with `/statusline` - Claude will say you don't have a status line but can set one up for you, and ask what you want in it.
See also: https://github.com/sirmalloc/ccstatusline
### Voice transcription
Talk to Claude Code with your voice. Faster than typing for a lot of people.
* superwhisper, MacWhisper on Mac
* Even with transcription errors, Claude understands the intent
### Terminal aliases
```bash
alias c='claude'
alias gb='github'
alias co='code'
alias q='cd ~/Desktop/projects'
```
***
## Milestones
![25k+ GitHub Stars](../../assets/images/longform/09-25k-stars.png)
*25,000+ GitHub stars in one week*
***
## Resources
**Agent orchestration:**
* https://github.com/ruvnet/claude-flow - enterprise orchestration platform with 54+ specialized agents
**Self-improving memory:**
* https://github.com/affaan-m/everything-claude-code/tree/main/skills/continuous-learning
* rlancemartin.github.io/2025/12/01/claude\_diary/ - session reflection pattern
**System prompt references:**
* https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools - collection of system prompts (110k stars)
**Official:**
* Anthropic Academy: anthropic.skilljar.com
***
## References
* [Anthropic: Demystifying evals for AI agents](https://www.anthropic.com/engineering/demystifying-evals-for-ai-agents)
* [YK: 32 Claude Code tips](https://agenticcoding.substack.com/p/32-claude-code-tips-from-basics-to)
* [RLanceMartin: session reflection pattern](https://rlancemartin.github.io/2025/12/01/claude_diary/)
* @PerceptualPeak: subagent context negotiation
* @menhguin: agent abstraction tiers
* @omarsar0: compounding-effects philosophy
***
*Everything covered in both guides can be found on GitHub at [everything-claude-code](https://github.com/affaan-m/everything-claude-code)*


@@ -0,0 +1,431 @@
# The Shortform Guide to Claude Code
![Header: Anthropic Hackathon Winner - Tips & Tricks for Claude Code](../../assets/images/shortform/00-header.png)
***
**I've been a power user of Claude Code since its experimental launch in February, and won the Anthropic x Forum Ventures hackathon with [zenith.chat](https://zenith.chat) alongside [@DRodriguezFX](https://x.com/DRodriguezFX) — built entirely with Claude Code.**
After 10 months of daily use, here's my complete setup: skills, hooks, subagents, MCP, plugins, and what actually works.
***
## Skills and Commands
Skills are like rules scoped to a specific domain and process. They're shorthand for prompts when you need to execute a particular workflow.
After a long Opus 4.5 coding session, want to clean up dead code and stray .md files? Run `/refactor-clean`. Need tests? `/tdd`, `/e2e`, `/test-coverage`. Skills can also include codemaps — a way for Claude to navigate your codebase quickly without burning context on exploration.
![Terminal showing chained commands](../../assets/images/shortform/02-chaining-commands.jpeg)
*Chaining commands together*
Commands are skills executed via slash commands. They overlap, but are stored differently:
* **Skills**: `~/.claude/skills/` - broader workflow definitions
* **Commands**: `~/.claude/commands/` - quick executable prompts
```bash
# Example skill structure
~/.claude/skills/
pmx-guidelines.md # Project-specific patterns
coding-standards.md # Language best practices
tdd-workflow/ # Multi-file skill with README.md
security-review/ # Checklist-based skill
```
***
## Hooks
Hooks are trigger-based automations that fire on specific events. Unlike skills, they're scoped to tool calls and lifecycle events.
**Hook types:**
1. **PreToolUse** - before a tool runs (validation, reminders)
2. **PostToolUse** - after a tool completes (formatting, feedback loops)
3. **UserPromptSubmit** - when you send a message
4. **Stop** - when Claude finishes responding
5. **PreCompact** - before context compaction
6. **Notification** - permission requests
**Example: a tmux reminder before long-running commands**
```json
{
"PreToolUse": [
{
"matcher": "tool == \"Bash\" && tool_input.command matches \"(npm|pnpm|yarn|cargo|pytest)\"",
"hooks": [
{
"type": "command",
"command": "if [ -z \"$TMUX\" ]; then echo '[Hook] Consider tmux for session persistence' >&2; fi"
}
]
}
]
}
```
![PostToolUse hook feedback](../../assets/images/shortform/03-posttooluse-hook.png)
*Example of the feedback you get when a PostToolUse hook runs in Claude Code*
**Pro tip:** use the `hookify` plugin to create hooks conversationally instead of writing JSON by hand. Run `/hookify` and describe what you want.
***
## Subagents
Subagents are limited-scope processes your orchestrator (the main Claude) can delegate tasks to. They can run in the background or foreground, freeing up context for the main agent.
Subagents pair well with skills — a subagent that can execute a subset of your skills can be handed a task and use those skills autonomously. They can also be sandboxed with specific tool permissions.
```bash
# Example subagent structure
~/.claude/agents/
planner.md # Feature implementation planning
architect.md # System design decisions
tdd-guide.md # Test-driven development
code-reviewer.md # Quality/security review
security-reviewer.md # Vulnerability analysis
build-error-resolver.md
e2e-runner.md
refactor-cleaner.md
```
Configure each subagent's allowed tools, MCPs, and permissions for proper scoping.
***
## Rules and Memory
Your `.rules` folder holds `.md` files with best practices Claude should always follow. Two approaches:
1. **Single CLAUDE.md** - everything in one file (user or project level)
2. **Rules folder** - modular `.md` files grouped by concern
```bash
~/.claude/rules/
security.md # No hardcoded secrets, validate inputs
coding-style.md # Immutability, file organization
testing.md # TDD workflow, 80% coverage
git-workflow.md # Commit format, PR process
agents.md # When to delegate to subagents
performance.md # Model selection, context management
```
**Example rules:**
* No emojis in the codebase
* Avoid purple hues in the frontend
* Always test code before deploying
* Prefer modular code over monolithic files
* Never commit console.log
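A minimal rules file following this pattern might look like the following (contents illustrative, not from the repo):

```markdown
# coding-style.md
- Prefer small, single-purpose modules; split files that grow past ~300 lines.
- No emojis in code, comments, or commit messages.
- Never commit console.log; use the project logger instead.
- Favor immutability; avoid mutating shared state.
- Run the test suite before proposing a commit.
```

Keeping each concern in its own short file makes the rules cheap to load and easy to audit.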
***
## MCP模型上下文协议
MCP 将 Claude 直接连接到外部服务。它不是 API 的替代品——而是围绕 API 的提示驱动包装器,允许在导航信息时具有更大的灵活性。
**示例:** Supabase MCP 允许 Claude 提取特定数据,直接在上游运行 SQL 而无需复制粘贴。数据库、部署平台等也是如此。
![Supabase MCP 列出表格](../../assets/images/shortform/04-supabase-mcp.jpeg)
*Supabase MCP 列出公共模式内表格的示例*
**Claude 中的 Chrome** 是一个内置的插件 MCP允许 Claude 自主控制你的浏览器——点击查看事物如何工作。
**关键:上下文窗口管理**
对 MCP 要挑剔。我将所有 MCP 保存在用户配置中,但**禁用所有未使用的**。导航到 `/plugins` 并向下滚动,或运行 `/mcp`
![/plugins 界面](../../assets/images/shortform/05-plugins-interface.jpeg)
*使用 /plugins 导航到 MCP 以查看当前安装的插件及其状态*
在压缩之前,你的 200k 上下文窗口如果启用了太多工具,可能只有 70k。性能会显著下降。
**经验法则:** 在配置中保留 20-30 个 MCP但保持启用状态少于 10 个 / 活动工具少于 80 个。
```bash
# Check enabled MCPs
/mcp
# Disable unused ones in ~/.claude.json under projects.disabledMcpServers
```
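The disabled list lives per project in `~/.claude.json`; a sketch of the shape (the project path and server names are illustrative):

```json
{
  "projects": {
    "/Users/you/projects/myapp": {
      "disabledMcpServers": ["railway", "clickhouse", "AbletonMCP"]
    }
  }
}
```

Everything stays configured at the user level, so re-enabling a server for one project is a one-line change.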
***
## Plugins
Plugins package tooling for easy installation instead of tedious manual setup. A plugin can be a combination of skills and MCPs, or hooks/tools bundled together.
**Installing plugins:**
```bash
# Add a marketplace
claude plugin marketplace add https://github.com/mixedbread-ai/mgrep
# Open Claude, run /plugins, find new marketplace, install from there
```
![Marketplaces tab showing mgrep](../../assets/images/shortform/06-marketplaces-mgrep.jpeg)
*The marketplaces tab showing the newly installed Mixedbread-Grep marketplace*
**LSP plugins** are especially useful if you often run Claude Code outside an editor. The Language Server Protocol gives Claude live type checking, go-to-definition, and smart completions without an IDE open.
```bash
# Enabled plugins example
typescript-lsp@claude-plugins-official # TypeScript intelligence
pyright-lsp@claude-plugins-official # Python type checking
hookify@claude-plugins-official # Create hooks conversationally
mgrep@Mixedbread-Grep # Better search than ripgrep
```
Same caveat as with MCPs — watch your context window.
***
## Tips and Tricks
### Keyboard shortcuts
* `Ctrl+U` - delete the whole line (faster than mashing backspace)
* `!` - quick bash command prefix
* `@` - search files
* `/` - start a slash command
* `Shift+Enter` - multi-line input
* `Tab` - toggle the thinking display
* `Esc Esc` - interrupt Claude / restore code
### Parallel workflows
* **Forking** (`/fork`) - fork the conversation for non-overlapping parallel tasks instead of stacking messages in a queue
* **Git worktrees** - for overlapping parallel Claudes without conflicts. Each worktree is an independent checkout
```bash
git worktree add ../feature-branch feature-branch
# Now run separate Claude instances in each worktree
```
### tmux for long-running commands
Stream and watch the logs/bash processes that Claude runs:
https://github.com/user-attachments/assets/shortform/07-tmux-video.mp4
```bash
tmux new -s dev
# Claude runs commands here, you can detach and reattach
tmux attach -t dev
```
### mgrep > grep
`mgrep` is a significant upgrade over ripgrep/grep. Install it via the plugin marketplace, then use the `/mgrep` skill. Works for local and web search.
```bash
mgrep "function handleSubmit" # Local search
mgrep --web "Next.js 15 app router changes" # Web search
```
### Other useful commands
* `/rewind` - go back to an earlier state
* `/statusline` - customize with branch, context percentage, todos
* `/checkpoints` - file-level undo points
* `/compact` - manually trigger context compaction
### GitHub Actions CI/CD
Set up code review on your PRs with GitHub Actions. Once configured, Claude can review PRs automatically.
![Claude bot approving a PR](../../assets/images/shortform/08-github-pr-review.jpeg)
*Claude approving a bug-fix PR*
### Sandboxing
Use sandbox mode for risky operations — Claude runs in a restricted environment without touching your actual system.
***
## On Editors
Your editor choice significantly shapes the Claude Code workflow. Claude Code works in any terminal, but pairing it with a capable editor unlocks live file tracking, fast navigation, and integrated command execution.
### Zed (my preference)
I use [Zed](https://zed.dev) — written in Rust, so it's genuinely fast. Opens instantly, handles large codebases with ease, and barely touches system resources.
**Why Zed + Claude Code is a great combo:**
* **Speed** - Rust-based performance means no lag while Claude edits files rapidly. Your editor keeps up
* **Agent panel integration** - Zed's Claude integration lets you follow file changes live as Claude edits. Jump to files Claude references without leaving the editor
* **CMD+Shift+R command palette** - quick access to all custom slash commands, debuggers, and build scripts in a searchable UI
* **Minimal resource usage** - doesn't compete with Claude for RAM/CPU during heavy operations. Matters when running Opus
* **Vim mode** - full vim keybindings, if that's your thing
![Zed editor with custom commands](../../assets/images/shortform/09-zed-editor.jpeg)
*Zed showing the custom commands dropdown via CMD+Shift+R. The bullseye icon at bottom right indicates follow mode.*
**Editor-agnostic tips:**
1. **Split your screen** - terminal with Claude Code on one side, editor on the other
2. **Ctrl + G** - in Zed, quickly open the file Claude is currently working on
3. **Autosave** - enable it so Claude's file reads are always up to date
4. **Git integration** - use your editor's git features to review Claude's changes before committing
5. **File watchers** - most editors auto-reload changed files; verify that's enabled
### VSCode / Cursor
Also a viable option that works well with Claude Code. You can use the terminal form and auto-sync with your editor via `/ide` for LSP features (now somewhat redundant with the plugins). Or use the extension, which is more editor-integrated with a matching UI.
![VS Code Claude Code extension](../../assets/images/shortform/10-vscode-extension.jpeg)
*The VS Code extension gives Claude Code a native GUI directly inside your IDE.*
***
## My Setup
### Plugins
**Installed:** (I usually keep only 4-5 of these enabled at a time)
```markdown
ralph-wiggum@claude-code-plugins # Loop automation
frontend-design@claude-code-plugins # UI/UX patterns
commit-commands@claude-code-plugins # Git workflows
security-guidance@claude-code-plugins # Security checks
pr-review-toolkit@claude-code-plugins # PR automation
typescript-lsp@claude-plugins-official # TS intelligence
hookify@claude-plugins-official # Hook creation
code-simplifier@claude-plugins-official
feature-dev@claude-code-plugins
explanatory-output-style@claude-code-plugins
code-review@claude-code-plugins
context7@claude-plugins-official # Live docs
pyright-lsp@claude-plugins-official # Python types
mgrep@Mixedbread-Grep # Better search
```
### MCP Servers
**Configured (user level):**
```json
{
"github": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"] },
"firecrawl": { "command": "npx", "args": ["-y", "firecrawl-mcp"] },
"supabase": {
"command": "npx",
"args": ["-y", "@supabase/mcp-server-supabase@latest", "--project-ref=YOUR_REF"]
},
"memory": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-memory"] },
"sequential-thinking": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
},
"vercel": { "type": "http", "url": "https://mcp.vercel.com" },
"railway": { "command": "npx", "args": ["-y", "@railway/mcp-server"] },
"cloudflare-docs": { "type": "http", "url": "https://docs.mcp.cloudflare.com/mcp" },
"cloudflare-workers-bindings": {
"type": "http",
"url": "https://bindings.mcp.cloudflare.com/mcp"
},
"clickhouse": { "type": "http", "url": "https://mcp.clickhouse.cloud/mcp" },
"AbletonMCP": { "command": "uvx", "args": ["ableton-mcp"] },
"magic": { "command": "npx", "args": ["-y", "@magicuidesign/mcp@latest"] }
}
```
This is the key — I have 14 MCPs configured but only ~5-6 enabled per project. It keeps the context window healthy.
### Key Hooks
```json
{
"PreToolUse": [
{ "matcher": "npm|pnpm|yarn|cargo|pytest", "hooks": ["tmux reminder"] },
{ "matcher": "Write && .md file", "hooks": ["block unless README/CLAUDE"] },
{ "matcher": "git push", "hooks": ["open editor for review"] }
],
"PostToolUse": [
{ "matcher": "Edit && .ts/.tsx/.js/.jsx", "hooks": ["prettier --write"] },
{ "matcher": "Edit && .ts/.tsx", "hooks": ["tsc --noEmit"] },
{ "matcher": "Edit", "hooks": ["grep console.log warning"] }
],
"Stop": [
{ "matcher": "*", "hooks": ["check modified files for console.log"] }
]
}
```
### Custom Status Line
Shows user, directory, git branch with a dirty flag, remaining context percentage, model, time, and todo count:
![Custom status line](../../assets/images/shortform/11-statusline.jpeg)
*Example of my status line at my Mac root*
```
affoon:~ ctx:65% Opus 4.5 19:52
▌▌ plan mode on (shift+tab to cycle)
```
### Rules Structure
```
~/.claude/rules/
security.md # Mandatory security checks
coding-style.md # Immutability, file size limits
testing.md # TDD, 80% coverage
git-workflow.md # Conventional commits
agents.md # Subagent delegation rules
patterns.md # API response formats
performance.md # Model selection (Haiku vs Sonnet vs Opus)
hooks.md # Hook documentation
```
### Subagents
```
~/.claude/agents/
planner.md # Break down features
architect.md # System design
tdd-guide.md # Write tests first
code-reviewer.md # Quality review
security-reviewer.md # Vulnerability scan
build-error-resolver.md
e2e-runner.md # Playwright tests
refactor-cleaner.md # Dead code removal
doc-updater.md # Keep docs synced
```
***
## 关键要点
1. **不要过度复杂化** - 将配置视为微调,而非架构
2. **上下文窗口很宝贵** - 禁用未使用的 MCP 和插件
3. **并行执行** - 分叉对话,使用 git worktrees
4. **自动化重复性工作** - 用于格式化、代码检查、提醒的钩子
5. **界定子代理范围** - 有限的工具 = 专注的执行
***
## References
* [Plugins reference](https://code.claude.com/docs/en/plugins-reference)
* [Hooks documentation](https://code.claude.com/docs/en/hooks)
* [Checkpointing](https://code.claude.com/docs/en/checkpointing)
* [Interactive mode](https://code.claude.com/docs/en/interactive-mode)
* [Memory](https://code.claude.com/docs/en/memory)
* [Subagents](https://code.claude.com/docs/en/sub-agents)
* [MCP overview](https://code.claude.com/docs/en/mcp-overview)
**Note:** this is a subset of the details. For advanced patterns, see the [longform guide](./the-longform-guide.md).
***
*Won the Anthropic x Forum Ventures hackathon in NYC with [@DRodriguezFX](https://x.com/DRodriguezFX), building [zenith.chat](https://zenith.chat)*


@@ -13,7 +13,7 @@
"C": "Cyan - username, todos",
"T": "Gray - model name"
},
"output_example": "affoon:~/projects/myapp main* ctx:73% sonnet-4.5 14:30 todos:3",
"output_example": "affoon:~/projects/myapp main* ctx:73% sonnet-4.6 14:30 todos:3",
"usage": "Copy the statusLine object to your ~/.claude/settings.json"
}
}


@@ -7,7 +7,7 @@
"hooks": [
{
"type": "command",
"command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const cmd=i.tool_input?.command||'';if(/(npm run dev\\b|pnpm( run)? dev\\b|yarn dev\\b|bun run dev\\b)/.test(cmd)){console.error('[Hook] BLOCKED: Dev server must run in tmux for log access');console.error('[Hook] Use: tmux new-session -d -s dev \\\"npm run dev\\\"');console.error('[Hook] Then: tmux attach -t dev');process.exit(2)}}catch{}console.log(d)})\""
"command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const cmd=i.tool_input?.command||'';if(process.platform!=='win32'&&/(npm run dev\\b|pnpm( run)? dev\\b|yarn dev\\b|bun run dev\\b)/.test(cmd)){console.error('[Hook] BLOCKED: Dev server must run in tmux for log access');console.error('[Hook] Use: tmux new-session -d -s dev \\\"npm run dev\\\"');console.error('[Hook] Then: tmux attach -t dev');process.exit(2)}}catch{}console.log(d)})\""
}
],
"description": "Block dev servers outside tmux - ensures you can access logs"
@@ -17,7 +17,7 @@
"hooks": [
{
"type": "command",
"command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const cmd=i.tool_input?.command||'';if(!process.env.TMUX&&/(npm (install|test)|pnpm (install|test)|yarn (install|test)?|bun (install|test)|cargo build|make\\b|docker\\b|pytest|vitest|playwright)/.test(cmd)){console.error('[Hook] Consider running in tmux for session persistence');console.error('[Hook] tmux new -s dev | tmux attach -t dev')}}catch{}console.log(d)})\""
"command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const cmd=i.tool_input?.command||'';if(process.platform!=='win32'&&!process.env.TMUX&&/(npm (install|test)|pnpm (install|test)|yarn (install|test)?|bun (install|test)|cargo build|make\\b|docker\\b|pytest|vitest|playwright)/.test(cmd)){console.error('[Hook] Consider running in tmux for session persistence');console.error('[Hook] tmux new -s dev | tmux attach -t dev')}}catch{}console.log(d)})\""
}
],
"description": "Reminder to use tmux for long-running commands"
@@ -37,7 +37,7 @@
"hooks": [
{
"type": "command",
"command": "node -e \"const fs=require('fs');let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const p=i.tool_input?.file_path||'';if(/\\.(md|txt)$/.test(p)&&!/(README|CLAUDE|AGENTS|CONTRIBUTING)\\.md$/.test(p)){console.error('[Hook] BLOCKED: Unnecessary documentation file creation');console.error('[Hook] File: '+p);console.error('[Hook] Use README.md for documentation instead');process.exit(2)}}catch{}console.log(d)})\""
"command": "node -e \"const fs=require('fs');let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const p=i.tool_input?.file_path||'';if(/\\.(md|txt)$/.test(p)&&!/(README|CLAUDE|AGENTS|CONTRIBUTING)\\.md$/.test(p)&&!/\\.claude\\/plans\\//.test(p)){console.error('[Hook] BLOCKED: Unnecessary documentation file creation');console.error('[Hook] File: '+p);console.error('[Hook] Use README.md for documentation instead');process.exit(2)}}catch{}console.log(d)})\""
}
],
"description": "Block creation of random .md files - keeps docs consolidated"

llms.txt (642 lines)

@@ -1,642 +0,0 @@
# OpenCode Documentation for LLMs
> OpenCode is an open-source AI coding agent available as a terminal interface, desktop application, or IDE extension. It helps developers write code, add features, and understand codebases through conversational interactions.
## Installation
Multiple installation methods are available:
```bash
# curl script (recommended)
curl -fsSL https://opencode.ai/install | bash
# Node.js package managers
npm install -g opencode
bun install -g opencode
pnpm add -g opencode
yarn global add opencode
# Homebrew (macOS/Linux)
brew install anomalyco/tap/opencode
# Arch Linux
paru -S opencode
# Windows
choco install opencode # Chocolatey
scoop install opencode # Scoop
# Or use Docker/WSL (recommended for Windows)
```
## Configuration
Configuration file: `opencode.json` or `opencode.jsonc` (with comments)
Schema: `https://opencode.ai/config.json`
### Core Settings
```json
{
"$schema": "https://opencode.ai/config.json",
"model": "anthropic/claude-sonnet-4-5",
"small_model": "anthropic/claude-haiku-4-5",
"default_agent": "build",
"instructions": [
"CONTRIBUTING.md",
"docs/guidelines.md"
],
"plugin": [
"opencode-helicone-session",
"./.opencode/plugins"
],
"agent": { /* agent definitions */ },
"command": { /* command definitions */ },
"permission": {
"edit": "ask",
"bash": "ask",
"mcp_*": "ask"
},
"tools": {
"write": true,
"bash": true
}
}
```
### Environment Variables
Use `{env:VAR_NAME}` for environment variables and `{file:path}` for file contents in configuration values.
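For example, a config that pulls a prompt body from a file might look like this (the agent name and paths are illustrative; `{env:...}` is used the same way wherever a secret or host-specific value is needed):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "instructions": ["{file:docs/guidelines.md}"],
  "agent": {
    "docs-helper": {
      "description": "Answers questions from the project docs",
      "mode": "subagent",
      "prompt": "{file:.opencode/prompts/docs-helper.txt}"
    }
  }
}
```

Keeping prompts in files rather than inline JSON makes them easier to diff and reuse across agents.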
## Agents
OpenCode supports two agent types:
### Primary Agents
Main assistants you interact with directly. Switch between them using Tab or configured keybinds.
### Subagents
Specialized assistants that primary agents invoke automatically or through `@` mentions (e.g., `@general help me search`).
### Built-in Agents
| Agent | Type | Description |
|-------|------|-------------|
| build | primary | Default agent with full tool access for development work |
| plan | primary | Restricted agent for analysis; file edits and bash set to "ask" |
| general | subagent | Full tool access for multi-step research tasks |
| explore | subagent | Read-only agent for rapid codebase exploration |
| compaction | system | Hidden agent for context compaction |
| title | system | Hidden agent for title generation |
| summary | system | Hidden agent for summarization |
### Agent Configuration
JSON format in `opencode.json`:
```json
{
"agent": {
"code-reviewer": {
"description": "Reviews code for best practices",
"mode": "subagent",
"model": "anthropic/claude-opus-4-5",
"prompt": "{file:.opencode/prompts/agents/code-reviewer.txt}",
"temperature": 0.3,
"tools": {
"write": false,
"edit": false,
"read": true,
"bash": true
},
"permission": {
"edit": "deny",
"bash": {
"*": "ask",
"git status": "allow"
}
},
"steps": 10
}
}
}
```
Markdown format in `~/.config/opencode/agents/` or `.opencode/agents/`:
```markdown
---
description: Expert code review specialist
mode: subagent
model: anthropic/claude-opus-4-5
temperature: 0.3
tools:
write: false
read: true
bash: true
permission:
edit: deny
steps: 10
---
You are an expert code reviewer. Review code for quality, security, and maintainability...
```
### Agent Configuration Options
| Option | Purpose | Values |
|--------|---------|--------|
| description | Required field explaining agent purpose | string |
| mode | Agent type | "primary", "subagent", or "all" |
| model | Override default model | "provider/model-id" |
| temperature | Control randomness | 0.0-1.0 (lower = focused) |
| tools | Enable/disable specific tools | object or wildcards |
| permission | Set tool permissions | "ask", "allow", or "deny" |
| steps | Limit agentic iterations | number |
| prompt | Reference custom prompt file | "{file:./path}" |
| top_p | Alternative randomness control | 0.0-1.0 |
## Commands
### Built-in Commands
- `/init` - Initialize project analysis (creates AGENTS.md)
- `/undo` - Undo last change
- `/redo` - Redo undone change
- `/share` - Generate shareable conversation link
- `/help` - Show help
- `/connect` - Configure API providers
### Custom Commands
**Markdown files** in `~/.config/opencode/commands/` (global) or `.opencode/commands/` (project):
```markdown
---
description: Create implementation plan
agent: planner
subtask: true
---
Create a detailed implementation plan for: $ARGUMENTS
Include:
- Requirements analysis
- Architecture review
- Step breakdown
- Testing strategy
```
**JSON configuration** in `opencode.json`:
```json
{
"command": {
"plan": {
"description": "Create implementation plan",
"template": "Create a detailed implementation plan for: $ARGUMENTS",
"agent": "planner",
"subtask": true
},
"test": {
"template": "Run tests with coverage for: $ARGUMENTS\n\nOutput:\n!`npm test`",
"description": "Run tests with coverage",
"agent": "build"
}
}
}
```
### Template Variables
| Variable | Description |
|----------|-------------|
| `$ARGUMENTS` | All command arguments |
| `$1`, `$2`, `$3` | Positional arguments |
| ``!`command` `` | Include shell command output |
| `@filepath` | Include file contents |
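The `$ARGUMENTS` and positional substitutions behave like ordinary string templating. A minimal sketch of the idea (hypothetical function — the real expansion, including `` !`command` `` output and `@filepath` inclusion, is performed by OpenCode itself):

```typescript
// Sketch of command template expansion for $ARGUMENTS and $1/$2/$3.
// expandTemplate is a hypothetical name, not an OpenCode API.
function expandTemplate(template: string, args: string[]): string {
  return template
    // $ARGUMENTS -> all arguments joined with spaces
    .replace(/\$ARGUMENTS/g, args.join(" "))
    // $1, $2, ... -> positional arguments (1-indexed, empty if missing)
    .replace(/\$(\d+)/g, (_, n) => args[Number(n) - 1] ?? "");
}
```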
### Command Options
| Option | Purpose | Required |
|--------|---------|----------|
| template | Prompt text sent to LLM | Yes |
| description | UI display text | No |
| agent | Target agent for execution | No |
| model | Override default LLM | No |
| subtask | Force subagent invocation | No |
## Tools
### Built-in Tools
| Tool | Purpose | Permission Key |
|------|---------|---------------|
| bash | Execute shell commands | "bash" |
| edit | Modify existing files using exact string replacements | "edit" |
| write | Create new files or overwrite existing ones | "edit" |
| read | Read file contents from codebase | "read" |
| grep | Search file contents using regular expressions | "grep" |
| glob | Find files by pattern matching | "glob" |
| list | List files and directories | "list" |
| lsp | Access code intelligence (experimental) | "lsp" |
| patch | Apply patches to files | "edit" |
| skill | Load skill files (SKILL.md) | "skill" |
| todowrite | Manage todo lists during sessions | "todowrite" |
| todoread | Read existing todo lists | "todoread" |
| webfetch | Fetch and read web pages | "webfetch" |
| question | Ask user questions during execution | "question" |
### Tool Permissions
```json
{
"permission": {
"edit": "ask",
"bash": "allow",
"webfetch": "deny",
"mcp_*": "ask"
}
}
```
Permission levels:
- `"allow"` - Tool executes without restriction
- `"deny"` - Tool cannot be used
- `"ask"` - Requires user approval before execution
## Custom Tools
Location: `.opencode/tools/` (project) or `~/.config/opencode/tools/` (global)
### Tool Definition
```typescript
import { tool } from "@opencode-ai/plugin"
export default tool({
description: "Execute SQL query on the database",
args: {
query: tool.schema.string().describe("SQL query to execute"),
database: tool.schema.string().optional().describe("Target database")
},
async execute(args, context) {
// context.worktree - git repository root
// context.directory - current working directory
// context.sessionID - current session ID
// context.agent - active agent identifier
const result = await someDbQuery(args.query)
return { result }
}
})
```
### Multiple Tools Per File
```typescript
// math.ts - creates math_add and math_multiply tools
export const add = tool({
description: "Add two numbers",
args: {
a: tool.schema.number(),
b: tool.schema.number()
},
async execute({ a, b }) {
return { result: a + b }
}
})
export const multiply = tool({
description: "Multiply two numbers",
args: {
a: tool.schema.number(),
b: tool.schema.number()
},
async execute({ a, b }) {
return { result: a * b }
}
})
```
### Schema Types (Zod)
```typescript
tool.schema.string()
tool.schema.number()
tool.schema.boolean()
tool.schema.array(tool.schema.string())
tool.schema.object({ key: tool.schema.string() })
tool.schema.enum(["option1", "option2"])
tool.schema.string().optional()
tool.schema.string().describe("Description for LLM")
```
## Plugins
Plugins extend OpenCode with custom hooks, tools, and behaviors.
### Plugin Structure
```typescript
import { tool } from "@opencode-ai/plugin"
export const MyPlugin = async ({ project, client, $, directory, worktree }) => {
// project - Current project information
// client - OpenCode SDK client for AI interaction
// $ - Bun's shell API for command execution
// directory - Current working directory
// worktree - Git worktree path
return {
// Hook implementations
"file.edited": async (event) => {
// Handle file edit event
},
"tool.execute.before": async (input, output) => {
// Intercept before tool execution
},
"session.idle": async (event) => {
// Handle session idle
}
}
}
```
### Loading Plugins
1. **Local files**: Place in `.opencode/plugins/` (project) or `~/.config/opencode/plugins/` (global)
2. **npm packages**: Specify in `opencode.json`:
```json
{
"plugin": [
"opencode-helicone-session",
"@my-org/custom-plugin",
"./.opencode/plugins"
]
}
```
### Available Hook Events
**Command Events:**
- `command.executed` - After a command is executed
**File Events:**
- `file.edited` - After a file is edited
- `file.watcher.updated` - When file watcher detects changes
**Tool Events:**
- `tool.execute.before` - Before tool execution (can modify input)
- `tool.execute.after` - After tool execution (can modify output)
**Session Events:**
- `session.created` - When session starts
- `session.compacted` - After context compaction
- `session.deleted` - When session ends
- `session.idle` - When session becomes idle
- `session.updated` - When session is updated
- `session.status` - Session status changes
**Message Events:**
- `message.updated` - When message is updated
- `message.removed` - When message is removed
- `message.part.updated` - When message part is updated
**LSP Events:**
- `lsp.client.diagnostics` - LSP diagnostic updates
- `lsp.updated` - LSP state updates
**Shell Events:**
- `shell.env` - Modify shell environment variables
**TUI Events:**
- `tui.prompt.append` - Append to TUI prompt
- `tui.command.execute` - Execute TUI command
- `tui.toast.show` - Show toast notification
**Other Events:**
- `installation.updated` - Installation updates
- `permission.asked` - Permission request
- `server.connected` - Server connection
- `todo.updated` - Todo list updates
### Hook Event Mapping (Claude Code → OpenCode)
| Claude Code Hook | OpenCode Plugin Event |
|-----------------|----------------------|
| PreToolUse | `tool.execute.before` |
| PostToolUse | `tool.execute.after` |
| Stop | `session.idle` or `session.status` |
| SessionStart | `session.created` |
| SessionEnd | `session.deleted` |
| N/A | `file.edited`, `file.watcher.updated` |
| N/A | `message.*`, `permission.*`, `lsp.*` |
### Plugin Example: Auto-Format
```typescript
export const AutoFormatPlugin = async ({ $, directory }) => {
return {
"file.edited": async (event) => {
if (event.path.match(/\.(ts|tsx|js|jsx)$/)) {
await $`prettier --write ${event.path}`
}
}
}
}
```
### Plugin Example: TypeScript Check
```typescript
export const TypeCheckPlugin = async ({ $, client }) => {
return {
"tool.execute.after": async (input, output) => {
if (input.tool === "edit" && input.args.filePath?.match(/\.tsx?$/)) {
const result = await $`npx tsc --noEmit`.catch(e => e)
if (result.exitCode !== 0) {
client.app.log("warn", "TypeScript errors detected")
}
}
}
}
}
```
## Providers
OpenCode integrates 75+ LLM providers via AI SDK and Models.dev.
### Supported Providers
- OpenCode Zen (recommended for beginners)
- Anthropic (Claude)
- OpenAI (GPT)
- Google (Gemini)
- Amazon Bedrock
- Azure OpenAI
- GitHub Copilot
- Ollama (local)
- And 70+ more
### Provider Configuration
```json
{
"provider": {
"anthropic": {
"options": {
"baseURL": "https://api.anthropic.com/v1"
}
},
"custom-provider": {
"npm": "@ai-sdk/openai-compatible",
"name": "Display Name",
"options": {
"baseURL": "https://api.example.com/v1",
"apiKey": "{env:CUSTOM_API_KEY}"
},
"models": {
"model-id": { "name": "Model Name" }
}
}
}
}
```
### Model Naming Convention
Format: `provider/model-id`
Examples:
- `anthropic/claude-sonnet-4-5`
- `anthropic/claude-opus-4-5`
- `anthropic/claude-haiku-4-5`
- `openai/gpt-4o`
- `google/gemini-2.0-flash`
### API Key Setup
```bash
# Interactive setup
opencode
/connect
# Environment variables
export ANTHROPIC_API_KEY=sk-...
export OPENAI_API_KEY=sk-...
```
Keys stored in: `~/.local/share/opencode/auth.json`
## MCP (Model Context Protocol)
Configure MCP servers in `opencode.json`:
```json
{
"mcp": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "{env:GITHUB_TOKEN}"
}
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
}
}
}
```
MCP tool permissions use `mcp_*` wildcard:
```json
{
"permission": {
"mcp_*": "ask"
}
}
```
## Ecosystem Plugins
### Authentication & Provider Plugins
- Alternative model access (ChatGPT Plus, Gemini, Antigravity)
- Session tracking (Helicone headers)
- OAuth integrations
### Development Tools
- Sandbox isolation (Daytona integration)
- Type injection for TypeScript/Svelte
- DevContainer multi-branch support
- Git worktree management
### Enhancement Plugins
- Web search with citations
- Markdown table formatting
- Dynamic context token pruning
- Desktop notifications
- Persistent memory (Supermemory)
- Background process management
### Plugin Discovery
- opencode.cafe - Community plugin registry
- awesome-opencode - Curated list
- GitHub search for "opencode-plugin"
## Quick Reference
### Key Shortcuts
| Key | Action |
|-----|--------|
| Tab | Toggle between Plan and Build modes |
| @ | Reference files or mention agents |
| / | Execute commands |
### Common Commands
```bash
/init # Initialize project
/connect # Configure API providers
/share # Share conversation
/undo # Undo last change
/redo # Redo undone change
/help # Show help
```
### File Locations
| Purpose | Project | Global |
|---------|---------|--------|
| Configuration | `opencode.json` | `~/.config/opencode/config.json` |
| Agents | `.opencode/agents/` | `~/.config/opencode/agents/` |
| Commands | `.opencode/commands/` | `~/.config/opencode/commands/` |
| Plugins | `.opencode/plugins/` | `~/.config/opencode/plugins/` |
| Tools | `.opencode/tools/` | `~/.config/opencode/tools/` |
| Auth | - | `~/.local/share/opencode/auth.json` |
### Troubleshooting
```bash
# Verify credentials
opencode auth list
# Check configuration
cat opencode.json | jq .
# Test provider connection
/connect
```
---
For more information: https://opencode.ai/docs/

16
package-lock.json generated
View File

@@ -1,14 +1,25 @@
{
"name": "everything-claude-code",
"name": "ecc-universal",
"version": "1.4.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "ecc-universal",
"version": "1.4.1",
"hasInstallScript": true,
"license": "MIT",
"bin": {
"ecc-install": "install.sh"
},
"devDependencies": {
"@eslint/js": "^9.39.2",
"eslint": "^9.39.2",
"globals": "^17.1.0",
"markdownlint-cli": "^0.47.0"
},
"engines": {
"node": ">=18"
}
},
"node_modules/@eslint-community/eslint-utils": {
@@ -294,6 +305,7 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -599,6 +611,7 @@
"integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@@ -1930,6 +1943,7 @@
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},

View File

@@ -7,7 +7,7 @@
- Pair programming and code generation
- Worker agents in multi-agent systems
**Sonnet 4.5** (Best coding model):
**Sonnet 4.6** (Best coding model):
- Main development work
- Orchestrating multi-agent workflows
- Complex coding tasks

View File

@@ -58,7 +58,7 @@ function validateAgents() {
}
for (const field of REQUIRED_FIELDS) {
if (!frontmatter[field]) {
if (!frontmatter[field] || (typeof frontmatter[field] === 'string' && !frontmatter[field].trim())) {
console.error(`ERROR: ${file} - Missing required field: ${field}`);
hasErrors = true;
}

View File

@@ -74,14 +74,17 @@ function validateCommands() {
// Check cross-references to other commands (e.g., `/build-fix`)
// Skip lines that describe hypothetical output (e.g., "→ Creates: `/new-table`")
const cmdRefs = contentNoCodeBlocks.matchAll(/^.*`\/([a-z][-a-z0-9]*)`.*$/gm);
for (const match of cmdRefs) {
const line = match[0];
// Process line-by-line so ALL command refs per line are captured
// (previous anchored regex /^.*`\/...`.*$/gm only matched the last ref per line)
for (const line of contentNoCodeBlocks.split('\n')) {
if (/creates:|would create:/i.test(line)) continue;
const refName = match[1];
if (!validCommands.has(refName)) {
console.error(`ERROR: ${file} - references non-existent command /${refName}`);
hasErrors = true;
const lineRefs = line.matchAll(/`\/([a-z][-a-z0-9]*)`/g);
for (const match of lineRefs) {
const refName = match[1];
if (!validCommands.has(refName)) {
console.error(`ERROR: ${file} - references non-existent command /${refName}`);
hasErrors = true;
}
}
}

View File

@@ -34,7 +34,7 @@ function validateHookEntry(hook, label) {
hasErrors = true;
}
if (!hook.command || (typeof hook.command !== 'string' && !Array.isArray(hook.command))) {
if (!hook.command || (typeof hook.command !== 'string' && !Array.isArray(hook.command)) || (typeof hook.command === 'string' && !hook.command.trim()) || (Array.isArray(hook.command) && (hook.command.length === 0 || !hook.command.every(s => typeof s === 'string' && s.length > 0)))) {
console.error(`ERROR: ${label} missing or invalid 'command' field`);
hasErrors = true;
} else if (typeof hook.command === 'string') {

View File

@@ -0,0 +1,330 @@
#!/usr/bin/env node
/**
* scripts/codemaps/generate.ts
*
* Codemap Generator for everything-claude-code (ECC)
*
* Scans the current working directory and generates architectural
* codemap documentation under docs/CODEMAPS/ as specified by the
* doc-updater agent.
*
* Usage:
* npx tsx scripts/codemaps/generate.ts [srcDir]
*
* Output:
* docs/CODEMAPS/INDEX.md
* docs/CODEMAPS/frontend.md
* docs/CODEMAPS/backend.md
* docs/CODEMAPS/database.md
* docs/CODEMAPS/integrations.md
* docs/CODEMAPS/workers.md
*/
import fs from 'fs';
import path from 'path';
// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------
const ROOT = process.cwd();
const SRC_DIR = process.argv[2] ? path.resolve(process.argv[2]) : ROOT;
const OUTPUT_DIR = path.join(ROOT, 'docs', 'CODEMAPS');
const TODAY = new Date().toISOString().split('T')[0];
// Patterns used to classify files into codemap areas
const AREA_PATTERNS: Record<string, RegExp[]> = {
frontend: [
/\/(app|pages|components|hooks|contexts|ui|views|layouts|styles)\//i,
/\.(tsx|jsx|css|scss|sass|less|vue|svelte)$/i,
],
backend: [
/\/(api|routes|controllers|middleware|server|services|handlers)\//i,
/\.(route|controller|handler|middleware|service)\.(ts|js)$/i,
],
database: [
/\/(models|schemas|migrations|prisma|drizzle|db|database|repositories)\//i,
/\.(model|schema|migration|seed)\.(ts|js)$/i,
/prisma\/schema\.prisma$/,
/schema\.sql$/,
],
integrations: [
/\/(integrations?|third-party|external|plugins?|adapters?|connectors?)\//i,
/\.(integration|adapter|connector)\.(ts|js)$/i,
],
workers: [
/\/(workers?|jobs?|queues?|tasks?|cron|background)\//i,
/\.(worker|job|queue|task|cron)\.(ts|js)$/i,
],
};
// ---------------------------------------------------------------------------
// File System Helpers
// ---------------------------------------------------------------------------
/** Recursively collect all files under a directory, skipping common noise dirs. */
function walkDir(dir: string, results: string[] = []): string[] {
const SKIP = new Set([
'node_modules', '.git', '.next', '.nuxt', 'dist', 'build', 'out',
'.turbo', 'coverage', '.cache', '__pycache__', '.venv', 'venv',
]);
let entries: fs.Dirent[];
try {
entries = fs.readdirSync(dir, { withFileTypes: true });
} catch {
return results;
}
for (const entry of entries) {
if (SKIP.has(entry.name)) continue;
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory()) {
walkDir(fullPath, results);
} else if (entry.isFile()) {
results.push(fullPath);
}
}
return results;
}
/** Return path relative to ROOT, always using forward slashes. */
function rel(p: string): string {
return path.relative(ROOT, p).replace(/\\/g, '/');
}
// ---------------------------------------------------------------------------
// Analysis
// ---------------------------------------------------------------------------
interface AreaInfo {
name: string;
files: string[];
entryPoints: string[];
directories: string[];
}
function classifyFiles(allFiles: string[]): Record<string, AreaInfo> {
const areas: Record<string, AreaInfo> = {
frontend: { name: 'Frontend', files: [], entryPoints: [], directories: [] },
backend: { name: 'Backend/API', files: [], entryPoints: [], directories: [] },
database: { name: 'Database', files: [], entryPoints: [], directories: [] },
integrations: { name: 'Integrations', files: [], entryPoints: [], directories: [] },
workers: { name: 'Workers', files: [], entryPoints: [], directories: [] },
};
for (const file of allFiles) {
const relPath = rel(file);
for (const [area, patterns] of Object.entries(AREA_PATTERNS)) {
if (patterns.some((p) => p.test(relPath))) {
areas[area].files.push(relPath);
break;
}
}
}
// Derive unique directories and entry points per area
for (const area of Object.values(areas)) {
const dirs = new Set(area.files.map((f) => path.dirname(f)));
area.directories = [...dirs].sort();
area.entryPoints = area.files
.filter((f) => /index\.(ts|tsx|js|jsx)$/.test(f) || /main\.(ts|tsx|js|jsx)$/.test(f))
.slice(0, 10);
}
return areas;
}
/** Count lines in a file (returns 0 on error). */
function lineCount(p: string): number {
try {
const content = fs.readFileSync(p, 'utf8');
return content.split('\n').length;
} catch {
return 0;
}
}
/** Build a simple directory tree ASCII diagram (max 3 levels deep). */
function buildTree(dir: string, prefix = '', depth = 0): string {
if (depth > 2) return '';
const SKIP = new Set(['node_modules', '.git', 'dist', 'build', '.next', 'coverage']);
let entries: fs.Dirent[];
try {
entries = fs.readdirSync(dir, { withFileTypes: true });
} catch {
return '';
}
const dirs = entries.filter((e) => e.isDirectory() && !SKIP.has(e.name));
const files = entries.filter((e) => e.isFile());
let result = '';
const items = [...dirs, ...files];
items.forEach((entry, i) => {
const isLast = i === items.length - 1;
const connector = isLast ? '└── ' : '├── ';
result += `${prefix}${connector}${entry.name}\n`;
if (entry.isDirectory()) {
const newPrefix = prefix + (isLast ? ' ' : '│ ');
result += buildTree(path.join(dir, entry.name), newPrefix, depth + 1);
}
});
return result;
}
// ---------------------------------------------------------------------------
// Markdown Generators
// ---------------------------------------------------------------------------
function generateAreaDoc(areaKey: string, area: AreaInfo, allFiles: string[]): string {
const fileCount = area.files.length;
const totalLines = area.files.reduce((sum, f) => sum + lineCount(path.join(ROOT, f)), 0);
const entrySection = area.entryPoints.length > 0
? area.entryPoints.map((e) => `- \`${e}\``).join('\n')
: '- *(no index/main entry points detected)*';
const dirSection = area.directories.slice(0, 20)
.map((d) => `- \`${d}/\``)
.join('\n') || '- *(no dedicated directories detected)*';
const fileSection = area.files.slice(0, 30)
.map((f) => `| \`${f}\` | ${lineCount(path.join(ROOT, f))} |`)
.join('\n');
const moreFiles = area.files.length > 30
? `\n*...and ${area.files.length - 30} more files*`
: '';
return `# ${area.name} Codemap
**Last Updated:** ${TODAY}
**Total Files:** ${fileCount}
**Total Lines:** ${totalLines}
## Entry Points
${entrySection}
## Architecture
\`\`\`
${area.name} Directory Structure
${dirSection.replace(/- `/g, '').replace(/`/g, '')}
\`\`\`
## Key Modules
| File | Lines |
|------|-------|
${fileSection}${moreFiles}
## Data Flow
> Detected from file patterns. Review individual files for detailed data flow.
## External Dependencies
> Run \`npx jsdoc2md src/**/*.ts\` to extract JSDoc and identify external dependencies.
## Related Areas
- [INDEX](./INDEX.md) — Full overview
- [Frontend](./frontend.md)
- [Backend/API](./backend.md)
- [Database](./database.md)
- [Integrations](./integrations.md)
- [Workers](./workers.md)
`;
}
function generateIndex(areas: Record<string, AreaInfo>, allFiles: string[]): string {
const totalFiles = allFiles.length;
const areaRows = Object.entries(areas)
.map(([key, area]) => `| [${area.name}](./${key}.md) | ${area.files.length} files | ${area.directories.slice(0, 3).map((d) => `\`${d}\``).join(', ') || '—'} |`)
.join('\n');
const topLevelTree = buildTree(SRC_DIR);
return `# Codebase Overview — CODEMAPS Index
**Last Updated:** ${TODAY}
**Root:** \`${rel(SRC_DIR) || '.'}\`
**Total Files Scanned:** ${totalFiles}
## Areas
| Area | Size | Key Directories |
|------|------|-----------------|
${areaRows}
## Repository Structure
\`\`\`
${rel(SRC_DIR) || path.basename(SRC_DIR)}/
${topLevelTree}\`\`\`
## How to Regenerate
\`\`\`bash
npx tsx scripts/codemaps/generate.ts # Regenerate codemaps
npx madge --image graph.svg src/ # Dependency graph (requires graphviz)
npx jsdoc2md src/**/*.ts # Extract JSDoc
\`\`\`
## Related Documentation
- [Frontend](./frontend.md) — UI components, pages, hooks
- [Backend/API](./backend.md) — API routes, controllers, middleware
- [Database](./database.md) — Models, schemas, migrations
- [Integrations](./integrations.md) — External services & adapters
- [Workers](./workers.md) — Background jobs, queues, cron tasks
`;
}
// ---------------------------------------------------------------------------
// Main
// ---------------------------------------------------------------------------
function main(): void {
console.log(`[generate.ts] Scanning: ${SRC_DIR}`);
console.log(`[generate.ts] Output: ${OUTPUT_DIR}`);
// Ensure output directory exists
fs.mkdirSync(OUTPUT_DIR, { recursive: true });
// Walk the directory tree
const allFiles = walkDir(SRC_DIR);
console.log(`[generate.ts] Found ${allFiles.length} files`);
// Classify files into areas
const areas = classifyFiles(allFiles);
// Generate INDEX.md
const indexContent = generateIndex(areas, allFiles);
const indexPath = path.join(OUTPUT_DIR, 'INDEX.md');
fs.writeFileSync(indexPath, indexContent, 'utf8');
console.log(`[generate.ts] Written: ${rel(indexPath)}`);
// Generate per-area codemaps
for (const [key, area] of Object.entries(areas)) {
const content = generateAreaDoc(key, area, allFiles);
const outPath = path.join(OUTPUT_DIR, `${key}.md`);
fs.writeFileSync(outPath, content, 'utf8');
console.log(`[generate.ts] Written: ${rel(outPath)} (${area.files.length} files)`);
}
console.log('\n[generate.ts] Done! Codemaps written to docs/CODEMAPS/');
console.log('[generate.ts] Files generated:');
console.log(' docs/CODEMAPS/INDEX.md');
console.log(' docs/CODEMAPS/frontend.md');
console.log(' docs/CODEMAPS/backend.md');
console.log(' docs/CODEMAPS/database.md');
console.log(' docs/CODEMAPS/integrations.md');
console.log(' docs/CODEMAPS/workers.md');
}
main();

View File

@@ -32,14 +32,15 @@ process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (data.length < MAX_STDIN) {
data += chunk;
const remaining = MAX_STDIN - data.length;
data += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => {
try {
if (!isGitRepo()) {
console.log(data);
process.stdout.write(data);
process.exit(0);
}
@@ -65,5 +66,6 @@ process.stdin.on('end', () => {
}
// Always output the original data
console.log(data);
process.stdout.write(data);
process.exit(0);
});

View File

@@ -29,7 +29,8 @@ process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (stdinData.length < MAX_STDIN) {
stdinData += chunk;
const remaining = MAX_STDIN - stdinData.length;
stdinData += chunk.substring(0, remaining);
}
});
@@ -64,7 +65,7 @@ async function main() {
if (configContent) {
try {
const config = JSON.parse(configContent);
minSessionLength = config.min_session_length || 10;
minSessionLength = config.min_session_length ?? 10;
if (config.learned_skills_path) {
// Handle ~ in path

View File

@@ -17,7 +17,8 @@ process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (data.length < MAX_STDIN) {
data += chunk;
const remaining = MAX_STDIN - data.length;
data += chunk.substring(0, remaining);
}
});
@@ -28,7 +29,7 @@ process.stdin.on('end', () => {
if (filePath && /\.(ts|tsx|js|jsx)$/.test(filePath)) {
const content = readFile(filePath);
if (!content) { console.log(data); return; }
if (!content) { process.stdout.write(data); process.exit(0); }
const lines = content.split('\n');
const matches = [];
@@ -48,5 +49,6 @@ process.stdin.on('end', () => {
// Invalid input — pass through
}
console.log(data);
process.stdout.write(data);
process.exit(0);
});

View File

@@ -9,6 +9,7 @@
*/
const { execFileSync } = require('child_process');
const path = require('path');
const MAX_STDIN = 1024 * 1024; // 1MB limit
let data = '';
@@ -16,7 +17,8 @@ process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (data.length < MAX_STDIN) {
data += chunk;
const remaining = MAX_STDIN - data.length;
data += chunk.substring(0, remaining);
}
});
@@ -27,7 +29,10 @@ process.stdin.on('end', () => {
if (filePath && /\.(ts|tsx|js|jsx)$/.test(filePath)) {
try {
execFileSync('npx', ['prettier', '--write', filePath], {
// Use npx.cmd on Windows to avoid shell: true which enables command injection
const npxBin = process.platform === 'win32' ? 'npx.cmd' : 'npx';
execFileSync(npxBin, ['prettier', '--write', filePath], {
cwd: path.dirname(path.resolve(filePath)),
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 15000
});
@@ -39,5 +44,6 @@ process.stdin.on('end', () => {
// Invalid input — pass through
}
console.log(data);
process.stdout.write(data);
process.exit(0);
});

View File

@@ -9,60 +9,80 @@
* and reports only errors related to the edited file.
*/
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');
const { execFileSync } = require("child_process");
const fs = require("fs");
const path = require("path");
const MAX_STDIN = 1024 * 1024; // 1MB limit
let data = '';
process.stdin.setEncoding('utf8');
let data = "";
process.stdin.setEncoding("utf8");
process.stdin.on('data', chunk => {
process.stdin.on("data", (chunk) => {
if (data.length < MAX_STDIN) {
data += chunk;
const remaining = MAX_STDIN - data.length;
data += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => {
process.stdin.on("end", () => {
try {
const input = JSON.parse(data);
const filePath = input.tool_input?.file_path;
if (filePath && /\.(ts|tsx)$/.test(filePath)) {
const resolvedPath = path.resolve(filePath);
if (!fs.existsSync(resolvedPath)) { console.log(data); return; }
if (!fs.existsSync(resolvedPath)) {
process.stdout.write(data);
process.exit(0);
}
// Find nearest tsconfig.json by walking up (max 20 levels to prevent infinite loop)
let dir = path.dirname(resolvedPath);
const root = path.parse(dir).root;
let depth = 0;
while (dir !== root && depth < 20) {
if (fs.existsSync(path.join(dir, 'tsconfig.json'))) {
if (fs.existsSync(path.join(dir, "tsconfig.json"))) {
break;
}
dir = path.dirname(dir);
depth++;
}
if (fs.existsSync(path.join(dir, 'tsconfig.json'))) {
if (fs.existsSync(path.join(dir, "tsconfig.json"))) {
try {
execFileSync('npx', ['tsc', '--noEmit', '--pretty', 'false'], {
// Use npx.cmd on Windows to avoid shell: true which enables command injection
const npxBin = process.platform === "win32" ? "npx.cmd" : "npx";
execFileSync(npxBin, ["tsc", "--noEmit", "--pretty", "false"], {
cwd: dir,
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 30000
encoding: "utf8",
stdio: ["pipe", "pipe", "pipe"],
timeout: 30000,
});
} catch (err) {
// tsc exits non-zero when there are errors — filter to edited file
const output = (err.stdout || '') + (err.stderr || '');
const output = (err.stdout || "") + (err.stderr || "");
// Compute paths that uniquely identify the edited file.
// tsc output uses paths relative to its cwd (the tsconfig dir),
// so check for the relative path, absolute path, and original path.
// Avoid bare basename matching — it causes false positives when
// multiple files share the same name (e.g., src/utils.ts vs tests/utils.ts).
const relPath = path.relative(dir, resolvedPath);
const candidates = new Set([filePath, resolvedPath, relPath]);
const relevantLines = output
.split('\n')
.filter(line => line.includes(filePath) || line.includes(path.basename(filePath)))
.split("\n")
.filter((line) => {
for (const candidate of candidates) {
if (line.includes(candidate)) return true;
}
return false;
})
.slice(0, 10);
if (relevantLines.length > 0) {
console.error('[Hook] TypeScript errors in ' + path.basename(filePath) + ':');
relevantLines.forEach(line => console.error(line));
console.error(
"[Hook] TypeScript errors in " + path.basename(filePath) + ":",
);
relevantLines.forEach((line) => console.error(line));
}
}
}
@@ -71,5 +91,6 @@ process.stdin.on('end', () => {
// Invalid input — pass through
}
console.log(data);
process.stdout.write(data);
process.exit(0);
});

View File

@@ -30,7 +30,7 @@ async function main() {
appendFile(compactionLog, `[${timestamp}] Context compaction triggered\n`);
// If there's an active session file, note the compaction
const sessions = findFiles(sessionsDir, '*.tmp');
const sessions = findFiles(sessionsDir, '*-session.tmp');
if (sessions.length > 0) {
const activeSession = sessions[0].path;

View File

@@ -45,18 +45,20 @@ function extractSessionSummary(transcriptPath) {
const entry = JSON.parse(line);
// Collect user messages (first 200 chars each)
if (entry.type === 'user' || entry.role === 'user') {
const text = typeof entry.content === 'string'
? entry.content
: Array.isArray(entry.content)
? entry.content.map(c => (c && c.text) || '').join(' ')
if (entry.type === 'user' || entry.role === 'user' || entry.message?.role === 'user') {
// Support both direct content and nested message.content (Claude Code JSONL format)
const rawContent = entry.message?.content ?? entry.content;
const text = typeof rawContent === 'string'
? rawContent
: Array.isArray(rawContent)
? rawContent.map(c => (c && c.text) || '').join(' ')
: '';
if (text.trim()) {
userMessages.push(text.trim().slice(0, 200));
}
}
// Collect tool names and modified files
// Collect tool names and modified files (direct tool_use entries)
if (entry.type === 'tool_use' || entry.tool_name) {
const toolName = entry.tool_name || entry.name || '';
if (toolName) toolsUsed.add(toolName);
@@ -66,6 +68,21 @@ function extractSessionSummary(transcriptPath) {
filesModified.add(filePath);
}
}
// Extract tool uses from assistant message content blocks (Claude Code JSONL format)
if (entry.type === 'assistant' && Array.isArray(entry.message?.content)) {
for (const block of entry.message.content) {
if (block.type === 'tool_use') {
const toolName = block.name || '';
if (toolName) toolsUsed.add(toolName);
const filePath = block.input?.file_path || '';
if (filePath && (toolName === 'Edit' || toolName === 'Write')) {
filesModified.add(filePath);
}
}
}
}
} catch {
parseErrors++;
}
@@ -92,7 +109,8 @@ process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (stdinData.length < MAX_STDIN) {
stdinData += chunk;
const remaining = MAX_STDIN - stdinData.length;
stdinData += chunk.substring(0, remaining);
}
});
@@ -189,10 +207,10 @@ ${summarySection}
function buildSummarySection(summary) {
let section = '## Session Summary\n\n';
// Tasks (from user messages — escape backticks to prevent markdown breaks)
// Tasks (from user messages — collapse newlines and escape backticks to prevent markdown breaks)
section += '### Tasks\n';
for (const msg of summary.userMessages) {
section += `- ${msg.replace(/`/g, '\\`')}\n`;
section += `- ${msg.replace(/\n/g, ' ').replace(/`/g, '\\`')}\n`;
}
section += '\n';

View File

@@ -44,7 +44,11 @@ async function main() {
const bytesRead = fs.readSync(fd, buf, 0, 64, 0);
if (bytesRead > 0) {
const parsed = parseInt(buf.toString('utf8', 0, bytesRead).trim(), 10);
count = Number.isFinite(parsed) ? parsed + 1 : 1;
// Clamp to reasonable range — corrupted files could contain huge values
// that pass Number.isFinite() (e.g., parseInt('9'.repeat(30)) => 1e+29)
count = (Number.isFinite(parsed) && parsed > 0 && parsed <= 1000000)
? parsed + 1
: 1;
}
// Truncate and write new value
fs.ftruncateSync(fd, 0);
@@ -62,8 +66,8 @@ async function main() {
log(`[StrategicCompact] ${threshold} tool calls reached - consider /compact if transitioning phases`);
}
// Suggest at regular intervals after threshold
if (count > threshold && count % 25 === 0) {
// Suggest at regular intervals after threshold (every 25 calls from threshold)
if (count > threshold && (count - threshold) % 25 === 0) {
log(`[StrategicCompact] ${count} tool calls - good checkpoint for /compact if context is stale`);
}

View File

@@ -282,7 +282,7 @@ function setProjectPackageManager(pmName, projectDir = process.cwd()) {
// Allowed characters in script/binary names: alphanumeric, dash, underscore, dot, slash, @
// This prevents shell metacharacter injection while allowing scoped packages (e.g., @scope/pkg)
const SAFE_NAME_REGEX = /^[@a-zA-Z0-9_.\/-]+$/;
const SAFE_NAME_REGEX = /^[@a-zA-Z0-9_./-]+$/;
/**
* Get the command to run a script
@@ -314,11 +314,15 @@ function getRunCommand(script, options = {}) {
}
}
// Allowed characters in arguments: alphanumeric, whitespace, dashes, dots, slashes,
// equals, colons, commas, quotes, @. Rejects shell metacharacters like ; | & ` $ ( ) { } < > !
const SAFE_ARGS_REGEX = /^[@a-zA-Z0-9\s_./:=,'"*+-]+$/;
/**
* Get the command to execute a package binary
* @param {string} binary - Binary name (e.g., "prettier", "eslint")
* @param {string} args - Arguments to pass
* @throws {Error} If binary name contains unsafe characters
* @throws {Error} If binary name or args contain unsafe characters
*/
function getExecCommand(binary, args = '', options = {}) {
if (!binary || typeof binary !== 'string') {
@@ -327,6 +331,9 @@ function getExecCommand(binary, args = '', options = {}) {
if (!SAFE_NAME_REGEX.test(binary)) {
throw new Error(`Binary name contains unsafe characters: ${binary}`);
}
if (args && typeof args === 'string' && !SAFE_ARGS_REGEX.test(args)) {
throw new Error(`Arguments contain unsafe characters: ${args}`);
}
const pm = getPackageManager(options);
return `${pm.config.execCmd} ${binary}${args ? ' ' + args : ''}`;
@@ -351,6 +358,11 @@ function getSelectionPrompt() {
return message;
}
// Escape regex metacharacters in a string before interpolating into a pattern
function escapeRegex(str) {
return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
/**
* Generate a regex pattern that matches commands for all package managers
* @param {string} action - Action pattern (e.g., "run dev", "install", "test")
@@ -358,28 +370,31 @@ function getSelectionPrompt() {
function getCommandPattern(action) {
const patterns = [];
if (action === 'dev') {
// Trim spaces from action to handle leading/trailing whitespace gracefully
const trimmedAction = action.trim();
if (trimmedAction === 'dev') {
patterns.push(
'npm run dev',
'pnpm( run)? dev',
'yarn dev',
'bun run dev'
);
} else if (action === 'install') {
} else if (trimmedAction === 'install') {
patterns.push(
'npm install',
'pnpm install',
'yarn( install)?',
'bun install'
);
} else if (action === 'test') {
} else if (trimmedAction === 'test') {
patterns.push(
'npm test',
'pnpm test',
'yarn test',
'bun test'
);
} else if (action === 'build') {
} else if (trimmedAction === 'build') {
patterns.push(
'npm run build',
'pnpm( run)? build',
@@ -387,12 +402,13 @@ function getCommandPattern(action) {
'bun run build'
);
} else {
// Generic run command
// Generic run command — escape regex metacharacters in action
const escaped = escapeRegex(trimmedAction);
patterns.push(
`npm run ${action}`,
`pnpm( run)? ${action}`,
`yarn ${action}`,
`bun run ${action}`
`npm run ${escaped}`,
`pnpm( run)? ${escaped}`,
`yarn ${escaped}`,
`bun run ${escaped}`
);
}

View File

@@ -110,8 +110,10 @@ function saveAliases(aliases) {
// Atomic write: write to temp file, then rename
fs.writeFileSync(tempPath, content, 'utf8');
// On Windows, we need to delete the target file before renaming
if (fs.existsSync(aliasesPath)) {
// On Windows, rename fails with EEXIST if destination exists, so delete first.
// On Unix/macOS, rename(2) atomically replaces the destination — skip the
// delete to avoid an unnecessary non-atomic window between unlink and rename.
if (process.platform === 'win32' && fs.existsSync(aliasesPath)) {
fs.unlinkSync(aliasesPath);
}
fs.renameSync(tempPath, aliasesPath);
@@ -310,15 +312,28 @@ function renameAlias(oldAlias, newAlias) {
return { success: false, error: `Alias '${oldAlias}' not found` };
}
if (data.aliases[newAlias]) {
return { success: false, error: `Alias '${newAlias}' already exists` };
// Validate new alias name (same rules as setAlias)
if (!newAlias || newAlias.length === 0) {
return { success: false, error: 'New alias name cannot be empty' };
}
if (newAlias.length > 128) {
return { success: false, error: 'New alias name cannot exceed 128 characters' };
}
// Validate new alias name
if (!/^[a-zA-Z0-9_-]+$/.test(newAlias)) {
return { success: false, error: 'New alias name must contain only letters, numbers, dashes, and underscores' };
}
const reserved = ['list', 'help', 'remove', 'delete', 'create', 'set'];
if (reserved.includes(newAlias.toLowerCase())) {
return { success: false, error: `'${newAlias}' is a reserved alias name` };
}
if (data.aliases[newAlias]) {
return { success: false, error: `Alias '${newAlias}' already exists` };
}
const aliasData = data.aliases[oldAlias];
delete data.aliases[oldAlias];
@@ -433,9 +448,17 @@ function cleanupAliases(sessionExists) {
if (removed.length > 0 && !saveAliases(data)) {
log('[Aliases] Failed to save after cleanup');
return {
success: false,
totalChecked: Object.keys(data.aliases).length + removed.length,
removed: removed.length,
removedAliases: removed,
error: 'Failed to save after cleanup'
};
}
return {
success: true,
totalChecked: Object.keys(data.aliases).length + removed.length,
removed: removed.length,
removedAliases: removed

View File

@@ -32,9 +32,13 @@ function parseSessionFilename(filename) {
const dateStr = match[1];
// Validate date components are in valid ranges (not just format)
// Validate date components are calendar-accurate (not just format)
const [year, month, day] = dateStr.split('-').map(Number);
if (month < 1 || month > 12 || day < 1 || day > 31) return null;
// Reject impossible dates like Feb 31, Apr 31 — Date constructor rolls
// over invalid days (e.g., Feb 31 → Mar 3), so check month roundtrips
const d = new Date(year, month - 1, day);
if (d.getMonth() !== month - 1 || d.getDate() !== day) return null;
// match[2] is undefined for old format (no ID)
const shortId = match[2] || 'no-id';
@@ -43,8 +47,10 @@ function parseSessionFilename(filename) {
filename,
shortId,
date: dateStr,
// Convert date string to Date object
datetime: new Date(dateStr)
// Use local-time constructor (consistent with validation on line 40)
// new Date(dateStr) interprets YYYY-MM-DD as UTC midnight which shows
// as the previous day in negative UTC offset timezones
datetime: new Date(year, month - 1, day)
};
}
@@ -185,12 +191,21 @@ function getSessionStats(sessionPathOrContent) {
*/
function getAllSessions(options = {}) {
const {
limit = 50,
offset = 0,
limit: rawLimit = 50,
offset: rawOffset = 0,
date = null,
search = null
} = options;
// Clamp offset and limit to safe non-negative integers.
// Without this, negative offset causes slice() to count from the end,
// and NaN values cause slice() to return empty or unexpected results.
// Note: cannot use `|| default` because 0 is falsy — use isNaN instead.
const offsetNum = Number(rawOffset);
const offset = Number.isNaN(offsetNum) ? 0 : Math.max(0, Math.floor(offsetNum));
const limitNum = Number(rawLimit);
const limit = Number.isNaN(limitNum) ? 50 : Math.max(1, Math.floor(limitNum));
const sessionsDir = getSessionsDir();
if (!fs.existsSync(sessionsDir)) {

View File

@@ -93,11 +93,19 @@ export function writeFile(filePath: string, content: string): void;
/** Append to a text file, creating parent directories if needed */
export function appendFile(filePath: string, content: string): void;
export interface ReplaceInFileOptions {
/**
* When true and search is a string, replaces ALL occurrences (uses String.replaceAll).
* Ignored for RegExp patterns — use the `g` flag instead.
*/
all?: boolean;
}
/**
* Replace text in a file (cross-platform sed alternative).
* @returns true if the file was found and updated, false if file not found
*/
export function replaceInFile(filePath: string, search: string | RegExp, replace: string): boolean;
export function replaceInFile(filePath: string, search: string | RegExp, replace: string, options?: ReplaceInFileOptions): boolean;
/**
* Count occurrences of a pattern in a file.

View File

@@ -215,6 +215,11 @@ async function readStdinJson(options = {}) {
const timer = setTimeout(() => {
if (!settled) {
settled = true;
// Clean up stdin listeners so the event loop can exit
process.stdin.removeAllListeners('data');
process.stdin.removeAllListeners('end');
process.stdin.removeAllListeners('error');
if (process.stdin.unref) process.stdin.unref();
// Resolve with whatever we have so far rather than hanging
try {
resolve(data.trim() ? JSON.parse(data) : {});
@@ -454,7 +459,15 @@ function grepFile(filePath, pattern) {
let regex;
try {
regex = pattern instanceof RegExp ? pattern : new RegExp(pattern);
if (pattern instanceof RegExp) {
// Always create a new RegExp without the 'g' flag to prevent lastIndex
// state issues when using .test() in a loop (g flag makes .test() stateful,
// causing alternating match/miss on consecutive matching lines)
const flags = pattern.flags.replace('g', '');
regex = new RegExp(pattern.source, flags);
} else {
regex = new RegExp(pattern);
}
} catch {
return []; // Invalid regex pattern
}

View File

@@ -174,7 +174,7 @@ if (args.includes('--list')) {
const globalIdx = args.indexOf('--global');
if (globalIdx !== -1) {
const pmName = args[globalIdx + 1];
if (!pmName) {
if (!pmName || pmName.startsWith('-')) {
console.error('Error: --global requires a package manager name');
process.exit(1);
}
@@ -185,7 +185,7 @@ if (globalIdx !== -1) {
const projectIdx = args.indexOf('--project');
if (projectIdx !== -1) {
const pmName = args[projectIdx + 1];
if (!pmName) {
if (!pmName || pmName.startsWith('-')) {
console.error('Error: --project requires a package manager name');
process.exit(1);
}

View File

@@ -38,10 +38,10 @@ const SPINNER = ['⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇',
// Helper functions
function box(title, content, width = 60) {
const lines = content.split('\n');
const top = `${BOX.topLeft}${BOX.horizontal} ${chalk.bold(chalk.cyan(title))} ${BOX.horizontal.repeat(Math.max(0, width - title.length - 4))}${BOX.topRight}`;
const bottom = `${BOX.bottomLeft}${BOX.horizontal.repeat(width - 1)}${BOX.bottomRight}`;
const top = `${BOX.topLeft}${BOX.horizontal} ${chalk.bold(chalk.cyan(title))} ${BOX.horizontal.repeat(Math.max(0, width - title.length - 5))}${BOX.topRight}`;
const bottom = `${BOX.bottomLeft}${BOX.horizontal.repeat(width - 2)}${BOX.bottomRight}`;
const middle = lines.map(line => {
const padding = width - 3 - stripAnsi(line).length;
const padding = width - 4 - stripAnsi(line).length;
return `${BOX.vertical} ${line}${' '.repeat(Math.max(0, padding))} ${BOX.vertical}`;
}).join('\n');
return `${top}\n${middle}\n${bottom}`;
@@ -53,7 +53,7 @@ function stripAnsi(str) {
}
function progressBar(percent, width = 30) {
const filled = Math.round(width * percent / 100);
const filled = Math.min(width, Math.max(0, Math.round(width * percent / 100)));
const empty = width - filled;
const bar = chalk.green('█'.repeat(filled)) + chalk.gray('░'.repeat(empty));
return `${bar} ${chalk.bold(percent)}%`;
@@ -91,7 +91,7 @@ class SkillCreateOutput {
console.log('\n');
console.log(chalk.bold(chalk.magenta('╔════════════════════════════════════════════════════════════════╗')));
console.log(chalk.bold(chalk.magenta('║')) + chalk.bold(' 🔮 ECC Skill Creator ') + chalk.bold(chalk.magenta('║')));
console.log(chalk.bold(chalk.magenta('║')) + ` ${subtitle}${' '.repeat(Math.max(0, 55 - stripAnsi(subtitle).length))}` + chalk.bold(chalk.magenta('║')));
console.log(chalk.bold(chalk.magenta('║')) + ` ${subtitle}${' '.repeat(Math.max(0, 59 - stripAnsi(subtitle).length))}` + chalk.bold(chalk.magenta('║')));
console.log(chalk.bold(chalk.magenta('╚════════════════════════════════════════════════════════════════╝')));
console.log('');
}
@@ -125,7 +125,7 @@ ${chalk.bold('Files Tracked:')} ${chalk.green(data.files)}
console.log(chalk.gray('─'.repeat(50)));
patterns.forEach((pattern, i) => {
const confidence = pattern.confidence || 0.8;
const confidence = pattern.confidence ?? 0.8;
const confidenceBar = progressBar(Math.round(confidence * 100), 15);
console.log(`
${chalk.bold(chalk.yellow(`${i + 1}.`))} ${chalk.bold(pattern.name)}

View File

@@ -0,0 +1,160 @@
---
name: content-hash-cache-pattern
description: Cache expensive file processing results using SHA-256 content hashes — path-independent, auto-invalidating, with service layer separation.
---
# Content-Hash File Cache Pattern
Cache expensive file processing results (PDF parsing, text extraction, image analysis) using SHA-256 content hashes as cache keys. Unlike path-based caching, this approach survives file moves/renames and auto-invalidates when content changes.
## When to Activate
- Building file processing pipelines (PDF, images, text extraction)
- Processing cost is high and same files are processed repeatedly
- Need a `--cache/--no-cache` CLI option
- Want to add caching to existing pure functions without modifying them
## Core Pattern
### 1. Content-Hash Based Cache Key
Use file content (not path) as the cache key:
```python
import hashlib
from pathlib import Path
_HASH_CHUNK_SIZE = 65536 # 64KB chunks for large files
def compute_file_hash(path: Path) -> str:
"""SHA-256 of file contents (chunked for large files)."""
if not path.is_file():
raise FileNotFoundError(f"File not found: {path}")
sha256 = hashlib.sha256()
with open(path, "rb") as f:
while True:
chunk = f.read(_HASH_CHUNK_SIZE)
if not chunk:
break
sha256.update(chunk)
return sha256.hexdigest()
```
**Why content hash?** File rename/move = cache hit. Content change = automatic invalidation. No index file needed.
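The rename-survival claim is easy to verify. A self-contained sketch (restating `compute_file_hash` from above, exercised against a throwaway temp file):

```python
import hashlib
import tempfile
from pathlib import Path

_HASH_CHUNK_SIZE = 65536

def compute_file_hash(path: Path) -> str:
    """SHA-256 of file contents (chunked for large files)."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(_HASH_CHUNK_SIZE)
            if not chunk:
                break
            sha256.update(chunk)
    return sha256.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    original = Path(tmp) / "report.pdf"
    original.write_bytes(b"fake pdf bytes")
    before = compute_file_hash(original)

    # Rename the file: same bytes, same hash, so the cache key is unchanged
    renamed = original.rename(Path(tmp) / "report-final.pdf")
    assert compute_file_hash(renamed) == before   # rename => cache hit

    # Change the content: hash differs, so the stale entry is never read
    renamed.write_bytes(b"edited pdf bytes")
    assert compute_file_hash(renamed) != before   # content change => cache miss
```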
### 2. Frozen Dataclass for Cache Entry
```python
from dataclasses import dataclass
@dataclass(frozen=True, slots=True)
class CacheEntry:
file_hash: str
source_path: str
document: ExtractedDocument # The cached result
```
### 3. File-Based Cache Storage
Each cache entry is stored as `{hash}.json` — O(1) lookup by hash, no index file required.
```python
import json
def write_cache(cache_dir: Path, entry: CacheEntry) -> None:
cache_dir.mkdir(parents=True, exist_ok=True)
cache_file = cache_dir / f"{entry.file_hash}.json"
data = serialize_entry(entry)
cache_file.write_text(json.dumps(data, ensure_ascii=False), encoding="utf-8")
def read_cache(cache_dir: Path, file_hash: str) -> CacheEntry | None:
cache_file = cache_dir / f"{file_hash}.json"
if not cache_file.is_file():
return None
try:
raw = cache_file.read_text(encoding="utf-8")
data = json.loads(raw)
return deserialize_entry(data)
except (json.JSONDecodeError, ValueError, KeyError):
return None # Treat corruption as cache miss
```
### 4. Service Layer Wrapper (SRP)
Keep the processing function pure. Add caching as a separate service layer.
```python
def extract_with_cache(
file_path: Path,
*,
cache_enabled: bool = True,
cache_dir: Path = Path(".cache"),
) -> ExtractedDocument:
"""Service layer: cache check -> extraction -> cache write."""
if not cache_enabled:
return extract_text(file_path) # Pure function, no cache knowledge
file_hash = compute_file_hash(file_path)
# Check cache
cached = read_cache(cache_dir, file_hash)
if cached is not None:
logger.info("Cache hit: %s (hash=%s)", file_path.name, file_hash[:12])
return cached.document
# Cache miss -> extract -> store
logger.info("Cache miss: %s (hash=%s)", file_path.name, file_hash[:12])
doc = extract_text(file_path)
entry = CacheEntry(file_hash=file_hash, source_path=str(file_path), document=doc)
write_cache(cache_dir, entry)
return doc
```
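The `--cache/--no-cache` option mentioned under activation can be wired with `argparse.BooleanOptionalAction` (Python 3.9+). A hedged sketch, with `extract_with_cache` from above doing the actual work:

```python
import argparse
from pathlib import Path

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Extract text with content-hash caching")
    parser.add_argument("file", type=Path)
    # BooleanOptionalAction generates both --cache and --no-cache from one flag
    parser.add_argument("--cache", action=argparse.BooleanOptionalAction, default=True)
    parser.add_argument("--cache-dir", type=Path, default=Path(".cache"))
    return parser

# Usage inside main():
#   args = build_parser().parse_args()
#   doc = extract_with_cache(args.file, cache_enabled=args.cache, cache_dir=args.cache_dir)
```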
## Key Design Decisions
| Decision | Rationale |
|----------|-----------|
| SHA-256 content hash | Path-independent, auto-invalidates on content change |
| `{hash}.json` file naming | O(1) lookup, no index file needed |
| Service layer wrapper | SRP: extraction stays pure, cache is a separate concern |
| Manual JSON serialization | Full control over frozen dataclass serialization |
| Corruption returns `None` | Graceful degradation, re-processes on next run |
| `cache_dir.mkdir(parents=True)` | Lazy directory creation on first write |
## Best Practices
- **Hash content, not paths** — paths change, content identity doesn't
- **Chunk large files** when hashing — avoid loading entire files into memory
- **Keep processing functions pure** — they should know nothing about caching
- **Log cache hit/miss** with truncated hashes for debugging
- **Handle corruption gracefully** — treat invalid cache entries as misses, never crash
## Anti-Patterns to Avoid
```python
# BAD: Path-based caching (breaks on file move/rename)
cache = {"/path/to/file.pdf": result}
# BAD: Adding cache logic inside the processing function (SRP violation)
def extract_text(path, *, cache_enabled=False, cache_dir=None):
if cache_enabled: # Now this function has two responsibilities
...
# BAD: Using dataclasses.asdict() with nested frozen dataclasses
# (can cause issues with complex nested types)
data = dataclasses.asdict(entry) # Use manual serialization instead
```
## When to Use
- File processing pipelines (PDF parsing, OCR, text extraction, image analysis)
- CLI tools that benefit from `--cache/--no-cache` options
- Batch processing where the same files appear across runs
- Adding caching to existing pure functions without modifying them
## When NOT to Use
- Data that must always be fresh (real-time feeds)
- Cache entries that would be extremely large (consider streaming instead)
- Results that depend on parameters beyond file content (e.g., different extraction configs)

View File

@@ -103,7 +103,8 @@ PARSED_OK=$(echo "$PARSED" | python3 -c "import json,sys; print(json.load(sys.st
if [ "$PARSED_OK" != "True" ]; then
# Fallback: log raw input for debugging
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
TIMESTAMP="$timestamp" echo "$INPUT_JSON" | python3 -c "
export TIMESTAMP="$timestamp"
echo "$INPUT_JSON" | python3 -c "
import json, sys, os
raw = sys.stdin.read()[:2000]
print(json.dumps({'timestamp': os.environ['TIMESTAMP'], 'event': 'parse_error', 'raw': raw}))
@@ -124,7 +125,8 @@ fi
# Build and write observation
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
TIMESTAMP="$timestamp" echo "$PARSED" | python3 -c "
export TIMESTAMP="$timestamp"
echo "$PARSED" | python3 -c "
import json, sys, os
parsed = json.load(sys.stdin)

View File

@@ -88,7 +88,12 @@ def load_all_instincts() -> list[dict]:
for directory in [PERSONAL_DIR, INHERITED_DIR]:
if not directory.exists():
continue
for file in directory.glob("*.yaml"):
yaml_files = sorted(
set(directory.glob("*.yaml"))
| set(directory.glob("*.yml"))
| set(directory.glob("*.md"))
)
for file in yaml_files:
try:
content = file.read_text()
parsed = parse_instinct_file(content)
@@ -433,15 +438,96 @@ def cmd_evolve(args):
print()
if args.generate:
print("\n[Would generate evolved structures here]")
print(" Skills would be saved to:", EVOLVED_DIR / "skills")
print(" Commands would be saved to:", EVOLVED_DIR / "commands")
print(" Agents would be saved to:", EVOLVED_DIR / "agents")
generated = _generate_evolved(skill_candidates, workflow_instincts, agent_candidates)
if generated:
print(f"\n✅ Generated {len(generated)} evolved structures:")
for path in generated:
print(f" {path}")
else:
print("\nNo structures generated (need higher-confidence clusters).")
print(f"\n{'='*60}\n")
return 0
# ─────────────────────────────────────────────
# Generate Evolved Structures
# ─────────────────────────────────────────────
def _generate_evolved(skill_candidates: list, workflow_instincts: list, agent_candidates: list) -> list[str]:
"""Generate skill/command/agent files from analyzed instinct clusters."""
generated = []
# Generate skills from top candidates
for cand in skill_candidates[:5]:
trigger = cand['trigger'].strip()
if not trigger:
continue
name = re.sub(r'[^a-z0-9]+', '-', trigger.lower()).strip('-')[:30]
if not name:
continue
skill_dir = EVOLVED_DIR / "skills" / name
skill_dir.mkdir(parents=True, exist_ok=True)
content = f"# {name}\n\n"
content += f"Evolved from {len(cand['instincts'])} instincts "
content += f"(avg confidence: {cand['avg_confidence']:.0%})\n\n"
content += f"## When to Apply\n\n"
content += f"Trigger: {trigger}\n\n"
content += f"## Actions\n\n"
for inst in cand['instincts']:
inst_content = inst.get('content', '')
action_match = re.search(r'## Action\s*\n\s*(.+?)(?:\n\n|\n##|$)', inst_content, re.DOTALL)
action = action_match.group(1).strip() if action_match else inst.get('id', 'unnamed')
content += f"- {action}\n"
(skill_dir / "SKILL.md").write_text(content)
generated.append(str(skill_dir / "SKILL.md"))
# Generate commands from workflow instincts
for inst in workflow_instincts[:5]:
trigger = inst.get('trigger', 'unknown')
cmd_name = re.sub(r'[^a-z0-9]+', '-', trigger.lower().replace('when ', '').replace('implementing ', ''))
cmd_name = cmd_name.strip('-')[:20]
if not cmd_name:
continue
cmd_file = EVOLVED_DIR / "commands" / f"{cmd_name}.md"
content = f"# {cmd_name}\n\n"
content += f"Evolved from instinct: {inst.get('id', 'unnamed')}\n"
content += f"Confidence: {inst.get('confidence', 0.5):.0%}\n\n"
content += inst.get('content', '')
cmd_file.write_text(content)
generated.append(str(cmd_file))
# Generate agents from complex clusters
for cand in agent_candidates[:3]:
trigger = cand['trigger'].strip()
agent_name = re.sub(r'[^a-z0-9]+', '-', trigger.lower()).strip('-')[:20]
if not agent_name:
continue
agent_file = EVOLVED_DIR / "agents" / f"{agent_name}.md"
domains = ', '.join(cand['domains'])
instinct_ids = [i.get('id', 'unnamed') for i in cand['instincts']]
content = f"---\nmodel: sonnet\ntools: Read, Grep, Glob\n---\n"
content += f"# {agent_name}\n\n"
content += f"Evolved from {len(cand['instincts'])} instincts "
content += f"(avg confidence: {cand['avg_confidence']:.0%})\n"
content += f"Domains: {domains}\n\n"
content += f"## Source Instincts\n\n"
for iid in instinct_ids:
content += f"- {iid}\n"
agent_file.write_text(content)
generated.append(str(agent_file))
return generated
# ─────────────────────────────────────────────
# Main
# ─────────────────────────────────────────────

View File

@@ -0,0 +1,182 @@
---
name: cost-aware-llm-pipeline
description: Cost optimization patterns for LLM API usage — model routing by task complexity, budget tracking, retry logic, and prompt caching.
---
# Cost-Aware LLM Pipeline
Patterns for controlling LLM API costs while maintaining quality. Combines model routing, budget tracking, retry logic, and prompt caching into a composable pipeline.
## When to Activate
- Building applications that call LLM APIs (Claude, GPT, etc.)
- Processing batches of items with varying complexity
- Need to stay within a budget for API spend
- Optimizing cost without sacrificing quality on complex tasks
## Core Concepts
### 1. Model Routing by Task Complexity
Automatically select cheaper models for simple tasks, reserving expensive models for complex ones.
```python
MODEL_SONNET = "claude-sonnet-4-6"
MODEL_HAIKU = "claude-haiku-4-5-20251001"
_SONNET_TEXT_THRESHOLD = 10_000 # chars
_SONNET_ITEM_THRESHOLD = 30 # items
def select_model(
text_length: int,
item_count: int,
force_model: str | None = None,
) -> str:
"""Select model based on task complexity."""
if force_model is not None:
return force_model
if text_length >= _SONNET_TEXT_THRESHOLD or item_count >= _SONNET_ITEM_THRESHOLD:
return MODEL_SONNET # Complex task
return MODEL_HAIKU # Simple task (3-4x cheaper)
```
### 2. Immutable Cost Tracking
Track cumulative spend with frozen dataclasses. Each API call returns a new tracker — never mutates state.
```python
from dataclasses import dataclass
@dataclass(frozen=True, slots=True)
class CostRecord:
model: str
input_tokens: int
output_tokens: int
cost_usd: float
@dataclass(frozen=True, slots=True)
class CostTracker:
budget_limit: float = 1.00
records: tuple[CostRecord, ...] = ()
def add(self, record: CostRecord) -> "CostTracker":
"""Return new tracker with added record (never mutates self)."""
return CostTracker(
budget_limit=self.budget_limit,
records=(*self.records, record),
)
@property
def total_cost(self) -> float:
return sum(r.cost_usd for r in self.records)
@property
def over_budget(self) -> bool:
return self.total_cost > self.budget_limit
```
### 3. Narrow Retry Logic
Retry only on transient errors. Fail fast on authentication or bad request errors.
```python
import time

from anthropic import (
APIConnectionError,
InternalServerError,
RateLimitError,
)
_RETRYABLE_ERRORS = (APIConnectionError, RateLimitError, InternalServerError)
_MAX_RETRIES = 3
def call_with_retry(func, *, max_retries: int = _MAX_RETRIES):
"""Retry only on transient errors, fail fast on others."""
for attempt in range(max_retries):
try:
return func()
except _RETRYABLE_ERRORS:
if attempt == max_retries - 1:
raise
time.sleep(2 ** attempt) # Exponential backoff
# AuthenticationError, BadRequestError etc. → raise immediately
```
### 4. Prompt Caching
Cache long system prompts to avoid resending them on every request.
```python
messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": system_prompt,
"cache_control": {"type": "ephemeral"}, # Cache this
},
{
"type": "text",
"text": user_input, # Variable part
},
],
}
]
```
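The composition snippet below calls a `build_cached_messages` helper. It is not part of the Anthropic SDK, so here is one possible sketch that pairs the cacheable system prompt with the variable user input, following the message shape shown above:

```python
def build_cached_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Mark the long, stable system prompt as cacheable; leave the user input uncached."""
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": system_prompt,
                    "cache_control": {"type": "ephemeral"},
                },
                {"type": "text", "text": user_input},
            ],
        }
    ]
```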
## Composition
Combine all four techniques in a single pipeline function:
```python
def process(text: str, config: Config, tracker: CostTracker) -> tuple[Result, CostTracker]:
# 1. Route model
model = select_model(len(text), estimated_items, config.force_model)
# 2. Check budget
if tracker.over_budget:
raise BudgetExceededError(tracker.total_cost, tracker.budget_limit)
# 3. Call with retry + caching
response = call_with_retry(lambda: client.messages.create(
model=model,
messages=build_cached_messages(system_prompt, text),
))
# 4. Track cost (immutable)
record = CostRecord(model=model, input_tokens=..., output_tokens=..., cost_usd=...)
tracker = tracker.add(record)
return parse_result(response), tracker
```
## Pricing Reference (2025-2026)
| Model | Input ($/1M tokens) | Output ($/1M tokens) | Relative Cost |
|-------|---------------------|----------------------|---------------|
| Haiku 4.5 | $0.80 | $4.00 | 1x |
| Sonnet 4.6 | $3.00 | $15.00 | ~4x |
| Opus 4.5 | $15.00 | $75.00 | ~19x |
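To fill in the `cost_usd` field of a `CostRecord`, the table rates convert token counts (taken from the API response's usage block) into dollars. A sketch reusing the two model IDs from the routing example; the rate values come from the table above:

```python
# $ per 1M tokens, per the pricing table
PRICES = {
    "claude-haiku-4-5-20251001": {"input": 0.80, "output": 4.00},
    "claude-sonnet-4-6": {"input": 3.00, "output": 15.00},
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Convert a single call's token usage into dollars using per-million rates."""
    rates = PRICES[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000
```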
## Best Practices
- **Start with the cheapest model** and only route to expensive models when complexity thresholds are met
- **Set explicit budget limits** before processing batches — fail early rather than overspend
- **Log model selection decisions** so you can tune thresholds based on real data
- **Use prompt caching** for system prompts over 1024 tokens — saves both cost and latency
- **Never retry on authentication or validation errors** — only transient failures (network, rate limit, server error)
## Anti-Patterns to Avoid
- Using the most expensive model for all requests regardless of complexity
- Retrying on all errors (wastes budget on permanent failures)
- Mutating cost tracking state (makes debugging and auditing difficult)
- Hardcoding model names throughout the codebase (use constants or config)
- Ignoring prompt caching for repetitive system prompts
## When to Use
- Any application calling Claude, OpenAI, or similar LLM APIs
- Batch processing pipelines where cost adds up quickly
- Multi-model architectures that need intelligent routing
- Production systems that need budget guardrails

View File

@@ -0,0 +1,722 @@
---
name: cpp-coding-standards
description: C++ coding standards based on the C++ Core Guidelines (isocpp.github.io). Use when writing, reviewing, or refactoring C++ code to enforce modern, safe, and idiomatic practices.
---
# C++ Coding Standards (C++ Core Guidelines)
Comprehensive coding standards for modern C++ (C++17/20/23) derived from the [C++ Core Guidelines](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines). Enforces type safety, resource safety, immutability, and clarity.
## When to Use
- Writing new C++ code (classes, functions, templates)
- Reviewing or refactoring existing C++ code
- Making architectural decisions in C++ projects
- Enforcing consistent style across a C++ codebase
- Choosing between language features (e.g., `enum` vs `enum class`, raw pointer vs smart pointer)
### When NOT to Use
- Non-C++ projects
- Legacy C codebases that cannot adopt modern C++ features
- Embedded/bare-metal contexts where specific guidelines conflict with hardware constraints (adapt selectively)
## Cross-Cutting Principles
These themes recur across the entire guidelines and form the foundation:
1. **RAII everywhere** (P.8, R.1, E.6, CP.20): Bind resource lifetime to object lifetime
2. **Immutability by default** (P.10, Con.1-5, ES.25): Start with `const`/`constexpr`; mutability is the exception
3. **Type safety** (P.4, I.4, ES.46-49, Enum.3): Use the type system to prevent errors at compile time
4. **Express intent** (P.3, F.1, NL.1-2, T.10): Names, types, and concepts should communicate purpose
5. **Minimize complexity** (F.2-3, ES.5, Per.4-5): Simple code is correct code
6. **Value semantics over pointer semantics** (C.10, R.3-5, F.20, CP.31): Prefer returning by value and scoped objects
## Philosophy & Interfaces (P.*, I.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **P.1** | Express ideas directly in code |
| **P.3** | Express intent |
| **P.4** | Ideally, a program should be statically type safe |
| **P.5** | Prefer compile-time checking to run-time checking |
| **P.8** | Don't leak any resources |
| **P.10** | Prefer immutable data to mutable data |
| **I.1** | Make interfaces explicit |
| **I.2** | Avoid non-const global variables |
| **I.4** | Make interfaces precisely and strongly typed |
| **I.11** | Never transfer ownership by a raw pointer or reference |
| **I.23** | Keep the number of function arguments low |
### DO
```cpp
// P.10 + I.4: Immutable, strongly typed interface
struct Temperature {
    double kelvin;
};

Temperature boil(const Temperature& water);
```
### DON'T
```cpp
// Weak interface: unclear ownership, unclear units
double boil(double* temp);
// Non-const global variable
int g_counter = 0; // I.2 violation
```
## Functions (F.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **F.1** | Package meaningful operations as carefully named functions |
| **F.2** | A function should perform a single logical operation |
| **F.3** | Keep functions short and simple |
| **F.4** | If a function might be evaluated at compile time, declare it `constexpr` |
| **F.6** | If your function must not throw, declare it `noexcept` |
| **F.8** | Prefer pure functions |
| **F.16** | For "in" parameters, pass cheaply-copied types by value and others by `const&` |
| **F.20** | For "out" values, prefer return values to output parameters |
| **F.21** | To return multiple "out" values, prefer returning a struct |
| **F.43** | Never return a pointer or reference to a local object |
### Parameter Passing
```cpp
// F.16: Cheap types by value, others by const&
void print(int x); // cheap: by value
void analyze(const std::string& data); // expensive: by const&
void transform(std::string s); // sink: by value (will move)
// F.20 + F.21: Return values, not output parameters
struct ParseResult {
    std::string token;
    int position;
};

ParseResult parse(std::string_view input);  // GOOD: return struct

// BAD: output parameters
void parse(std::string_view input,
           std::string& token, int& pos);   // avoid this
```
### Pure Functions and constexpr
```cpp
// F.4 + F.8: Pure, constexpr where possible
constexpr int factorial(int n) noexcept {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

static_assert(factorial(5) == 120);
```
### Anti-Patterns
- Returning `T&&` from functions (F.45)
- Using `va_arg` / C-style variadics (F.55)
- Capturing by reference in lambdas passed to other threads (F.53)
- Returning `const T` which inhibits move semantics (F.49)
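F.43 deserves a concrete sketch (hypothetical `greeting` function): return locals by value and let move semantics or copy elision do the work, rather than handing back a reference to a local.

```cpp
#include <string>

// F.43 sketch: return the local by value -- it is moved or elided out.
// Returning `std::string&` bound to `result` would dangle the moment
// the function returns.
std::string greeting(const std::string& name) {
    std::string result = "Hello, " + name;
    return result;
}
```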
## Classes & Class Hierarchies (C.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **C.2** | Use `class` if invariant exists; `struct` if data members vary independently |
| **C.9** | Minimize exposure of members |
| **C.20** | If you can avoid defining default operations, do (Rule of Zero) |
| **C.21** | If you define or `=delete` any copy/move/destructor, handle them all (Rule of Five) |
| **C.35** | Base class destructor: public virtual or protected non-virtual |
| **C.41** | A constructor should create a fully initialized object |
| **C.46** | Declare single-argument constructors `explicit` |
| **C.67** | A polymorphic class should suppress public copy/move |
| **C.128** | Virtual functions: specify exactly one of `virtual`, `override`, or `final` |
### Rule of Zero
```cpp
// C.20: Let the compiler generate special members
struct Employee {
    std::string name;
    std::string department;
    int id;
    // No destructor, copy/move constructors, or assignment operators needed
};
```
### Rule of Five
```cpp
// C.21: If you must manage a resource, define all five
class Buffer {
public:
    explicit Buffer(std::size_t size)
        : data_(std::make_unique<char[]>(size)), size_(size) {}

    ~Buffer() = default;

    Buffer(const Buffer& other)
        : data_(std::make_unique<char[]>(other.size_)), size_(other.size_) {
        std::copy_n(other.data_.get(), size_, data_.get());
    }

    Buffer& operator=(const Buffer& other) {
        if (this != &other) {
            auto new_data = std::make_unique<char[]>(other.size_);
            std::copy_n(other.data_.get(), other.size_, new_data.get());
            data_ = std::move(new_data);
            size_ = other.size_;
        }
        return *this;
    }

    Buffer(Buffer&&) noexcept = default;
    Buffer& operator=(Buffer&&) noexcept = default;

private:
    std::unique_ptr<char[]> data_;
    std::size_t size_;
};
```
### Class Hierarchy
```cpp
// C.35 + C.128: Virtual destructor, use override
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;  // C.121: pure interface
};

class Circle : public Shape {
public:
    explicit Circle(double r) : radius_(r) {}
    double area() const override { return 3.14159 * radius_ * radius_; }
private:
    double radius_;
};
```
### Anti-Patterns
- Calling virtual functions in constructors/destructors (C.82)
- Using `memset`/`memcpy` on non-trivial types (C.90)
- Providing different default arguments for virtual function and overrider (C.140)
- Making data members `const` or references, which suppresses move/copy (C.12)
## Resource Management (R.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **R.1** | Manage resources automatically using RAII |
| **R.3** | A raw pointer (`T*`) is non-owning |
| **R.5** | Prefer scoped objects; don't heap-allocate unnecessarily |
| **R.10** | Avoid `malloc()`/`free()` |
| **R.11** | Avoid calling `new` and `delete` explicitly |
| **R.20** | Use `unique_ptr` or `shared_ptr` to represent ownership |
| **R.21** | Prefer `unique_ptr` over `shared_ptr` unless sharing ownership |
| **R.22** | Use `make_shared()` to make `shared_ptr`s |
### Smart Pointer Usage
```cpp
// R.11 + R.20 + R.21: RAII with smart pointers
auto widget = std::make_unique<Widget>("config");  // unique ownership
auto cache = std::make_shared<Cache>(1024);        // shared ownership

// R.3: Raw pointer = non-owning observer
void render(const Widget* w) {  // does NOT own w
    if (w) w->draw();
}
render(widget.get());
```
### RAII Pattern
```cpp
// R.1: Resource acquisition is initialization
class FileHandle {
public:
    explicit FileHandle(const std::string& path)
        : handle_(std::fopen(path.c_str(), "r")) {
        if (!handle_) throw std::runtime_error("Failed to open: " + path);
    }

    ~FileHandle() {
        if (handle_) std::fclose(handle_);
    }

    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

    FileHandle(FileHandle&& other) noexcept
        : handle_(std::exchange(other.handle_, nullptr)) {}

    FileHandle& operator=(FileHandle&& other) noexcept {
        if (this != &other) {
            if (handle_) std::fclose(handle_);
            handle_ = std::exchange(other.handle_, nullptr);
        }
        return *this;
    }

private:
    std::FILE* handle_;
};
```
### Anti-Patterns
- Naked `new`/`delete` (R.11)
- `malloc()`/`free()` in C++ code (R.10)
- Multiple resource allocations in a single expression (R.13 -- exception safety hazard)
- `shared_ptr` where `unique_ptr` suffices (R.21)
## Expressions & Statements (ES.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **ES.5** | Keep scopes small |
| **ES.20** | Always initialize an object |
| **ES.23** | Prefer `{}` initializer syntax |
| **ES.25** | Declare objects `const` or `constexpr` unless modification is intended |
| **ES.28** | Use lambdas for complex initialization of `const` variables |
| **ES.45** | Avoid magic constants; use symbolic constants |
| **ES.46** | Avoid narrowing/lossy arithmetic conversions |
| **ES.47** | Use `nullptr` rather than `0` or `NULL` |
| **ES.48** | Avoid casts |
| **ES.50** | Don't cast away `const` |
### Initialization
```cpp
// ES.20 + ES.23 + ES.25: Always initialize, prefer {}, default to const
const int max_retries{3};
const std::string name{"widget"};
const std::vector<int> primes{2, 3, 5, 7, 11};

// ES.28: Lambda for complex const initialization
const auto config = [&] {
    Config c;
    c.timeout = std::chrono::seconds{30};
    c.retries = max_retries;
    c.verbose = debug_mode;
    return c;
}();
```
### Anti-Patterns
- Uninitialized variables (ES.20)
- Using `0` or `NULL` as pointer (ES.47 -- use `nullptr`)
- C-style casts (ES.48 -- use `static_cast`, `const_cast`, etc.)
- Casting away `const` (ES.50)
- Magic numbers without named constants (ES.45)
- Mixing signed and unsigned arithmetic (ES.100)
- Reusing names in nested scopes (ES.12)
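To make the cast and magic-number rules concrete, a minimal sketch (hypothetical names and values):

```cpp
// ES.45: name the constant instead of sprinkling 100.0 around
constexpr double cents_per_dollar = 100.0;

// ES.48: static_cast makes the narrowing intent explicit and searchable
int to_cents(double dollars) {
    return static_cast<int>(dollars * cents_per_dollar);
}

// ES.47: nullptr, never 0 or NULL, for pointer checks
bool has_value(const int* p) {
    return p != nullptr;
}
```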
## Error Handling (E.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **E.1** | Develop an error-handling strategy early in a design |
| **E.2** | Throw an exception to signal that a function can't perform its assigned task |
| **E.6** | Use RAII to prevent leaks |
| **E.12** | Use `noexcept` when throwing is impossible or unacceptable |
| **E.14** | Use purpose-designed user-defined types as exceptions |
| **E.15** | Throw by value, catch by reference |
| **E.16** | Destructors, deallocation, and swap must never fail |
| **E.17** | Don't try to catch every exception in every function |
### Exception Hierarchy
```cpp
// E.14 + E.15: Custom exception types, throw by value, catch by reference
class AppError : public std::runtime_error {
public:
    using std::runtime_error::runtime_error;
};

class NetworkError : public AppError {
public:
    NetworkError(const std::string& msg, int code)
        : AppError(msg), status_code(code) {}
    int status_code;
};

void fetch_data(const std::string& url) {
    // E.2: Throw to signal failure
    throw NetworkError("connection refused", 503);
}

void run() {
    try {
        fetch_data("https://api.example.com");
    } catch (const NetworkError& e) {
        log_error(e.what(), e.status_code);
    } catch (const AppError& e) {
        log_error(e.what());
    }
    // E.17: Don't catch everything here -- let unexpected errors propagate
}
```
### Anti-Patterns
- Throwing built-in types like `int` or string literals (E.14)
- Catching by value (slicing risk) (E.15)
- Empty catch blocks that silently swallow errors
- Using exceptions for flow control (E.3)
- Error handling based on global state like `errno` (E.28)
## Constants & Immutability (Con.*)
### All Rules
| Rule | Summary |
|------|---------|
| **Con.1** | By default, make objects immutable |
| **Con.2** | By default, make member functions `const` |
| **Con.3** | By default, pass pointers and references to `const` |
| **Con.4** | Use `const` for values that don't change after construction |
| **Con.5** | Use `constexpr` for values computable at compile time |
```cpp
// Con.1 through Con.5: Immutability by default
class Sensor {
public:
    explicit Sensor(std::string id) : id_(std::move(id)) {}

    // Con.2: const member functions by default
    const std::string& id() const { return id_; }
    double last_reading() const { return reading_; }

    // Only non-const when mutation is required
    void record(double value) { reading_ = value; }

private:
    const std::string id_;  // Con.4: never changes after construction
    double reading_{0.0};
};

// Con.3: Pass by const reference
void display(const Sensor& s) {
    std::cout << s.id() << ": " << s.last_reading() << '\n';
}

// Con.5: Compile-time constants (lower case per NL.9 -- they are not macros)
constexpr double pi = 3.14159265358979;
constexpr int max_sensors = 256;
```
## Concurrency & Parallelism (CP.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **CP.2** | Avoid data races |
| **CP.3** | Minimize explicit sharing of writable data |
| **CP.4** | Think in terms of tasks, rather than threads |
| **CP.8** | Don't use `volatile` for synchronization |
| **CP.20** | Use RAII, never plain `lock()`/`unlock()` |
| **CP.21** | Use `std::scoped_lock` to acquire multiple mutexes |
| **CP.22** | Never call unknown code while holding a lock |
| **CP.42** | Don't wait without a condition |
| **CP.44** | Remember to name your `lock_guard`s and `unique_lock`s |
| **CP.100** | Don't use lock-free programming unless you absolutely have to |
### Safe Locking
```cpp
// CP.20 + CP.44: RAII locks, always named
class ThreadSafeQueue {
public:
    void push(int value) {
        std::lock_guard<std::mutex> lock(mutex_);  // CP.44: named!
        queue_.push(value);
        cv_.notify_one();
    }

    int pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        // CP.42: Always wait with a condition
        cv_.wait(lock, [this] { return !queue_.empty(); });
        const int value = queue_.front();
        queue_.pop();
        return value;
    }

private:
    std::mutex mutex_;  // CP.50: mutex with its data
    std::condition_variable cv_;
    std::queue<int> queue_;
};
```
### Multiple Mutexes
```cpp
// CP.21: std::scoped_lock for multiple mutexes (deadlock-free)
void transfer(Account& from, Account& to, double amount) {
    std::scoped_lock lock(from.mutex_, to.mutex_);
    from.balance_ -= amount;
    to.balance_ += amount;
}
```
### Anti-Patterns
- `volatile` for synchronization (CP.8 -- it's for hardware I/O only)
- Detaching threads (CP.26 -- lifetime management becomes nearly impossible)
- Unnamed lock guards: `std::lock_guard<std::mutex>(m);` destroys immediately (CP.44)
- Holding locks while calling callbacks (CP.22 -- deadlock risk)
- Lock-free programming without deep expertise (CP.100)
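A runnable sketch of CP.44 (hypothetical shared counter): four workers increment a counter under a *named* `lock_guard`. With an unnamed temporary guard (`std::lock_guard<std::mutex>{counter_mutex};`) the lock would be released on the same line, and the increments would race.

```cpp
#include <mutex>
#include <thread>
#include <vector>

int counter = 0;
std::mutex counter_mutex;

void add_n(int n) {
    for (int i = 0; i < n; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);  // CP.44: named guard
        ++counter;
    }
}

int run_workers() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) workers.emplace_back(add_n, 1000);
    for (auto& t : workers) t.join();
    return counter;  // deterministic only because the guard is named
}
```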
## Templates & Generic Programming (T.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **T.1** | Use templates to raise the level of abstraction |
| **T.2** | Use templates to express algorithms for many argument types |
| **T.10** | Specify concepts for all template arguments |
| **T.11** | Use standard concepts whenever possible |
| **T.13** | Prefer shorthand notation for simple concepts |
| **T.43** | Prefer `using` over `typedef` |
| **T.120** | Use template metaprogramming only when you really need to |
| **T.144** | Don't specialize function templates (overload instead) |
### Concepts (C++20)
```cpp
#include <concepts>
#include <utility>  // std::exchange

// T.10 + T.11: Constrain templates with standard concepts
template<std::integral T>
T gcd(T a, T b) {
    while (b != 0) {
        a = std::exchange(b, a % b);
    }
    return a;
}

// T.13: Shorthand concept syntax
void sort(std::ranges::random_access_range auto& range) {
    std::ranges::sort(range);
}

// Custom concept for domain-specific constraints
template<typename T>
concept Serializable = requires(const T& t) {
    { t.serialize() } -> std::convertible_to<std::string>;
};

template<Serializable T>
void save(const T& obj, const std::string& path);
```
### Anti-Patterns
- Unconstrained templates in visible namespaces (T.47)
- Specializing function templates instead of overloading (T.144)
- Template metaprogramming where `constexpr` suffices (T.120)
- `typedef` instead of `using` (T.43)
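T.144 in one sketch (hypothetical `describe` functions): add an ordinary overload instead of specializing the function template, because specializations do not participate in overload resolution.

```cpp
#include <string>

// Primary template: generic fallback
template<typename T>
std::string describe(const T&) { return "value"; }

// T.144: an overload, not a specialization -- overload resolution
// prefers it for std::string arguments
std::string describe(const std::string&) { return "string"; }
```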
## Standard Library (SL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SL.1** | Use libraries wherever possible |
| **SL.2** | Prefer the standard library to other libraries |
| **SL.con.1** | Prefer `std::array` or `std::vector` over C arrays |
| **SL.con.2** | Prefer `std::vector` by default |
| **SL.str.1** | Use `std::string` to own character sequences |
| **SL.str.2** | Use `std::string_view` to refer to character sequences |
| **SL.io.50** | Avoid `endl` (use `'\n'` -- `endl` forces a flush) |
```cpp
// SL.con.1 + SL.con.2: Prefer vector/array over C arrays
const std::array<int, 4> fixed_data{1, 2, 3, 4};
std::vector<std::string> dynamic_data;
// SL.str.1 + SL.str.2: string owns, string_view observes
std::string build_greeting(std::string_view name) {
    return "Hello, " + std::string(name) + "!";
}
// SL.io.50: Use '\n' not endl
std::cout << "result: " << value << '\n';
```
## Enumerations (Enum.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Enum.1** | Prefer enumerations over macros |
| **Enum.3** | Prefer `enum class` over plain `enum` |
| **Enum.5** | Don't use ALL_CAPS for enumerators |
| **Enum.6** | Avoid unnamed enumerations |
```cpp
// Enum.3 + Enum.5: Scoped enum, no ALL_CAPS
enum class Color { red, green, blue };
enum class LogLevel { debug, info, warning, error };
// BAD: plain enum leaks names, ALL_CAPS clashes with macros
enum { RED, GREEN, BLUE }; // Enum.3 + Enum.5 + Enum.6 violation
#define MAX_SIZE 100 // Enum.1 violation -- use constexpr
```
## Source Files & Naming (SF.*, NL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SF.1** | Use `.cpp` for code files and `.h` for interface files |
| **SF.7** | Don't write `using namespace` at global scope in a header |
| **SF.8** | Use `#include` guards for all `.h` files |
| **SF.11** | Header files should be self-contained |
| **NL.5** | Avoid encoding type information in names (no Hungarian notation) |
| **NL.8** | Use a consistent naming style |
| **NL.9** | Use ALL_CAPS for macro names only |
| **NL.10** | Prefer `underscore_style` names |
### Header Guard
```cpp
// SF.8: Include guard (or #pragma once)
#ifndef PROJECT_MODULE_WIDGET_H
#define PROJECT_MODULE_WIDGET_H

// SF.11: Self-contained -- include everything this header needs
#include <string>
#include <vector>

namespace project::module {

class Widget {
public:
    explicit Widget(std::string name);
    const std::string& name() const;
private:
    std::string name_;
};

}  // namespace project::module

#endif  // PROJECT_MODULE_WIDGET_H
```
### Naming Conventions
```cpp
// NL.8 + NL.10: Consistent underscore_style
namespace my_project {

constexpr int max_buffer_size = 4096;  // NL.9: not ALL_CAPS (it's not a macro)

class tcp_connection {  // underscore_style class
public:
    void send_message(std::string_view msg);
    bool is_connected() const;
private:
    std::string host_;  // trailing underscore for members
    int port_;
};

}  // namespace my_project
```
### Anti-Patterns
- `using namespace std;` in a header at global scope (SF.7)
- Headers that depend on inclusion order (SF.10, SF.11)
- Hungarian notation like `strName`, `iCount` (NL.5)
- ALL_CAPS for anything other than macros (NL.9)
## Performance (Per.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Per.1** | Don't optimize without reason |
| **Per.2** | Don't optimize prematurely |
| **Per.6** | Don't make claims about performance without measurements |
| **Per.7** | Design to enable optimization |
| **Per.10** | Rely on the static type system |
| **Per.11** | Move computation from run time to compile time |
| **Per.19** | Access memory predictably |
### Guidelines
```cpp
// Per.11: Compile-time computation where possible
constexpr auto lookup_table = [] {
    std::array<int, 256> table{};
    for (int i = 0; i < 256; ++i) {
        table[i] = i * i;
    }
    return table;
}();

// Per.19: Prefer contiguous data for cache-friendliness
std::vector<Point> points;                            // GOOD: contiguous
std::vector<std::unique_ptr<Point>> indirect_points;  // BAD: pointer chasing
```
### Anti-Patterns
- Optimizing without profiling data (Per.1, Per.6)
- Choosing "clever" low-level code over clear abstractions (Per.4, Per.5)
- Ignoring data layout and cache behavior (Per.19)
## Quick Reference Checklist
Before marking C++ work complete:
- [ ] No raw `new`/`delete` -- use smart pointers or RAII (R.11)
- [ ] Objects initialized at declaration (ES.20)
- [ ] Variables are `const`/`constexpr` by default (Con.1, ES.25)
- [ ] Member functions are `const` where possible (Con.2)
- [ ] `enum class` instead of plain `enum` (Enum.3)
- [ ] `nullptr` instead of `0`/`NULL` (ES.47)
- [ ] No narrowing conversions (ES.46)
- [ ] No C-style casts (ES.48)
- [ ] Single-argument constructors are `explicit` (C.46)
- [ ] Rule of Zero or Rule of Five applied (C.20, C.21)
- [ ] Base class destructors are public virtual or protected non-virtual (C.35)
- [ ] Templates are constrained with concepts (T.10)
- [ ] No `using namespace` in headers at global scope (SF.7)
- [ ] Headers have include guards and are self-contained (SF.8, SF.11)
- [ ] Locks use RAII (`scoped_lock`/`lock_guard`) (CP.20)
- [ ] Exceptions are custom types, thrown by value, caught by reference (E.14, E.15)
- [ ] `'\n'` instead of `std::endl` (SL.io.50)
- [ ] No magic numbers (ES.45)

---
name: regex-vs-llm-structured-text
description: Decision framework for choosing between regex and LLM when parsing structured text — start with regex, add LLM only for low-confidence edge cases.
---
# Regex vs LLM for Structured Text Parsing
A practical decision framework for parsing structured text (quizzes, forms, invoices, documents). The key insight: regex handles 95-98% of cases cheaply and deterministically. Reserve expensive LLM calls for the remaining edge cases.
## When to Activate
- Parsing structured text with repeating patterns (questions, forms, tables)
- Deciding between regex and LLM for text extraction
- Building hybrid pipelines that combine both approaches
- Optimizing cost/accuracy tradeoffs in text processing
## Decision Framework
```
Is the text format consistent and repeating?
├── Yes (>90% follows a pattern) → Start with Regex
│ ├── Regex handles 95%+ → Done, no LLM needed
│ └── Regex handles <95% → Add LLM for edge cases only
└── No (free-form, highly variable) → Use LLM directly
```
## Architecture Pattern
```
Source Text
[Regex Parser] ─── Extracts structure (95-98% accuracy)
[Text Cleaner] ─── Removes noise (markers, page numbers, artifacts)
[Confidence Scorer] ─── Flags low-confidence extractions
├── High confidence (≥0.95) → Direct output
└── Low confidence (<0.95) → [LLM Validator] → Output
```
## Implementation
### 1. Regex Parser (Handles the Majority)
```python
import re
from dataclasses import dataclass
@dataclass(frozen=True)
class ParsedItem:
    id: str
    text: str
    choices: tuple[str, ...]
    answer: str
    confidence: float = 1.0

def parse_structured_text(content: str) -> list[ParsedItem]:
    """Parse structured text using regex patterns."""
    pattern = re.compile(
        r"(?P<id>\d+)\.\s*(?P<text>.+?)\n"
        r"(?P<choices>(?:[A-D]\..+?\n)+)"
        r"Answer:\s*(?P<answer>[A-D])",
        re.MULTILINE | re.DOTALL,
    )
    items = []
    for match in pattern.finditer(content):
        choices = tuple(
            c.strip() for c in re.findall(r"[A-D]\.\s*(.+)", match.group("choices"))
        )
        items.append(ParsedItem(
            id=match.group("id"),
            text=match.group("text").strip(),
            choices=choices,
            answer=match.group("answer"),
        ))
    return items
```
### 2. Confidence Scoring
Flag items that may need LLM review:
```python
@dataclass(frozen=True)
class ConfidenceFlag:
    item_id: str
    score: float
    reasons: tuple[str, ...]

def score_confidence(item: ParsedItem) -> ConfidenceFlag:
    """Score extraction confidence and flag issues."""
    reasons = []
    score = 1.0
    if len(item.choices) < 3:
        reasons.append("few_choices")
        score -= 0.3
    if not item.answer:
        reasons.append("missing_answer")
        score -= 0.5
    if len(item.text) < 10:
        reasons.append("short_text")
        score -= 0.2
    return ConfidenceFlag(
        item_id=item.id,
        score=max(0.0, score),
        reasons=tuple(reasons),
    )

def identify_low_confidence(
    items: list[ParsedItem],
    threshold: float = 0.95,
) -> list[ConfidenceFlag]:
    """Return items below confidence threshold."""
    flags = [score_confidence(item) for item in items]
    return [f for f in flags if f.score < threshold]
```
### 3. LLM Validator (Edge Cases Only)
```python
def validate_with_llm(
    item: ParsedItem,
    original_text: str,
    client,
) -> ParsedItem:
    """Use LLM to fix low-confidence extractions."""
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",  # Cheapest model for validation
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                f"Extract the question, choices, and answer from this text.\n\n"
                f"Text: {original_text}\n\n"
                f"Current extraction: {item}\n\n"
                f"Return corrected JSON if needed, or 'CORRECT' if accurate."
            ),
        }],
    )
    reply = response.content[0].text.strip()
    if reply == "CORRECT":
        return item
    # Parse the corrected JSON into a new ParsedItem here; fall back to
    # the original extraction if parsing fails.
    return item
```
### 4. Hybrid Pipeline
```python
def process_document(
    content: str,
    *,
    llm_client=None,
    confidence_threshold: float = 0.95,
) -> list[ParsedItem]:
    """Full pipeline: regex -> confidence check -> LLM for edge cases."""
    # Step 1: Regex extraction (handles 95-98%)
    items = parse_structured_text(content)

    # Step 2: Confidence scoring
    low_confidence = identify_low_confidence(items, confidence_threshold)
    if not low_confidence or llm_client is None:
        return items

    # Step 3: LLM validation (only for flagged items)
    low_conf_ids = {f.item_id for f in low_confidence}
    result = []
    for item in items:
        if item.id in low_conf_ids:
            result.append(validate_with_llm(item, content, llm_client))
        else:
            result.append(item)
    return result
```
## Real-World Metrics
From a production quiz parsing pipeline (410 items):
| Metric | Value |
|--------|-------|
| Regex success rate | 98.0% |
| Low confidence items | 8 (2.0%) |
| LLM calls needed | ~5 |
| Cost savings vs all-LLM | ~95% |
| Test coverage | 93% |
## Best Practices
- **Start with regex** — even imperfect regex gives you a baseline to improve
- **Use confidence scoring** to programmatically identify what needs LLM help
- **Use the cheapest LLM** for validation (Haiku-class models are sufficient)
- **Never mutate** parsed items — return new instances from cleaning/validation steps
- **TDD works well** for parsers — write tests for known patterns first, then edge cases
- **Log metrics** (regex success rate, LLM call count) to track pipeline health
## Anti-Patterns to Avoid
- Sending all text to an LLM when regex handles 95%+ of cases (expensive and slow)
- Using regex for free-form, highly variable text (LLM is better here)
- Skipping confidence scoring and hoping regex "just works"
- Mutating parsed objects during cleaning/validation steps
- Not testing edge cases (malformed input, missing fields, encoding issues)
## When to Use
- Quiz/exam question parsing
- Form data extraction
- Invoice/receipt processing
- Document structure parsing (headers, sections, tables)
- Any structured text with repeating patterns where cost matters

---
name: search-first
description: Research-before-coding workflow. Search for existing tools, libraries, and patterns before writing custom code. Invokes the researcher agent.
---
# /search-first — Research Before You Code
Systematizes the "search for existing solutions before implementing" workflow.
## Trigger
Use this skill when:
- Starting a new feature that likely has existing solutions
- Adding a dependency or integration
- The user asks "add X functionality" and you're about to write code
- Before creating a new utility, helper, or abstraction
## Workflow
```
┌─────────────────────────────────────────────┐
│ 1. NEED ANALYSIS │
│ Define what functionality is needed │
│ Identify language/framework constraints │
├─────────────────────────────────────────────┤
│ 2. PARALLEL SEARCH (researcher agent) │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ npm / │ │ MCP / │ │ GitHub / │ │
│ │ PyPI │ │ Skills │ │ Web │ │
│ └──────────┘ └──────────┘ └──────────┘ │
├─────────────────────────────────────────────┤
│ 3. EVALUATE │
│ Score candidates (functionality, maint, │
│ community, docs, license, deps) │
├─────────────────────────────────────────────┤
│ 4. DECIDE │
│ ┌─────────┐ ┌──────────┐ ┌─────────┐ │
│ │ Adopt │ │ Extend │ │ Build │ │
│ │ as-is │ │ /Wrap │ │ Custom │ │
│ └─────────┘ └──────────┘ └─────────┘ │
├─────────────────────────────────────────────┤
│ 5. IMPLEMENT │
│ Install package / Configure MCP / │
│ Write minimal custom code │
└─────────────────────────────────────────────┘
```
## Decision Matrix
| Signal | Action |
|--------|--------|
| Exact match, well-maintained, MIT/Apache | **Adopt** — install and use directly |
| Partial match, good foundation | **Extend** — install + write thin wrapper |
| Multiple weak matches | **Compose** — combine 2-3 small packages |
| Nothing suitable found | **Build** — write custom, but informed by research |
## How to Use
### Quick Mode (inline)
Before writing a utility or adding functionality, mentally run through:
1. Is this a common problem? → Search npm/PyPI
2. Is there an MCP for this? → Check `~/.claude/settings.json` and search
3. Is there a skill for this? → Check `~/.claude/skills/`
4. Is there a GitHub template? → Search GitHub
### Full Mode (agent)
For non-trivial functionality, launch the researcher agent:
```
Task(subagent_type="general-purpose", prompt="
Research existing tools for: [DESCRIPTION]
Language/framework: [LANG]
Constraints: [ANY]
Search: npm/PyPI, MCP servers, Claude Code skills, GitHub
Return: Structured comparison with recommendation
")
```
## Search Shortcuts by Category
### Development Tooling
- Linting → `eslint`, `ruff`, `textlint`, `markdownlint`
- Formatting → `prettier`, `black`, `gofmt`
- Testing → `jest`, `pytest`, `go test`
- Pre-commit → `husky`, `lint-staged`, `pre-commit`
### AI/LLM Integration
- Claude SDK → Context7 for latest docs
- Prompt management → Check MCP servers
- Document processing → `unstructured`, `pdfplumber`, `mammoth`
### Data & APIs
- HTTP clients → `httpx` (Python), `ky`/`got` (Node)
- Validation → `zod` (TS), `pydantic` (Python)
- Database → Check for MCP servers first
### Content & Publishing
- Markdown processing → `remark`, `unified`, `markdown-it`
- Image optimization → `sharp`, `imagemin`
## Integration Points
### With planner agent
The planner should invoke researcher before Phase 1 (Architecture Review):
- Researcher identifies available tools
- Planner incorporates them into the implementation plan
- Avoids "reinventing the wheel" in the plan
### With architect agent
The architect should consult researcher for:
- Technology stack decisions
- Integration pattern discovery
- Existing reference architectures
### With iterative-retrieval skill
Combine for progressive discovery:
- Cycle 1: Broad search (npm, PyPI, MCP)
- Cycle 2: Evaluate top candidates in detail
- Cycle 3: Test compatibility with project constraints
## Examples
### Example 1: "Add dead link checking"
```
Need: Check markdown files for broken links
Search: npm "markdown dead link checker"
Found: textlint-rule-no-dead-link (score: 9/10)
Action: ADOPT — npm install textlint-rule-no-dead-link
Result: Zero custom code, battle-tested solution
```
### Example 2: "Add HTTP client wrapper"
```
Need: Resilient HTTP client with retries and timeout handling
Search: npm "http client retry", PyPI "httpx retry"
Found: got (Node) with retry plugin, httpx (Python) with built-in retry
Action: ADOPT — use got/httpx directly with retry config
Result: Zero custom code, production-proven libraries
```
### Example 3: "Add config file linter"
```
Need: Validate project config files against a schema
Search: npm "config linter schema", "json schema validator cli"
Found: ajv-cli (score: 8/10)
Action: ADOPT + EXTEND — install ajv-cli, write project-specific schema
Result: 1 package + 1 schema file, no custom validation logic
```
## Anti-Patterns
- **Jumping to code**: Writing a utility without checking if one exists
- **Ignoring MCP**: Not checking if an MCP server already provides the capability
- **Over-customizing**: Wrapping a library so heavily it loses its benefits
- **Dependency bloat**: Installing a massive package for one small feature

---
description: "Use when auditing Claude skills and commands for quality. Supports Quick Scan (changed skills only) and Full Stocktake modes with sequential subagent batch evaluation."
---
# skill-stocktake
Slash command (`/skill-stocktake`) that audits all Claude skills and commands using a quality checklist + AI holistic judgment. Supports two modes: Quick Scan for recently changed skills, and Full Stocktake for a complete review.
## Scope
The command scans the following paths; the project-level path is resolved **relative to the directory where the command is invoked**:
| Path | Description |
|------|-------------|
| `~/.claude/skills/` | Global skills (all projects) |
| `{cwd}/.claude/skills/` | Project-level skills (if the directory exists) |
**At the start of Phase 1, the command explicitly lists which paths were found and scanned.**
### Targeting a specific project
To include project-level skills, run from that project's root directory:
```bash
cd ~/path/to/my-project
/skill-stocktake
```
If the project has no `.claude/skills/` directory, only global skills and commands are evaluated.
## Modes
| Mode | Trigger | Duration |
|------|---------|---------|
| Quick Scan | `results.json` exists (default) | 5-10 min |
| Full Stocktake | `results.json` absent, or `/skill-stocktake full` | 20-30 min |
**Results cache:** `~/.claude/skills/skill-stocktake/results.json`
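The mode-selection rule above can be sketched as a small shell check (function name and argument handling are illustrative, not the command's actual implementation):

```bash
# Illustrative sketch: Quick Scan when the results cache exists,
# Full Stocktake otherwise or when "full" is passed explicitly.
select_mode() {
  local results="$1" arg="${2:-}"
  if [[ "$arg" == "full" || ! -f "$results" ]]; then
    echo "full"
  else
    echo "quick"
  fi
}

cache=$(mktemp)              # pretend a results cache exists
select_mode "$cache"         # -> quick
select_mode "$cache" full    # -> full (explicit override)
rm -f "$cache"
select_mode "$cache"         # -> full (no cache yet)
```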
## Quick Scan Flow
Re-evaluate only skills that have changed since the last run (5-10 min).
1. Read `~/.claude/skills/skill-stocktake/results.json`
2. Run: `bash ~/.claude/skills/skill-stocktake/scripts/quick-diff.sh \
~/.claude/skills/skill-stocktake/results.json`
(Project dir is auto-detected from `$PWD/.claude/skills`; pass it explicitly only if needed)
3. If output is `[]`: report "No changes since last run." and stop
4. Re-evaluate only those changed files using the same Phase 2 criteria
5. Carry forward unchanged skills from previous results
6. Output only the diff
7. Run: `bash ~/.claude/skills/skill-stocktake/scripts/save-results.sh \
~/.claude/skills/skill-stocktake/results.json <<< "$EVAL_RESULTS"`
## Full Stocktake Flow
### Phase 1 — Inventory
Run: `bash ~/.claude/skills/skill-stocktake/scripts/scan.sh`
The script enumerates skill files, extracts frontmatter, and collects UTC mtimes.
Project dir is auto-detected from `$PWD/.claude/skills`; pass it explicitly only if needed.
Present the scan summary and inventory table from the script output:
```
Scanning:
✓ ~/.claude/skills/ (17 files)
✗ {cwd}/.claude/skills/ (not found — global skills only)
```
| Skill | 7d use | 30d use | Description |
|-------|--------|---------|-------------|
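The inventory rows can be derived from the scan.sh JSON with a one-line jq filter; the sample input below is inlined for illustration and mirrors the fields scan.sh emits:

```bash
# Sketch: render inventory table rows from scan.sh-style JSON (sample inlined)
cat > /tmp/scan-sample.json <<'EOF'
{ "skills": [
  { "name": "search-first", "use_7d": 3, "use_30d": 9, "description": "Search before building" }
] }
EOF
jq -r '.skills[] | "| \(.name) | \(.use_7d) | \(.use_30d) | \(.description) |"' \
  /tmp/scan-sample.json
```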
### Phase 2 — Quality Evaluation
Launch a Task tool subagent (**Explore agent, model: opus**) with the full inventory and checklist.
The subagent reads each skill, applies the checklist, and returns per-skill JSON:
`{ "verdict": "Keep"|"Improve"|"Update"|"Retire"|"Merge into [X]", "reason": "..." }`
**Chunk guidance:** Process ~20 skills per subagent invocation to keep context manageable. Save intermediate results to `results.json` (`status: "in_progress"`) after each chunk.
After all skills are evaluated: set `status: "completed"`, proceed to Phase 3.
**Resume detection:** If `status: "in_progress"` is found on startup, resume from the first unevaluated skill.
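A minimal sketch of that resume check, assuming the `batch_progress` shape from the Results File Schema section (sample data inlined; the path is illustrative):

```bash
# Sketch: resume detection against batch_progress.status (sample data inlined)
cat > /tmp/stocktake-progress.json <<'EOF'
{ "batch_progress": { "total": 80, "evaluated": 40, "status": "in_progress" } }
EOF
status=$(jq -r '.batch_progress.status // "none"' /tmp/stocktake-progress.json)
if [[ "$status" == "in_progress" ]]; then
  evaluated=$(jq -r '.batch_progress.evaluated' /tmp/stocktake-progress.json)
  echo "resume from skill index $evaluated"
else
  echo "start fresh"
fi
```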
Each skill is evaluated against this checklist:
```
- [ ] Content overlap with other skills checked
- [ ] Overlap with MEMORY.md / CLAUDE.md checked
- [ ] Freshness of technical references verified (use WebSearch if tool names / CLI flags / APIs are present)
- [ ] Usage frequency considered
```
Verdict criteria:
| Verdict | Meaning |
|---------|---------|
| Keep | Useful and current |
| Improve | Worth keeping, but specific improvements needed |
| Update | Referenced technology is outdated (verify with WebSearch) |
| Retire | Low quality, stale, or cost-asymmetric |
| Merge into [X] | Substantial overlap with another skill; name the merge target |
Evaluation is **holistic AI judgment** — not a numeric rubric. Guiding dimensions:
- **Actionability**: code examples, commands, or steps that let you act immediately
- **Scope fit**: name, trigger, and content are aligned; not too broad or narrow
- **Uniqueness**: value not replaceable by MEMORY.md / CLAUDE.md / another skill
- **Currency**: technical references work in the current environment
**Reason quality requirements** — the `reason` field must be self-contained and decision-enabling:
- Do NOT write "unchanged" alone — always restate the core evidence
- For **Retire**: state (1) what specific defect was found, (2) what covers the same need instead
- Bad: `"Superseded"`
- Good: `"disable-model-invocation: true already set; superseded by continuous-learning-v2 which covers all the same patterns plus confidence scoring. No unique content remains."`
- For **Merge**: name the target and describe what content to integrate
- Bad: `"Overlaps with X"`
- Good: `"42-line thin content; Step 4 of chatlog-to-article already covers the same workflow. Integrate the 'article angle' tip as a note in that skill."`
- For **Improve**: describe the specific change needed (what section, what action, target size if relevant)
- Bad: `"Too long"`
- Good: `"276 lines; Section 'Framework Comparison' (L80-140) duplicates ai-era-architecture-principles; delete it to reach ~150 lines."`
- For **Keep** (mtime-only change in Quick Scan): restate the original verdict rationale, do not write "unchanged"
- Bad: `"Unchanged"`
- Good: `"mtime updated but content unchanged. Unique Python reference explicitly imported by rules/python/; no overlap found."`
### Phase 3 — Summary Table
| Skill | 7d use | Verdict | Reason |
|-------|--------|---------|--------|
### Phase 4 — Consolidation
1. **Retire / Merge**: present detailed justification per file before confirming with user:
- What specific problem was found (overlap, staleness, broken references, etc.)
- What alternative covers the same functionality (for Retire: which existing skill/rule; for Merge: the target file and what content to integrate)
- Impact of removal (any dependent skills, MEMORY.md references, or workflows affected)
2. **Improve**: present specific improvement suggestions with rationale:
- What to change and why (e.g., "trim 430→200 lines because sections X/Y duplicate python-patterns")
- User decides whether to act
3. **Update**: present updated content with sources checked
4. Check MEMORY.md line count; propose compression if >100 lines
## Results File Schema
`~/.claude/skills/skill-stocktake/results.json`:
**`evaluated_at`**: Must be set to the actual UTC time of evaluation completion.
Obtain via Bash: `date -u +%Y-%m-%dT%H:%M:%SZ`. Never use a date-only approximation like `T00:00:00Z`.
```json
{
"evaluated_at": "2026-02-21T10:00:00Z",
"mode": "full",
"batch_progress": {
"total": 80,
"evaluated": 80,
"status": "completed"
},
"skills": {
"skill-name": {
"path": "~/.claude/skills/skill-name/SKILL.md",
"verdict": "Keep",
"reason": "Concrete, actionable, unique value for X workflow",
"mtime": "2026-01-15T08:30:00Z"
}
}
}
```
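Given that schema, Phase 3/4 follow-ups can be pulled straight from the cache with jq. For example, listing every skill whose verdict needs action (sample file inlined; the filter is illustrative):

```bash
# Sketch: list skills needing action (verdict != "Keep") from a results.json
cat > /tmp/results-sample.json <<'EOF'
{
  "evaluated_at": "2026-02-21T10:00:00Z",
  "skills": {
    "alpha": { "verdict": "Keep",   "reason": "current and unique" },
    "beta":  { "verdict": "Retire", "reason": "superseded by alpha" }
  }
}
EOF
jq -r '.skills | to_entries[] | select(.value.verdict != "Keep")
       | "\(.key): \(.value.verdict) (\(.value.reason))"' /tmp/results-sample.json
```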
## Notes
- Evaluation is blind: the same checklist applies to all skills regardless of origin (ECC, self-authored, auto-extracted)
- Archive / delete operations always require explicit user confirmation
- No verdict branching by skill origin


@@ -0,0 +1,87 @@
#!/usr/bin/env bash
# quick-diff.sh — compare skill file mtimes against results.json evaluated_at
# Usage: quick-diff.sh RESULTS_JSON [CWD_SKILLS_DIR]
# Output: JSON array of changed/new files to stdout (empty [] if no changes)
#
# When CWD_SKILLS_DIR is omitted, defaults to $PWD/.claude/skills so the
# script always picks up project-level skills without relying on the caller.
#
# Environment:
# SKILL_STOCKTAKE_GLOBAL_DIR Override ~/.claude/skills (for testing only;
# do not set in production — intended for bats tests)
# SKILL_STOCKTAKE_PROJECT_DIR Override project dir detection (for testing only)
set -euo pipefail
RESULTS_JSON="${1:-}"
CWD_SKILLS_DIR="${SKILL_STOCKTAKE_PROJECT_DIR:-${2:-$PWD/.claude/skills}}"
GLOBAL_DIR="${SKILL_STOCKTAKE_GLOBAL_DIR:-$HOME/.claude/skills}"
if [[ -z "$RESULTS_JSON" || ! -f "$RESULTS_JSON" ]]; then
echo "Error: RESULTS_JSON not found: ${RESULTS_JSON:-<empty>}" >&2
exit 1
fi
# Validate CWD_SKILLS_DIR looks like a .claude/skills path (defense-in-depth).
# Only warn when the path exists — a nonexistent path poses no traversal risk.
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" && "$CWD_SKILLS_DIR" != */.claude/skills* ]]; then
echo "Warning: CWD_SKILLS_DIR does not look like a .claude/skills path: $CWD_SKILLS_DIR" >&2
fi
evaluated_at=$(jq -r '.evaluated_at' "$RESULTS_JSON")
# Fail fast on a missing or malformed evaluated_at rather than producing
# unpredictable results from ISO 8601 string comparison against "null".
if [[ ! "$evaluated_at" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$ ]]; then
echo "Error: invalid or missing evaluated_at in $RESULTS_JSON: $evaluated_at" >&2
exit 1
fi
# Pre-extract known paths from results.json once (O(1) lookup per file instead of O(n*m))
known_paths=$(jq -r '(.skills // {})[].path' "$RESULTS_JSON" 2>/dev/null || true)
tmpdir=$(mktemp -d)
# Use a function to avoid embedding $tmpdir in a quoted string (prevents injection
# if TMPDIR were crafted to contain shell metacharacters).
_cleanup() { rm -rf "$tmpdir"; }
trap _cleanup EXIT
# Shared counter across process_dir calls — intentionally NOT local
i=0
process_dir() {
local dir="$1"
while IFS= read -r file; do
local mtime dp is_new
# GNU date accepts -r FILE; BSD/macOS date -r expects an epoch, so fall back to stat -f %m
mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
date -u -r "$(stat -f %m "$file")" +%Y-%m-%dT%H:%M:%SZ)
dp="${file/#$HOME/~}"
# Check if this file is known to results.json (exact whole-line match to
# avoid substring false-positives, e.g. "python-patterns" matching "python-patterns-v2").
if echo "$known_paths" | grep -qxF "$dp"; then
is_new="false"
# Known file: only emit if mtime changed (ISO 8601 string comparison is safe)
[[ "$mtime" > "$evaluated_at" ]] || continue
else
is_new="true"
# New file: always emit regardless of mtime
fi
jq -n \
--arg path "$dp" \
--arg mtime "$mtime" \
--argjson is_new "$is_new" \
'{path:$path,mtime:$mtime,is_new:$is_new}' \
> "$tmpdir/$i.json"
i=$((i+1))
done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)
}
[[ -d "$GLOBAL_DIR" ]] && process_dir "$GLOBAL_DIR"
[[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" ]] && process_dir "$CWD_SKILLS_DIR"
if [[ $i -eq 0 ]]; then
echo "[]"
else
jq -s '.' "$tmpdir"/*.json
fi


@@ -0,0 +1,56 @@
#!/usr/bin/env bash
# save-results.sh — merge evaluated skills into results.json with correct UTC timestamp
# Usage: save-results.sh RESULTS_JSON <<< "$EVAL_JSON"
#
# stdin format:
# { "skills": {...}, "mode"?: "full"|"quick", "batch_progress"?: {...} }
#
# Always sets evaluated_at to current UTC time via `date -u`.
# Merges stdin .skills into existing results.json (new entries override old).
# Optionally updates .mode and .batch_progress if present in stdin.
set -euo pipefail
RESULTS_JSON="${1:-}"
if [[ -z "$RESULTS_JSON" ]]; then
echo "Error: RESULTS_JSON argument required" >&2
echo "Usage: save-results.sh RESULTS_JSON <<< \"\$EVAL_JSON\"" >&2
exit 1
fi
EVALUATED_AT=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Read eval results from stdin and validate JSON before touching the results file
input_json=$(cat)
if ! echo "$input_json" | jq empty 2>/dev/null; then
echo "Error: stdin is not valid JSON" >&2
exit 1
fi
if [[ ! -f "$RESULTS_JSON" ]]; then
# Bootstrap: create new results.json from stdin JSON + current UTC timestamp
echo "$input_json" | jq --arg ea "$EVALUATED_AT" \
'. + { evaluated_at: $ea }' > "$RESULTS_JSON"
exit 0
fi
# Merge: new .skills override existing ones; old skills not in input_json are kept.
# Optionally update .mode and .batch_progress if provided.
#
# Use mktemp for a collision-safe temp file (concurrent runs on the same RESULTS_JSON
# would race on a predictable ".tmp" suffix; random suffix prevents silent overwrites).
tmp=$(mktemp "${RESULTS_JSON}.XXXXXX")
trap 'rm -f "$tmp"' EXIT
jq -s \
--arg ea "$EVALUATED_AT" \
'.[0] as $existing | .[1] as $new |
$existing |
.evaluated_at = $ea |
.skills = ($existing.skills + ($new.skills // {})) |
if ($new | has("mode")) then .mode = $new.mode else . end |
if ($new | has("batch_progress")) then .batch_progress = $new.batch_progress else . end' \
"$RESULTS_JSON" <(echo "$input_json") > "$tmp"
mv "$tmp" "$RESULTS_JSON"


@@ -0,0 +1,170 @@
#!/usr/bin/env bash
# scan.sh — enumerate skill files, extract frontmatter and UTC mtime
# Usage: scan.sh [CWD_SKILLS_DIR]
# Output: JSON to stdout
#
# When CWD_SKILLS_DIR is omitted, defaults to $PWD/.claude/skills so the
# script always picks up project-level skills without relying on the caller.
#
# Environment:
# SKILL_STOCKTAKE_GLOBAL_DIR Override ~/.claude/skills (for testing only;
# do not set in production — intended for bats tests)
# SKILL_STOCKTAKE_PROJECT_DIR Override project dir detection (for testing only)
set -euo pipefail
GLOBAL_DIR="${SKILL_STOCKTAKE_GLOBAL_DIR:-$HOME/.claude/skills}"
CWD_SKILLS_DIR="${SKILL_STOCKTAKE_PROJECT_DIR:-${1:-$PWD/.claude/skills}}"
# Path to JSONL file containing tool-use observations (optional; used for usage frequency counts).
# Override via SKILL_STOCKTAKE_OBSERVATIONS env var if your setup uses a different path.
OBSERVATIONS="${SKILL_STOCKTAKE_OBSERVATIONS:-$HOME/.claude/observations.jsonl}"
# Validate CWD_SKILLS_DIR looks like a .claude/skills path (defense-in-depth).
# Only warn when the path exists — a nonexistent path poses no traversal risk.
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" && "$CWD_SKILLS_DIR" != */.claude/skills* ]]; then
echo "Warning: CWD_SKILLS_DIR does not look like a .claude/skills path: $CWD_SKILLS_DIR" >&2
fi
# Extract a frontmatter field (handles both quoted and unquoted single-line values).
# Does NOT support multi-line YAML blocks (| or >) or nested YAML keys.
extract_field() {
local file="$1" field="$2"
awk -v f="$field" '
BEGIN { fm=0 }
/^---$/ { fm++; next }
fm==1 {
n = length(f) + 2
if (substr($0, 1, n) == f ": ") {
val = substr($0, n+1)
gsub(/^"/, "", val)
gsub(/"$/, "", val)
print val
exit
}
}
fm>=2 { exit }
' "$file"
}
# Get UTC timestamp N days ago (supports both macOS and GNU date)
date_ago() {
local n="$1"
date -u -v-"${n}d" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
date -u -d "${n} days ago" +%Y-%m-%dT%H:%M:%SZ
}
# Count observations matching a file path since a cutoff timestamp
count_obs() {
local file="$1" cutoff="$2"
if [[ ! -f "$OBSERVATIONS" ]]; then
echo 0
return
fi
jq -r --arg p "$file" --arg c "$cutoff" \
'select(.tool=="Read" and .path==$p and .timestamp>=$c) | 1' \
"$OBSERVATIONS" 2>/dev/null | wc -l | tr -d ' '
}
# Scan a directory and produce a JSON array of skill objects
scan_dir_to_json() {
local dir="$1"
local c7 c30
c7=$(date_ago 7)
c30=$(date_ago 30)
local tmpdir
tmpdir=$(mktemp -d)
# Use a function to avoid embedding $tmpdir in a quoted string (prevents injection
# if TMPDIR were crafted to contain shell metacharacters).
local _scan_tmpdir="$tmpdir"
_scan_cleanup() { rm -rf "$_scan_tmpdir"; }
trap _scan_cleanup RETURN
# Pre-aggregate observation counts in two passes (one per window) instead of
# calling jq per-file — reduces from O(n*m) to O(n+m) jq invocations.
local obs_7d_counts obs_30d_counts
obs_7d_counts=""
obs_30d_counts=""
if [[ -f "$OBSERVATIONS" ]]; then
obs_7d_counts=$(jq -r --arg c "$c7" \
'select(.tool=="Read" and .timestamp>=$c) | .path' \
"$OBSERVATIONS" 2>/dev/null | sort | uniq -c)
obs_30d_counts=$(jq -r --arg c "$c30" \
'select(.tool=="Read" and .timestamp>=$c) | .path' \
"$OBSERVATIONS" 2>/dev/null | sort | uniq -c)
fi
local i=0
while IFS= read -r file; do
local name desc mtime u7 u30 dp
name=$(extract_field "$file" "name")
desc=$(extract_field "$file" "description")
# GNU date accepts -r FILE; BSD/macOS date -r expects an epoch, so fall back to stat -f %m
mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
date -u -r "$(stat -f %m "$file")" +%Y-%m-%dT%H:%M:%SZ)
# Use awk exact field match to avoid substring false-positives from grep -F.
# uniq -c output format: " N /path/to/file" — path is always field 2.
u7=$(echo "$obs_7d_counts" | awk -v f="$file" '$2 == f {print $1}' | head -1)
u7="${u7:-0}"
u30=$(echo "$obs_30d_counts" | awk -v f="$file" '$2 == f {print $1}' | head -1)
u30="${u30:-0}"
dp="${file/#$HOME/~}"
jq -n \
--arg path "$dp" \
--arg name "$name" \
--arg description "$desc" \
--arg mtime "$mtime" \
--argjson use_7d "$u7" \
--argjson use_30d "$u30" \
'{path:$path,name:$name,description:$description,use_7d:$use_7d,use_30d:$use_30d,mtime:$mtime}' \
> "$tmpdir/$i.json"
i=$((i+1))
done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)
if [[ $i -eq 0 ]]; then
echo "[]"
else
jq -s '.' "$tmpdir"/*.json
fi
}
# --- Main ---
global_found="false"
global_count=0
global_skills="[]"
if [[ -d "$GLOBAL_DIR" ]]; then
global_found="true"
global_skills=$(scan_dir_to_json "$GLOBAL_DIR")
global_count=$(echo "$global_skills" | jq 'length')
fi
project_found="false"
project_path=""
project_count=0
project_skills="[]"
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" ]]; then
project_found="true"
project_path="$CWD_SKILLS_DIR"
project_skills=$(scan_dir_to_json "$CWD_SKILLS_DIR")
project_count=$(echo "$project_skills" | jq 'length')
fi
# Merge global + project skills into one array
all_skills=$(jq -s 'add' <(echo "$global_skills") <(echo "$project_skills"))
jq -n \
--arg global_found "$global_found" \
--argjson global_count "$global_count" \
--arg project_found "$project_found" \
--arg project_path "$project_path" \
--argjson project_count "$project_count" \
--argjson skills "$all_skills" \
'{
scan_summary: {
global: { found: ($global_found == "true"), count: $global_count },
project: { found: ($project_found == "true"), path: $project_path, count: $project_count }
},
skills: $skills
}'


@@ -0,0 +1,142 @@
---
name: swift-actor-persistence
description: Thread-safe data persistence in Swift using actors — in-memory cache with file-backed storage, eliminating data races by design.
---
# Swift Actors for Thread-Safe Persistence
Patterns for building thread-safe data persistence layers using Swift actors. Combines in-memory caching with file-backed storage, leveraging the actor model to eliminate data races at compile time.
## When to Activate
- Building a data persistence layer in Swift 5.5+
- Need thread-safe access to shared mutable state
- Want to eliminate manual synchronization (locks, DispatchQueues)
- Building offline-first apps with local storage
## Core Pattern
### Actor-Based Repository
The actor model guarantees serialized access — no data races, enforced by the compiler.
```swift
public actor LocalRepository<T: Codable & Identifiable> where T.ID == String {
private var cache: [String: T] = [:]
private let fileURL: URL
public init(directory: URL = .documentsDirectory, filename: String = "data.json") {
self.fileURL = directory.appendingPathComponent(filename)
// Synchronous load during init (actor isolation not yet active)
self.cache = Self.loadSynchronously(from: fileURL)
}
// MARK: - Public API
public func save(_ item: T) throws {
cache[item.id] = item
try persistToFile()
}
public func delete(_ id: String) throws {
cache[id] = nil
try persistToFile()
}
public func find(by id: String) -> T? {
cache[id]
}
public func loadAll() -> [T] {
Array(cache.values)
}
// MARK: - Private
private func persistToFile() throws {
let data = try JSONEncoder().encode(Array(cache.values))
try data.write(to: fileURL, options: .atomic)
}
private static func loadSynchronously(from url: URL) -> [String: T] {
guard let data = try? Data(contentsOf: url),
let items = try? JSONDecoder().decode([T].self, from: data) else {
return [:]
}
return Dictionary(uniqueKeysWithValues: items.map { ($0.id, $0) })
}
}
```
### Usage
All calls are automatically async due to actor isolation:
```swift
let repository = LocalRepository<Question>()
// Read: fast O(1) lookup from in-memory cache
let question = await repository.find(by: "q-001")
let allQuestions = await repository.loadAll()
// Write: updates the cache and persists to the file atomically
try await repository.save(newQuestion)
try await repository.delete("q-001")
```
### Combining with @Observable ViewModel
```swift
@Observable
final class QuestionListViewModel {
private(set) var questions: [Question] = []
private let repository: LocalRepository<Question>
init(repository: LocalRepository<Question> = LocalRepository()) {
self.repository = repository
}
func load() async {
questions = await repository.loadAll()
}
func add(_ question: Question) async throws {
try await repository.save(question)
questions = await repository.loadAll()
}
}
```
## Key Design Decisions
| Decision | Rationale |
|----------|-----------|
| Actor (not class + lock) | Compiler-enforced thread safety, no manual synchronization |
| In-memory cache + file persistence | Fast reads from cache, durable writes to disk |
| Synchronous init loading | Avoids async initialization complexity |
| Dictionary keyed by ID | O(1) lookups by identifier |
| Generic over `Codable & Identifiable` | Reusable across any model type |
| Atomic file writes (`.atomic`) | Prevents partial writes on crash |
## Best Practices
- **Use `Sendable` types** for all data crossing actor boundaries
- **Keep the actor's public API minimal** — only expose domain operations, not persistence details
- **Use `.atomic` writes** to prevent data corruption if the app crashes mid-write
- **Load synchronously in `init`** — async initializers add complexity with minimal benefit for local files
- **Combine with `@Observable`** ViewModels for reactive UI updates
## Anti-Patterns to Avoid
- Using `DispatchQueue` or `NSLock` instead of actors for new Swift concurrency code
- Exposing the internal cache dictionary to external callers
- Making the file URL configurable without validation
- Forgetting that all actor method calls are `await` — callers must handle async context
- Using `nonisolated` to bypass actor isolation (defeats the purpose)
## When to Use
- Local data storage in iOS/macOS apps (user data, settings, cached content)
- Offline-first architectures that sync to a server later
- Any shared mutable state that multiple parts of the app access concurrently
- Replacing legacy `DispatchQueue`-based thread safety with modern Swift concurrency


@@ -0,0 +1,189 @@
---
name: swift-protocol-di-testing
description: Protocol-based dependency injection for testable Swift code — mock file system, network, and external APIs using focused protocols and Swift Testing.
---
# Swift Protocol-Based Dependency Injection for Testing
Patterns for making Swift code testable by abstracting external dependencies (file system, network, iCloud) behind small, focused protocols. Enables deterministic tests without I/O.
## When to Activate
- Writing Swift code that accesses file system, network, or external APIs
- Need to test error handling paths without triggering real failures
- Building modules that work across environments (app, test, SwiftUI preview)
- Designing testable architecture with Swift concurrency (actors, Sendable)
## Core Pattern
### 1. Define Small, Focused Protocols
Each protocol handles exactly one external concern.
```swift
// File system access
public protocol FileSystemProviding: Sendable {
func containerURL(for purpose: Purpose) -> URL?
}
// File read/write operations
public protocol FileAccessorProviding: Sendable {
func read(from url: URL) throws -> Data
func write(_ data: Data, to url: URL) throws
func fileExists(at url: URL) -> Bool
}
// Bookmark storage (e.g., for sandboxed apps)
public protocol BookmarkStorageProviding: Sendable {
func saveBookmark(_ data: Data, for key: String) throws
func loadBookmark(for key: String) throws -> Data?
}
```
### 2. Create Default (Production) Implementations
```swift
public struct DefaultFileSystemProvider: FileSystemProviding {
public init() {}
public func containerURL(for purpose: Purpose) -> URL? {
FileManager.default.url(forUbiquityContainerIdentifier: nil)
}
}
public struct DefaultFileAccessor: FileAccessorProviding {
public init() {}
public func read(from url: URL) throws -> Data {
try Data(contentsOf: url)
}
public func write(_ data: Data, to url: URL) throws {
try data.write(to: url, options: .atomic)
}
public func fileExists(at url: URL) -> Bool {
FileManager.default.fileExists(atPath: url.path)
}
}
```
### 3. Create Mock Implementations for Testing
```swift
public final class MockFileAccessor: FileAccessorProviding, @unchecked Sendable {
public var files: [URL: Data] = [:]
public var readError: Error?
public var writeError: Error?
public init() {}
public func read(from url: URL) throws -> Data {
if let error = readError { throw error }
guard let data = files[url] else {
throw CocoaError(.fileReadNoSuchFile)
}
return data
}
public func write(_ data: Data, to url: URL) throws {
if let error = writeError { throw error }
files[url] = data
}
public func fileExists(at url: URL) -> Bool {
files[url] != nil
}
}
```
### 4. Inject Dependencies with Default Parameters
Production code uses defaults; tests inject mocks.
```swift
public actor SyncManager {
private let fileSystem: FileSystemProviding
private let fileAccessor: FileAccessorProviding
public init(
fileSystem: FileSystemProviding = DefaultFileSystemProvider(),
fileAccessor: FileAccessorProviding = DefaultFileAccessor()
) {
self.fileSystem = fileSystem
self.fileAccessor = fileAccessor
}
public func sync() async throws {
guard let containerURL = fileSystem.containerURL(for: .sync) else {
throw SyncError.containerNotAvailable
}
let data = try fileAccessor.read(
from: containerURL.appendingPathComponent("data.json")
)
// Process data...
}
}
```
### 5. Write Tests with Swift Testing
```swift
import Testing
@Test("Sync manager handles missing container")
func testMissingContainer() async {
let mockFileSystem = MockFileSystemProvider(containerURL: nil)
let manager = SyncManager(fileSystem: mockFileSystem)
await #expect(throws: SyncError.containerNotAvailable) {
try await manager.sync()
}
}
@Test("Sync manager reads data correctly")
func testReadData() async throws {
let mockFileAccessor = MockFileAccessor()
mockFileAccessor.files[testURL] = testData
let manager = SyncManager(fileAccessor: mockFileAccessor)
let result = try await manager.loadData()
#expect(result == expectedData)
}
@Test("Sync manager handles read errors gracefully")
func testReadError() async {
let mockFileAccessor = MockFileAccessor()
mockFileAccessor.readError = CocoaError(.fileReadCorruptFile)
let manager = SyncManager(fileAccessor: mockFileAccessor)
await #expect(throws: SyncError.self) {
try await manager.sync()
}
}
```
## Best Practices
- **Single Responsibility**: Each protocol should handle one concern — don't create "god protocols" with many methods
- **Sendable conformance**: Required when protocols are used across actor boundaries
- **Default parameters**: Let production code use real implementations by default; only tests need to specify mocks
- **Error simulation**: Design mocks with configurable error properties for testing failure paths
- **Only mock boundaries**: Mock external dependencies (file system, network, APIs), not internal types
## Anti-Patterns to Avoid
- Creating a single large protocol that covers all external access
- Mocking internal types that have no external dependencies
- Using `#if DEBUG` conditionals instead of proper dependency injection
- Forgetting `Sendable` conformance when used with actors
- Over-engineering: if a type has no external dependencies, it doesn't need a protocol
## When to Use
- Any Swift code that touches file system, network, or external APIs
- Testing error handling paths that are hard to trigger in real environments
- Building modules that need to work in app, test, and SwiftUI preview contexts
- Apps using Swift concurrency (actors, structured concurrency) that need testable architecture


@@ -0,0 +1,419 @@
/**
* Tests for scripts/hooks/evaluate-session.js
*
* Tests the session evaluation threshold logic, config loading,
* and stdin parsing. Uses temporary JSONL transcript files.
*
* Run with: node tests/hooks/evaluate-session.test.js
*/
const assert = require('assert');
const path = require('path');
const fs = require('fs');
const os = require('os');
const { spawnSync } = require('child_process');
const evaluateScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'evaluate-session.js');
// Test helpers
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (err) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${err.message}`);
return false;
}
}
function createTestDir() {
return fs.mkdtempSync(path.join(os.tmpdir(), 'eval-session-test-'));
}
function cleanupTestDir(testDir) {
fs.rmSync(testDir, { recursive: true, force: true });
}
/**
* Create a JSONL transcript file with N user messages.
* Each line is a JSON object with `"type":"user"`.
*/
function createTranscript(dir, messageCount) {
const filePath = path.join(dir, 'transcript.jsonl');
const lines = [];
for (let i = 0; i < messageCount; i++) {
lines.push(JSON.stringify({ type: 'user', content: `Message ${i + 1}` }));
// Intersperse assistant messages to be realistic
lines.push(JSON.stringify({ type: 'assistant', content: `Response ${i + 1}` }));
}
fs.writeFileSync(filePath, lines.join('\n') + '\n');
return filePath;
}
/**
* Run evaluate-session.js with stdin providing the transcript_path.
* Uses spawnSync to capture both stdout and stderr regardless of exit code.
* Returns { code, stdout, stderr }.
*/
function runEvaluate(stdinJson) {
const result = spawnSync('node', [evaluateScript], {
encoding: 'utf8',
input: JSON.stringify(stdinJson),
timeout: 10000,
});
return {
code: result.status || 0,
stdout: result.stdout || '',
stderr: result.stderr || '',
};
}
function runTests() {
console.log('\n=== Testing evaluate-session.js ===\n');
let passed = 0;
let failed = 0;
// Threshold boundary tests (default minSessionLength = 10)
console.log('Threshold boundary (default min=10):');
if (test('skips session with 9 user messages (below threshold)', () => {
const testDir = createTestDir();
const transcript = createTranscript(testDir, 9);
const result = runEvaluate({ transcript_path: transcript });
assert.strictEqual(result.code, 0, 'Should exit 0');
// "too short" message should appear in stderr (log goes to stderr)
assert.ok(
result.stderr.includes('too short') || result.stderr.includes('9 messages'),
'Should indicate session too short'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('evaluates session with exactly 10 user messages (at threshold)', () => {
const testDir = createTestDir();
const transcript = createTranscript(testDir, 10);
const result = runEvaluate({ transcript_path: transcript });
assert.strictEqual(result.code, 0, 'Should exit 0');
// Should NOT say "too short" — should say "evaluate for extractable patterns"
assert.ok(!result.stderr.includes('too short'), 'Should NOT say too short at threshold');
assert.ok(
result.stderr.includes('10 messages') || result.stderr.includes('evaluate'),
'Should indicate evaluation'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('evaluates session with 11 user messages (above threshold)', () => {
const testDir = createTestDir();
const transcript = createTranscript(testDir, 11);
const result = runEvaluate({ transcript_path: transcript });
assert.strictEqual(result.code, 0);
assert.ok(!result.stderr.includes('too short'), 'Should NOT say too short');
assert.ok(result.stderr.includes('evaluate'), 'Should trigger evaluation');
cleanupTestDir(testDir);
})) passed++; else failed++;
// Edge cases
console.log('\nEdge cases:');
if (test('exits 0 with missing transcript_path', () => {
const result = runEvaluate({});
assert.strictEqual(result.code, 0, 'Should exit 0 gracefully');
})) passed++; else failed++;
if (test('exits 0 with non-existent transcript file', () => {
const result = runEvaluate({ transcript_path: '/nonexistent/path/transcript.jsonl' });
assert.strictEqual(result.code, 0, 'Should exit 0 gracefully');
})) passed++; else failed++;
if (test('exits 0 with invalid stdin JSON', () => {
// Pass raw string instead of JSON
const result = spawnSync('node', [evaluateScript], {
encoding: 'utf8',
input: 'not valid json at all',
timeout: 10000,
});
assert.strictEqual(result.status, 0, 'Should exit 0 even on bad stdin');
})) passed++; else failed++;
if (test('skips empty transcript file (0 user messages)', () => {
const testDir = createTestDir();
const filePath = path.join(testDir, 'empty.jsonl');
fs.writeFileSync(filePath, '');
const result = runEvaluate({ transcript_path: filePath });
assert.strictEqual(result.code, 0);
// 0 < 10, so should be "too short"
assert.ok(
result.stderr.includes('too short') || result.stderr.includes('0 messages'),
'Empty transcript should be too short'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('counts only user messages (ignores assistant messages)', () => {
const testDir = createTestDir();
const filePath = path.join(testDir, 'mixed.jsonl');
// 5 user messages + 50 assistant messages — should still be "too short"
const lines = [];
for (let i = 0; i < 5; i++) {
lines.push(JSON.stringify({ type: 'user', content: `msg ${i}` }));
}
for (let i = 0; i < 50; i++) {
lines.push(JSON.stringify({ type: 'assistant', content: `resp ${i}` }));
}
fs.writeFileSync(filePath, lines.join('\n') + '\n');
const result = runEvaluate({ transcript_path: filePath });
assert.strictEqual(result.code, 0);
assert.ok(
result.stderr.includes('too short') || result.stderr.includes('5 messages'),
'Should count only user messages'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
// ── Round 28: config file parsing ──
console.log('\nConfig file parsing:');
if (test('falls back to default min_session_length when config cannot be injected', () => {
const testDir = createTestDir();
// The script resolves its config relative to its own location:
// path.join(__dirname, '..', '..', 'skills', 'continuous-learning', 'config.json')
// (__dirname = scripts/hooks), so a config written under testDir is never read.
// This test therefore verifies the default threshold of 10; the real override
// paths are exercised in Rounds 85 and 86 below by rewriting the repo config.
// Create 4 user messages: below the default threshold of 10.
const transcript = createTranscript(testDir, 4);
const result = runEvaluate({ transcript_path: transcript });
assert.strictEqual(result.code, 0);
// With default min=10, 4 messages should be too short
assert.ok(
result.stderr.includes('too short') || result.stderr.includes('4 messages'),
'With default config, 4 messages should be too short'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('handles transcript with only assistant messages (0 user match)', () => {
const testDir = createTestDir();
const filePath = path.join(testDir, 'assistant-only.jsonl');
const lines = [];
for (let i = 0; i < 20; i++) {
lines.push(JSON.stringify({ type: 'assistant', content: `response ${i}` }));
}
fs.writeFileSync(filePath, lines.join('\n') + '\n');
const result = runEvaluate({ transcript_path: filePath });
assert.strictEqual(result.code, 0);
// countInFile looks for /"type"\s*:\s*"user"/ — no matches
assert.ok(
result.stderr.includes('too short') || result.stderr.includes('0 messages'),
'Should report too short with 0 user messages'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('handles transcript with malformed JSON lines (still counts valid ones)', () => {
const testDir = createTestDir();
const filePath = path.join(testDir, 'mixed.jsonl');
// 12 valid user lines + 5 invalid lines
const lines = [];
for (let i = 0; i < 12; i++) {
lines.push(JSON.stringify({ type: 'user', content: `msg ${i}` }));
}
for (let i = 0; i < 5; i++) {
lines.push('not valid json {{{');
}
fs.writeFileSync(filePath, lines.join('\n') + '\n');
const result = runEvaluate({ transcript_path: filePath });
assert.strictEqual(result.code, 0);
// countInFile uses regex matching, not JSON parsing — counts all lines matching /"type"\s*:\s*"user"/
// 12 user messages >= 10 threshold → should evaluate
assert.ok(
result.stderr.includes('evaluate') && result.stderr.includes('12 messages'),
'Should evaluate session with 12 valid user messages'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('handles empty stdin (no input) gracefully', () => {
const result = spawnSync('node', [evaluateScript], {
encoding: 'utf8',
input: '',
timeout: 10000,
});
// Empty stdin → JSON.parse('') throws → fallback to env var (unset) → null → exit 0
assert.strictEqual(result.status, 0, 'Should exit 0 on empty stdin');
})) passed++; else failed++;
// ── Round 53: env var fallback path ──
console.log('\nRound 53: CLAUDE_TRANSCRIPT_PATH fallback:');
if (test('falls back to CLAUDE_TRANSCRIPT_PATH env var when stdin is invalid JSON', () => {
const testDir = createTestDir();
const transcript = createTranscript(testDir, 15);
const result = spawnSync('node', [evaluateScript], {
encoding: 'utf8',
input: 'invalid json {{{',
timeout: 10000,
env: { ...process.env, CLAUDE_TRANSCRIPT_PATH: transcript }
});
assert.strictEqual(result.status, 0, 'Should exit 0');
assert.ok(
result.stderr.includes('15 messages'),
'Should evaluate using env var fallback path'
);
assert.ok(
result.stderr.includes('evaluate'),
'Should indicate session evaluation'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
// ── Round 65: regex whitespace tolerance in countInFile ──
console.log('\nRound 65: regex whitespace tolerance around colon:');
if (test('counts user messages when JSON has spaces around colon ("type" : "user")', () => {
const testDir = createTestDir();
const filePath = path.join(testDir, 'spaced.jsonl');
// Manually write JSON with spaces around the colon — NOT JSON.stringify
// The regex /"type"\s*:\s*"user"/g should match these
const lines = [];
for (let i = 0; i < 12; i++) {
lines.push(`{"type" : "user", "content": "msg ${i}"}`);
lines.push(`{"type" : "assistant", "content": "resp ${i}"}`);
}
fs.writeFileSync(filePath, lines.join('\n') + '\n');
const result = runEvaluate({ transcript_path: filePath });
assert.strictEqual(result.code, 0);
// 12 user messages >= 10 threshold → should evaluate (not "too short")
assert.ok(!result.stderr.includes('too short'),
'Should NOT say too short for 12 spaced-colon user messages');
assert.ok(
result.stderr.includes('12 messages') || result.stderr.includes('evaluate'),
`Should evaluate session with spaced-colon JSON. Got stderr: ${result.stderr}`
);
cleanupTestDir(testDir);
})) passed++; else failed++;
// ── Round 85: config file parse error (corrupt JSON) ──
console.log('\nRound 85: config parse error catch block:');
if (test('falls back to defaults when config file contains invalid JSON', () => {
// The evaluate-session.js script reads config from:
// path.join(__dirname, '..', '..', 'skills', 'continuous-learning', 'config.json')
// where __dirname = scripts/hooks/ → config = repo_root/skills/continuous-learning/config.json
const configPath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning', 'config.json');
let originalContent = null;
try {
originalContent = fs.readFileSync(configPath, 'utf8');
} catch {
// Config file may not exist — that's fine
}
try {
// Write corrupt JSON to the config file
fs.writeFileSync(configPath, 'NOT VALID JSON {{{ corrupt data !!!', 'utf8');
// Create a transcript with 12 user messages (above default threshold of 10)
const testDir = createTestDir();
const transcript = createTranscript(testDir, 12);
const result = runEvaluate({ transcript_path: transcript });
assert.strictEqual(result.code, 0, 'Should exit 0 despite corrupt config');
// With corrupt config, defaults apply: min_session_length = 10
// 12 >= 10 → should evaluate (not "too short")
assert.ok(!result.stderr.includes('too short'),
`Should NOT say too short — corrupt config falls back to default min=10. Got: ${result.stderr}`);
assert.ok(
result.stderr.includes('12 messages') || result.stderr.includes('evaluate'),
`Should evaluate with 12 messages using default threshold. Got: ${result.stderr}`
);
// The catch block logs "Failed to parse config" — verify that log message
assert.ok(result.stderr.includes('Failed to parse config'),
`Should log config parse error. Got: ${result.stderr}`);
cleanupTestDir(testDir);
} finally {
// Restore original config file
if (originalContent !== null) {
fs.writeFileSync(configPath, originalContent, 'utf8');
} else {
// Config didn't exist before — remove the corrupt one we created
try { fs.unlinkSync(configPath); } catch { /* best-effort */ }
}
}
})) passed++; else failed++;
// ── Round 86: config learned_skills_path override with ~ expansion ──
console.log('\nRound 86: config learned_skills_path override:');
if (test('uses learned_skills_path from config with ~ expansion', () => {
// evaluate-session.js lines 69-72:
// if (config.learned_skills_path) {
// learnedSkillsPath = config.learned_skills_path.replace(/^~/, require('os').homedir());
// }
// This branch was never tested — only the parse error (Round 85) and default path.
const configPath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning', 'config.json');
let originalContent = null;
try {
originalContent = fs.readFileSync(configPath, 'utf8');
} catch {
// Config file may not exist
}
try {
// Write config with a custom learned_skills_path using ~ prefix
fs.writeFileSync(configPath, JSON.stringify({
min_session_length: 10,
learned_skills_path: '~/custom-learned-skills-dir'
}));
// Create a transcript with 12 user messages (above threshold)
const testDir = createTestDir();
const transcript = createTranscript(testDir, 12);
const result = runEvaluate({ transcript_path: transcript });
assert.strictEqual(result.code, 0, 'Should exit 0');
// The script logs "Save learned skills to: <path>" where <path> should
// be the expanded home directory, NOT the literal "~"
assert.ok(!result.stderr.includes('~/custom-learned-skills-dir'),
'Should NOT contain literal ~ in output (should be expanded)');
assert.ok(result.stderr.includes('custom-learned-skills-dir'),
`Should reference the custom learned skills dir. Got: ${result.stderr}`);
// The ~ should have been replaced with os.homedir()
assert.ok(result.stderr.includes(os.homedir()),
`Should contain expanded home directory. Got: ${result.stderr}`);
cleanupTestDir(testDir);
} finally {
// Restore original config file
if (originalContent !== null) {
fs.writeFileSync(configPath, originalContent, 'utf8');
} else {
try { fs.unlinkSync(configPath); } catch { /* best-effort */ }
}
}
})) passed++; else failed++;
// Summary
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();

File diff suppressed because it is too large


@@ -0,0 +1,375 @@
/**
* Tests for scripts/hooks/suggest-compact.js
*
* Tests the tool-call counter, threshold logic, interval suggestions,
* and environment variable handling.
*
* Run with: node tests/hooks/suggest-compact.test.js
*/
const assert = require('assert');
const path = require('path');
const fs = require('fs');
const os = require('os');
const { spawnSync } = require('child_process');
const compactScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'suggest-compact.js');
// Test helpers
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (_err) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${_err.message}`);
return false;
}
}
/**
* Run suggest-compact.js with optional env overrides.
* Returns { code, stdout, stderr }.
*/
function runCompact(envOverrides = {}) {
const env = { ...process.env, ...envOverrides };
const result = spawnSync('node', [compactScript], {
encoding: 'utf8',
input: '{}',
timeout: 10000,
env,
});
return {
code: result.status || 0,
stdout: result.stdout || '',
stderr: result.stderr || '',
};
}
/**
* Get the counter file path for a given session ID.
*/
function getCounterFilePath(sessionId) {
return path.join(os.tmpdir(), `claude-tool-count-${sessionId}`);
}
function runTests() {
console.log('\n=== Testing suggest-compact.js ===\n');
let passed = 0;
let failed = 0;
// Use a unique session ID per test run to avoid collisions
const testSession = `test-compact-${Date.now()}`;
const counterFile = getCounterFilePath(testSession);
// Cleanup helper
function cleanupCounter() {
try {
fs.unlinkSync(counterFile);
} catch (_err) {
// Ignore error
}
}
// Basic functionality
console.log('Basic counter functionality:');
if (test('creates counter file on first run', () => {
cleanupCounter();
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.strictEqual(result.code, 0, 'Should exit 0');
assert.ok(fs.existsSync(counterFile), 'Counter file should be created');
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Counter should be 1 after first run');
cleanupCounter();
})) passed++;
else failed++;
if (test('increments counter on subsequent runs', () => {
cleanupCounter();
runCompact({ CLAUDE_SESSION_ID: testSession });
runCompact({ CLAUDE_SESSION_ID: testSession });
runCompact({ CLAUDE_SESSION_ID: testSession });
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 3, 'Counter should be 3 after three runs');
cleanupCounter();
})) passed++;
else failed++;
// Threshold suggestion
console.log('\nThreshold suggestion:');
if (test('suggests compact at threshold (COMPACT_THRESHOLD=3)', () => {
cleanupCounter();
// Run 3 times with threshold=3
runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
assert.ok(
result.stderr.includes('3 tool calls reached') || result.stderr.includes('consider /compact'),
`Should suggest compact at threshold. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
if (test('does NOT suggest compact before threshold', () => {
cleanupCounter();
runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '5' });
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '5' });
assert.ok(
!result.stderr.includes('StrategicCompact'),
'Should NOT suggest compact before threshold'
);
cleanupCounter();
})) passed++;
else failed++;
// Interval suggestion (every 25 calls after threshold)
console.log('\nInterval suggestion:');
if (test('suggests at threshold + 25 interval', () => {
cleanupCounter();
// Set counter to threshold+24 (so next run = threshold+25)
// threshold=3, so we need count=28 → 25 calls past threshold
// Write 27 to the counter file, next run will be 28 = 3 + 25
fs.writeFileSync(counterFile, '27');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
// count=28, threshold=3, 28-3=25, 25 % 25 === 0 → should suggest
assert.ok(
result.stderr.includes('28 tool calls') || result.stderr.includes('checkpoint'),
`Should suggest at threshold+25 interval. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
// Environment variable handling
console.log('\nEnvironment variable handling:');
if (test('uses default threshold (50) when COMPACT_THRESHOLD is not set', () => {
cleanupCounter();
// Write counter to 49; the next run makes it 50, the default threshold.
// COMPACT_THRESHOLD is deliberately not passed here (this assumes the
// variable is also unset in the inherited process.env).
fs.writeFileSync(counterFile, '49');
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.ok(
result.stderr.includes('50 tool calls reached'),
`Should use default threshold of 50. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
if (test('ignores invalid COMPACT_THRESHOLD (negative)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '49');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '-5' });
// Invalid threshold falls back to 50
assert.ok(
result.stderr.includes('50 tool calls reached'),
`Should fallback to 50 for negative threshold. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
if (test('ignores non-numeric COMPACT_THRESHOLD', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '49');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: 'abc' });
// NaN falls back to 50
assert.ok(
result.stderr.includes('50 tool calls reached'),
`Should fallback to 50 for non-numeric threshold. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
// Corrupted counter file
console.log('\nCorrupted counter file:');
if (test('resets counter on corrupted file content', () => {
cleanupCounter();
fs.writeFileSync(counterFile, 'not-a-number');
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.strictEqual(result.code, 0);
// Corrupted file → parsed is NaN → falls back to count=1
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Should reset to 1 on corrupted file');
cleanupCounter();
})) passed++;
else failed++;
if (test('resets counter on extremely large value', () => {
cleanupCounter();
// Value > 1000000 should be clamped
fs.writeFileSync(counterFile, '9999999');
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.strictEqual(result.code, 0);
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Should reset to 1 for value > 1000000');
cleanupCounter();
})) passed++;
else failed++;
if (test('handles empty counter file', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '');
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.strictEqual(result.code, 0);
// Empty file → bytesRead=0 → count starts at 1
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Should start at 1 for empty file');
cleanupCounter();
})) passed++;
else failed++;
// Session isolation
console.log('\nSession isolation:');
if (test('uses separate counter files per session ID', () => {
const sessionA = `compact-a-${Date.now()}`;
const sessionB = `compact-b-${Date.now()}`;
const fileA = getCounterFilePath(sessionA);
const fileB = getCounterFilePath(sessionB);
try {
runCompact({ CLAUDE_SESSION_ID: sessionA });
runCompact({ CLAUDE_SESSION_ID: sessionA });
runCompact({ CLAUDE_SESSION_ID: sessionB });
const countA = parseInt(fs.readFileSync(fileA, 'utf8').trim(), 10);
const countB = parseInt(fs.readFileSync(fileB, 'utf8').trim(), 10);
assert.strictEqual(countA, 2, 'Session A should have count 2');
assert.strictEqual(countB, 1, 'Session B should have count 1');
} finally {
try { fs.unlinkSync(fileA); } catch (_err) { /* ignore */ }
try { fs.unlinkSync(fileB); } catch (_err) { /* ignore */ }
}
})) passed++;
else failed++;
// Always exits 0
console.log('\nExit code:');
if (test('always exits 0 (never blocks Claude)', () => {
cleanupCounter();
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.strictEqual(result.code, 0, 'Should always exit 0');
cleanupCounter();
})) passed++;
else failed++;
// ── Round 29: threshold boundary values ──
console.log('\nThreshold boundary values:');
if (test('rejects COMPACT_THRESHOLD=0 (falls back to 50)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '49');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '0' });
// 0 is invalid (must be > 0), falls back to 50, count becomes 50 → should suggest
assert.ok(
result.stderr.includes('50 tool calls reached'),
`Should fallback to 50 for threshold=0. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
if (test('accepts COMPACT_THRESHOLD=10000 (boundary max)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '9999');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '10000' });
// count becomes 10000, threshold=10000 → should suggest
assert.ok(
result.stderr.includes('10000 tool calls reached'),
`Should accept threshold=10000. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
if (test('rejects COMPACT_THRESHOLD=10001 (falls back to 50)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '49');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '10001' });
// 10001 > 10000, invalid, falls back to 50, count becomes 50 → should suggest
assert.ok(
result.stderr.includes('50 tool calls reached'),
`Should fallback to 50 for threshold=10001. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++;
else failed++;
if (test('rejects float COMPACT_THRESHOLD (e.g. 3.5)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '49');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3.5' });
// parseInt('3.5') = 3, which is valid (> 0 && <= 10000)
// count becomes 50, threshold=3, 50-3=47, 47%25≠0 and 50≠3 → no suggestion
assert.strictEqual(result.code, 0);
// No suggestion expected (50 !== 3, and (50-3) % 25 !== 0)
assert.ok(
!result.stderr.includes('StrategicCompact'),
'Float threshold should be parseInt-ed to 3, no suggestion at count=50'
);
cleanupCounter();
})) passed++;
else failed++;
if (test('counter value at exact boundary 1000000 is valid', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '999999');
runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
// 999999 is valid (> 0, <= 1000000), count becomes 1000000
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1000000, 'Counter at 1000000 boundary should be valid');
cleanupCounter();
})) passed++;
else failed++;
if (test('counter value at 1000001 is clamped (reset to 1)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '1000001');
runCompact({ CLAUDE_SESSION_ID: testSession });
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Counter > 1000000 should be reset to 1');
cleanupCounter();
})) passed++;
else failed++;
// ── Round 64: default session ID fallback ──
console.log('\nDefault session ID fallback (Round 64):');
if (test('uses "default" session ID when CLAUDE_SESSION_ID is empty', () => {
const defaultCounterFile = getCounterFilePath('default');
try { fs.unlinkSync(defaultCounterFile); } catch (_err) { /* ignore */ }
try {
// Pass empty CLAUDE_SESSION_ID — falsy, so script uses 'default'
const env = { ...process.env, CLAUDE_SESSION_ID: '' };
const result = spawnSync('node', [compactScript], {
encoding: 'utf8',
input: '{}',
timeout: 10000,
env,
});
assert.strictEqual(result.status || 0, 0, 'Should exit 0');
assert.ok(fs.existsSync(defaultCounterFile), 'Counter file should use "default" session ID');
const count = parseInt(fs.readFileSync(defaultCounterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Counter should be 1 for first run with default session');
} finally {
try { fs.unlinkSync(defaultCounterFile); } catch (_err) { /* ignore */ }
}
})) passed++;
else failed++;
// Summary
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();
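The threshold and interval assertions above pin down a small piece of pure logic. A sketch of it (hypothetical function name; assumes scripts/hooks/suggest-compact.js validates COMPACT_THRESHOLD as an integer in 1..10000, falls back to 50 otherwise, and repeats the suggestion every 25 tool calls past the threshold, as the tests assert):

```javascript
// A sketch of the suggestion logic the suggest-compact tests pin down
// (hypothetical name; the real script also persists the counter to a
// per-session file in os.tmpdir(), which is omitted here).
function shouldSuggestCompact(count, rawThreshold) {
  const DEFAULT_THRESHOLD = 50;
  const parsed = parseInt(rawThreshold, 10);
  // Invalid values (NaN, <= 0, > 10000) fall back to the default of 50.
  const threshold =
    Number.isInteger(parsed) && parsed > 0 && parsed <= 10000
      ? parsed
      : DEFAULT_THRESHOLD;
  // Suggest exactly at the threshold, then at every 25 calls past it.
  if (count === threshold) return true;
  return count > threshold && (count - threshold) % 25 === 0;
}
```

Under these assumptions, count 28 with threshold '3' suggests (25 calls past the threshold), while count 50 with '3.5' does not — matching the interval and float-threshold tests above.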


@@ -262,8 +262,13 @@ async function runTests() {
});
});
// Hook only blocks on non-Windows platforms (tmux is Unix-only)
if (process.platform === 'win32') {
assert.strictEqual(code, 0, 'On Windows, hook should not block (exit 0)');
} else {
assert.ok(stderr.includes('BLOCKED'), 'Blocking hook should output BLOCKED');
assert.strictEqual(code, 2, 'Blocking hook should exit with code 2');
}
})) passed++; else failed++;
// ==========================================
@@ -298,7 +303,12 @@ async function runTests() {
});
});
// Hook only blocks on non-Windows platforms (tmux is Unix-only)
if (process.platform === 'win32') {
assert.strictEqual(code, 0, 'On Windows, hook should not block (exit 0)');
} else {
assert.strictEqual(code, 2, 'Blocking hook should exit 2');
}
})) passed++; else failed++;
if (await asyncTest('hooks handle missing files gracefully', async () => {
@@ -405,6 +415,125 @@ async function runTests() {
);
})) passed++; else failed++;
// ==========================================
// Session End Transcript Parsing Tests
// ==========================================
console.log('\nSession End Transcript Parsing:');
if (await asyncTest('session-end extracts summary from mixed JSONL formats', async () => {
const testDir = createTestDir();
const transcriptPath = path.join(testDir, 'mixed-transcript.jsonl');
// Create transcript with both direct tool_use and nested assistant message formats
const lines = [
JSON.stringify({ type: 'user', content: 'Fix the login bug' }),
JSON.stringify({ type: 'tool_use', name: 'Read', input: { file_path: 'src/auth.ts' } }),
JSON.stringify({ type: 'assistant', message: { content: [
{ type: 'tool_use', name: 'Edit', input: { file_path: 'src/auth.ts' } }
]}}),
JSON.stringify({ type: 'user', content: 'Now add tests' }),
JSON.stringify({ type: 'assistant', message: { content: [
{ type: 'tool_use', name: 'Write', input: { file_path: 'tests/auth.test.ts' } },
{ type: 'text', text: 'Here are the tests' }
]}}),
JSON.stringify({ type: 'user', content: 'Looks good, commit' })
];
fs.writeFileSync(transcriptPath, lines.join('\n'));
try {
const result = await runHookWithInput(
path.join(scriptsDir, 'session-end.js'),
{ transcript_path: transcriptPath },
{ HOME: testDir, USERPROFILE: testDir }
);
assert.strictEqual(result.code, 0, 'Should exit 0');
assert.ok(result.stderr.includes('[SessionEnd]'), 'Should have SessionEnd log');
// Verify a session file was created
const sessionsDir = path.join(testDir, '.claude', 'sessions');
if (fs.existsSync(sessionsDir)) {
const files = fs.readdirSync(sessionsDir).filter(f => f.endsWith('.tmp'));
assert.ok(files.length > 0, 'Should create a session file');
// Verify session content includes tasks from user messages
const content = fs.readFileSync(path.join(sessionsDir, files[0]), 'utf8');
assert.ok(content.includes('Fix the login bug'), 'Should include first user message');
assert.ok(content.includes('auth.ts'), 'Should include modified files');
}
} finally {
cleanupTestDir(testDir);
}
})) passed++; else failed++;
if (await asyncTest('session-end handles transcript with malformed lines gracefully', async () => {
const testDir = createTestDir();
const transcriptPath = path.join(testDir, 'malformed-transcript.jsonl');
const lines = [
JSON.stringify({ type: 'user', content: 'Task 1' }),
'{broken json here',
JSON.stringify({ type: 'user', content: 'Task 2' }),
'{"truncated":',
JSON.stringify({ type: 'user', content: 'Task 3' })
];
fs.writeFileSync(transcriptPath, lines.join('\n'));
try {
const result = await runHookWithInput(
path.join(scriptsDir, 'session-end.js'),
{ transcript_path: transcriptPath },
{ HOME: testDir, USERPROFILE: testDir }
);
assert.strictEqual(result.code, 0, 'Should exit 0 despite malformed lines');
// Should still process the valid lines
assert.ok(result.stderr.includes('[SessionEnd]'), 'Should have SessionEnd log');
assert.ok(result.stderr.includes('unparseable'), 'Should warn about unparseable lines');
} finally {
cleanupTestDir(testDir);
}
})) passed++; else failed++;
if (await asyncTest('session-end creates session file with nested user messages', async () => {
const testDir = createTestDir();
const transcriptPath = path.join(testDir, 'nested-transcript.jsonl');
// Claude Code JSONL format uses nested message.content arrays
const lines = [
JSON.stringify({ type: 'user', message: { role: 'user', content: [
{ type: 'text', text: 'Refactor the utils module' }
]}}),
JSON.stringify({ type: 'assistant', message: { content: [
{ type: 'tool_use', name: 'Read', input: { file_path: 'lib/utils.js' } }
]}}),
JSON.stringify({ type: 'user', message: { role: 'user', content: 'Approve the changes' }})
];
fs.writeFileSync(transcriptPath, lines.join('\n'));
try {
const result = await runHookWithInput(
path.join(scriptsDir, 'session-end.js'),
{ transcript_path: transcriptPath },
{ HOME: testDir, USERPROFILE: testDir }
);
assert.strictEqual(result.code, 0, 'Should exit 0');
// Check session file was created
const sessionsDir = path.join(testDir, '.claude', 'sessions');
if (fs.existsSync(sessionsDir)) {
const files = fs.readdirSync(sessionsDir).filter(f => f.endsWith('.tmp'));
assert.ok(files.length > 0, 'Should create session file');
const content = fs.readFileSync(path.join(sessionsDir, files[0]), 'utf8');
assert.ok(content.includes('Refactor the utils module') || content.includes('Approve'),
'Should extract user messages from nested format');
}
} finally {
cleanupTestDir(testDir);
}
})) passed++; else failed++;
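The transcripts these session-end tests construct use two user-message shapes: a plain `content` string and a nested `message.content` array of blocks. A sketch of normalizing both — `extractUserText` is a hypothetical helper, written under the assumption that session-end.js treats the two shapes equivalently, as the tests expect:

```javascript
// Hypothetical normalizer for the two JSONL user-message shapes the tests
// build: { type: 'user', content: '...' } and
// { type: 'user', message: { content: '...' | [{ type: 'text', text }] } }.
function extractUserText(entry) {
  if (!entry || entry.type !== 'user') return null;
  const content = entry.message ? entry.message.content : entry.content;
  if (typeof content === 'string') return content;
  if (Array.isArray(content)) {
    // Keep only text blocks; tool_result and other block types are skipped.
    const text = content
      .filter((block) => block && block.type === 'text')
      .map((block) => block.text)
      .join('\n');
    return text || null;
  }
  return null;
}
```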
// ==========================================
// Error Handling Tests
// ==========================================
@@ -503,6 +632,76 @@ async function runTests() {
assert.strictEqual(code, 0, 'Should not crash on truncated JSON');
})) passed++; else failed++;
// ==========================================
// Round 51: Timeout Enforcement
// ==========================================
console.log('\nRound 51: Timeout Enforcement:');
if (await asyncTest('runHookWithInput kills hanging hooks after timeout', async () => {
const testDir = createTestDir();
const hangingHookPath = path.join(testDir, 'hanging-hook.js');
fs.writeFileSync(hangingHookPath, 'setInterval(() => {}, 100);');
try {
const startTime = Date.now();
let error = null;
try {
await runHookWithInput(hangingHookPath, {}, {}, 500);
} catch (err) {
error = err;
}
const elapsed = Date.now() - startTime;
assert.ok(error, 'Should throw timeout error');
assert.ok(error.message.includes('timed out'), 'Error should mention timeout');
assert.ok(elapsed >= 450, `Should wait at least ~500ms, waited ${elapsed}ms`);
assert.ok(elapsed < 2000, `Should not wait much longer than 500ms, waited ${elapsed}ms`);
} finally {
cleanupTestDir(testDir);
}
})) passed++; else failed++;
// ==========================================
// Round 51: hooks.json Schema Validation
// ==========================================
console.log('\nRound 51: hooks.json Schema Validation:');
if (await asyncTest('hooks.json async hook has valid timeout field', async () => {
const asyncHook = hooks.hooks.PostToolUse.find(h =>
h.hooks && h.hooks[0] && h.hooks[0].async === true
);
assert.ok(asyncHook, 'Should have at least one async hook defined');
assert.strictEqual(asyncHook.hooks[0].async, true, 'async field should be true');
assert.ok(asyncHook.hooks[0].timeout, 'Should have timeout field');
assert.strictEqual(typeof asyncHook.hooks[0].timeout, 'number', 'Timeout should be a number');
assert.ok(asyncHook.hooks[0].timeout > 0, 'Timeout should be positive');
const match = asyncHook.hooks[0].command.match(/^node -e "(.+)"$/s);
assert.ok(match, 'Async hook command should be node -e format');
})) passed++; else failed++;
if (await asyncTest('all hook commands in hooks.json are valid format', async () => {
for (const [hookType, hookArray] of Object.entries(hooks.hooks)) {
for (const hookDef of hookArray) {
assert.ok(hookDef.hooks, `${hookType} entry should have hooks array`);
for (const hook of hookDef.hooks) {
assert.ok(hook.command, `Hook in ${hookType} should have command field`);
const isInline = hook.command.startsWith('node -e');
const isFilePath = hook.command.startsWith('node "');
assert.ok(
isInline || isFilePath,
`Hook command in ${hookType} should be inline (node -e) or file path (node "), got: ${hook.command.substring(0, 50)}`
);
}
}
}
})) passed++; else failed++;
// Summary
console.log('\n=== Test Results ===');
console.log(`Passed: ${passed}`);

Some files were not shown because too many files have changed in this diff