Compare commits

...

87 Commits

Author SHA1 Message Date
Affaan Mustafa
db27ba1eb2 chore: update README stats and add Codex platform support
- Stars: 42K+ -> 50K+, forks: 5K+ -> 6K+, contributors: 24 -> 30
- Skills: 43 -> 44 (search-first), commands: 31 -> 32 (learn-eval)
- Add Codex to supported platforms in FAQ
- Add search-first skill and learn-eval command to directory listing
- Update OpenCode feature parity table counts
2026-02-23 06:56:00 -08:00
Affaan Mustafa
3c833d8922 Merge pull request #265 from shimo4228/feat/skills/skill-stocktake
feat(skills): add skill-stocktake skill
2026-02-23 06:55:38 -08:00
Affaan Mustafa
156b89ed30 Merge pull request #268 from pangerlkr/patch-6
feat: add scripts/codemaps/generate.ts — fixes #247
2026-02-23 06:55:35 -08:00
Pangerkumzuk Longkumer
41ce1a52e5 feat: add scripts/codemaps/generate.ts codemap generator. Fixes #247
The generate.ts script referenced in agents/doc-updater.md was missing from the repository. This adds the actual implementation. The script:
- Recursively walks the src directory (skipping node_modules, dist, etc.)
- Classifies files into 5 areas: frontend, backend, database, integrations, workers
- Generates docs/CODEMAPS/INDEX.md + one .md per area
- Uses the codemap format defined in doc-updater.md
- Supports optional srcDir argument: npx tsx scripts/codemaps/generate.ts [srcDir]
This script scans the current working directory and generates architectural codemap documentation in the specified output directory. It classifies files into areas such as frontend, backend, database, integrations, and workers, and creates markdown files for each area along with an index.
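As a rough illustration, the classification step the commit describes could be sketched like this in TypeScript. The area keywords and the fallback bucket here are invented for illustration; they are not the actual rules in generate.ts:

```typescript
// Hypothetical sketch of area classification for a codemap generator.
// Keyword lists and the "backend" fallback are assumptions, not the
// real generate.ts logic.
const AREAS: Record<string, string[]> = {
  frontend: ["components", "pages", "ui"],
  backend: ["api", "server", "routes"],
  database: ["db", "models", "migrations"],
  integrations: ["integrations", "clients"],
  workers: ["workers", "jobs", "queues"],
};

function classify(filePath: string): string {
  // Match any path segment against each area's keyword list
  const segments = filePath.split("/");
  for (const [area, keywords] of Object.entries(AREAS)) {
    if (segments.some((s) => keywords.includes(s))) return area;
  }
  return "backend"; // fallback bucket for unmatched files
}
```

Each classified file would then be appended to that area's markdown page, with an INDEX.md linking the five areas together.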
2026-02-22 16:19:16 +05:30
Jongchan
6f94c2e28f fix(search-first): add missing skill name frontmatter (#266)
fix: add missing name frontmatter for search-first skill
2026-02-21 16:04:39 -08:00
Tatsuya Shimomoto
91b7ccf56f feat(skills): add skill-stocktake skill 2026-02-22 04:29:40 +09:00
Affaan Mustafa
7daa830da9 Merge pull request #263 from shimo4228/feat/commands/learn-eval
feat(commands): add learn-eval command
2026-02-20 21:17:57 -08:00
Affaan Mustafa
7e57d1b831 Merge pull request #262 from shimo4228/feat/skills/search-first
feat(skills): add search-first skill
2026-02-20 21:17:54 -08:00
Tatsuya Shimomoto
ff47dace11 feat(commands): add learn-eval command 2026-02-21 12:10:39 +09:00
Tatsuya Shimomoto
c9dc53e862 feat(skills): add search-first skill 2026-02-21 12:10:25 +09:00
Affaan Mustafa
c8f54481b8 chore: update Sonnet model references from 4.5 to 4.6
chore: update Sonnet model references from 4.5 to 4.6
2026-02-20 10:59:12 -08:00
Affaan Mustafa
294fc4aad8 fix: CI/Test for issue #226 (hook override bug)
Fixed CI / Test for (issue#226)
2026-02-20 10:59:10 -08:00
yptse123
81aa8a72c3 chore: update Sonnet model references from 4.5 to 4.6
Update MODEL_SONNET constant and all documentation references
to reflect the new claude-sonnet-4-6 model version.
2026-02-20 18:08:10 +08:00
Affaan Mustafa
0e9f613fd1 Revert "feat(ecc): prune plugin 43→12 items, promote 7 rules to .claude/rules/ (#245)"
This reverts commit 1bd68ff534.
2026-02-20 01:11:30 -08:00
park-kyungchan
1bd68ff534 feat(ecc): prune plugin 43→12 items, promote 7 rules to .claude/rules/ (#245)
ECC community plugin pruning: removed 530+ non-essential files
(.cursor/, .opencode/, docs/ja-JP, docs/zh-CN, docs/zh-TW,
language-specific skills/agents/rules). Retained 4 agents,
3 commands, 5 skills. Promoted 13 rule files (8 common + 5
typescript) to .claude/rules/ for CC native loading. Extracted
reusable patterns to EXTRACTED-PATTERNS.md.
2026-02-19 22:34:51 -08:00
Affaan Mustafa
24047351c2 Merge pull request #251 from gangqian68/main
docs: add CLAUDE.md for Claude Code guidance
2026-02-19 04:56:49 -08:00
qian gang
66959c1dca docs: add CLAUDE.md for Claude Code guidance
Add project-level CLAUDE.md with test commands, architecture
overview, key commands, and contribution guidelines.
2026-02-19 16:50:08 +08:00
Pangerkumzuk Longkumer
5a0f6e9e1e Merge pull request #12 from pangerlkr/copilot/fix-ci-test-failures-again
Fix Windows CI test failures - platform-specific test adjustments
2026-02-19 06:43:40 +05:30
copilot-swe-agent[bot]
cf61ef7539 Fix Windows CI test failures - add platform checks and USERPROFILE support
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 15:10:32 +00:00
copilot-swe-agent[bot]
07e23e3e64 Initial plan 2026-02-18 15:00:14 +00:00
Pangerkumzuk Longkumer
8fc49ba0e8 Merge pull request #11 from pangerlkr/copilot/fix-ci-test-failures
Fix session-manager tests failing in CI due to missing test isolation
2026-02-18 20:29:40 +05:30
copilot-swe-agent[bot]
b90448aef6 Clarify cleanup comment scope in session-manager tests
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 14:05:46 +00:00
copilot-swe-agent[bot]
caab908be8 Fix session-manager test environment for Rounds 95-98
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 14:02:11 +00:00
copilot-swe-agent[bot]
7021d1f6cf Initial plan 2026-02-18 13:51:40 +00:00
Pangerkumzuk Longkumer
3ad211b01b Merge pull request #10 from pangerlkr/copilot/fix-matrix-test-failures
Fix platform-specific hook blocking tests for CI matrix
2026-02-18 19:21:21 +05:30
copilot-swe-agent[bot]
f61c9b0caf Fix integration/hooks tests to handle Windows platform differences
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 13:42:05 +00:00
copilot-swe-agent[bot]
b682ac7d79 Initial plan 2026-02-18 13:36:31 +00:00
Pangerkumzuk Longkumer
e1fca6e84d Merge branch 'affaan-m:main' into main 2026-02-18 18:33:04 +05:30
Pangerkumzuk Longkumer
07530ace5f Merge pull request #9 from pangerlkr/claude/fix-agentshield-security-scan
Fix test failures and remove broken AgentShield workflow
2026-02-18 13:42:01 +05:30
anthropic-code-agent[bot]
00464b6f60 Fix failing workflows: trim action in getCommandPattern and remove broken AgentShield scan
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 08:06:25 +00:00
anthropic-code-agent[bot]
0c78a7c779 Initial plan 2026-02-18 08:00:23 +00:00
Pangerkumzuk Longkumer
fca997001e Merge pull request #7 from pangerlkr/copilot/fix-workflow-failures
Fix stdin size limit enforcement in hook scripts
2026-02-18 13:16:01 +05:30
copilot-swe-agent[bot]
1eca3c9130 Fix stdin overflow bug in hook scripts - truncate chunks to stay within MAX_STDIN limit
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 07:40:12 +00:00
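The truncation fix described in that commit amounts to capping accumulated stdin at a byte budget and slicing the chunk that would overflow it. A minimal sketch, assuming a 1 MiB MAX_STDIN and an invented function name (not the hook's actual code):

```typescript
// Sketch of stdin accumulation under a MAX_STDIN budget.
// The 1 MiB value and accumulateStdin name are assumptions.
const MAX_STDIN = 1024 * 1024;

function accumulateStdin(chunks: string[]): string {
  let out = "";
  for (const chunk of chunks) {
    const remaining = MAX_STDIN - out.length;
    if (remaining <= 0) break; // budget exhausted: stop reading
    // Truncate the overflowing chunk instead of dropping or appending it whole
    out += chunk.length > remaining ? chunk.slice(0, remaining) : chunk;
  }
  return out;
}
```

The bug class being fixed is appending whole chunks and only checking the limit afterwards, which lets the buffer overshoot MAX_STDIN by up to one chunk.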
copilot-swe-agent[bot]
defcdc356e Initial plan 2026-02-18 07:28:13 +00:00
Pangerkumzuk Longkumer
b548ce47c9 Merge pull request #6 from pangerlkr/copilot/fix-workflow-actions
Fix copilot-setup-steps.yml YAML structure and address review feedback
2026-02-18 12:56:29 +05:30
copilot-swe-agent[bot]
90e6a8c63b Fix copilot-setup-steps.yml and address PR review comments
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 07:22:05 +00:00
copilot-swe-agent[bot]
c68f7efcdc Initial plan 2026-02-18 07:16:12 +00:00
Pangerkumzuk Longkumer
aa805d5240 Merge pull request #5 from pangerlkr/claude/fix-workflow-actions
Fix ESLint errors in test files and package manager
2026-02-18 12:42:38 +05:30
anthropic-code-agent[bot]
c5ca3c698c Fix ESLint errors in test files and package-manager.js
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-18 07:04:29 +00:00
anthropic-code-agent[bot]
7e928572c7 Initial plan 2026-02-18 06:58:35 +00:00
Pangerkumzuk Longkumer
0bf47bbb41 Update print statement from 'Hello' to 'Goodbye' (fix: update package manager tests and add summary) 2026-02-18 07:29:16 +05:30
Pangerkumzuk Longkumer
2ad888ca82 Refactor console log formatting in tests 2026-02-18 07:21:58 +05:30
Pangerkumzuk Longkumer
8966282e48 fix: add comments to empty catch blocks (no-empty ESLint) 2026-02-18 07:18:40 +05:30
Pangerkumzuk Longkumer
3d97985559 fix: remove unused execFileSync import (no-unused-vars ESLint) 2026-02-18 07:15:53 +05:30
Pangerkumzuk Longkumer
d54124afad fix: remove useless escape characters in regex patterns (no-useless-escape ESLint) 2026-02-18 07:14:32 +05:30
Affaan Mustafa
0b11849f1e chore: update skill count from 37 to 43, add 5 new skills to directory listing
New community skills: content-hash-cache-pattern, cost-aware-llm-pipeline,
regex-vs-llm-structured-text, swift-actor-persistence, swift-protocol-di-testing
2026-02-16 20:04:57 -08:00
Affaan Mustafa
2c26d2d67c fix: add missing process.exit(0) to early return in post-edit-console-warn hook 2026-02-16 20:03:12 -08:00
Pangerkumzuk Longkumer
fdda6cbcd9 Merge branch 'main' into main 2026-02-17 07:00:12 +05:30
Affaan Mustafa
5cb9c1c2a5 Merge pull request #223 from shimo4228/feat/skills/regex-vs-llm-structured-text
feat(skills): add regex-vs-llm-structured-text skill
2026-02-16 14:19:02 -08:00
Affaan Mustafa
595127954f Merge pull request #222 from shimo4228/feat/skills/content-hash-cache-pattern
feat(skills): add content-hash-cache-pattern skill
2026-02-16 14:18:59 -08:00
Affaan Mustafa
bb084229aa Merge pull request #221 from shimo4228/feat/skills/swift-actor-persistence
feat(skills): add swift-actor-persistence skill
2026-02-16 14:18:57 -08:00
Affaan Mustafa
849bb3b425 Merge pull request #220 from shimo4228/feat/skills/swift-protocol-di-testing
feat(skills): add swift-protocol-di-testing skill
2026-02-16 14:18:55 -08:00
Affaan Mustafa
4db215f60d Merge pull request #219 from shimo4228/feat/skills/cost-aware-llm-pipeline
feat(skills): add cost-aware-llm-pipeline skill
2026-02-16 14:18:53 -08:00
Affaan Mustafa
bb1486c404 Merge pull request #229 from voidforall/feat/cpp-coding-standards
feat: add cpp coding standards skill
2026-02-16 14:18:35 -08:00
Affaan Mustafa
9339d4c88c Merge pull request #228 from hrygo/fix/observe.sh-timestamp-export
fix: correct TIMESTAMP environment variable syntax in observe.sh
2026-02-16 14:18:13 -08:00
Affaan Mustafa
2497a9b6e5 Merge pull request #241 from sungpeo/mkdir-rules-readme
docs: require default directory creation for initial user-level rules
2026-02-16 14:18:10 -08:00
Affaan Mustafa
e449471ed3 Merge pull request #230 from neal-zhu/fix/whitelist-plan-files-in-doc-hook
fix: whitelist .claude/plans/ in doc file creation hook
2026-02-16 14:18:08 -08:00
Sungpeo Kook
cad8db21b7 docs: add mkdir for rules directory in ja-JP and zh-CN READMEs 2026-02-17 01:48:37 +09:00
Sungpeo Kook
9d9258c7e1 docs: require default directory creation for initial user-level rules 2026-02-17 01:32:56 +09:00
Pangerkumzuk Longkumer
08ee723e85 Merge pull request #3 from pangerlkr/copilot/fix-markdownlint-errors-again
Fix markdownlint errors: MD038, MD058, MD025, MD034
2026-02-16 19:23:01 +05:30
Pangerkumzuk Longkumer
f11347a708 Merge pull request #4 from pangerlkr/copilot/fix-markdownlint-errors-another-one
Fix markdownlint errors (MD038, MD058, MD025, MD034)
2026-02-16 19:20:56 +05:30
copilot-swe-agent[bot]
586637f94c Revert unrelated package-lock.json changes
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 03:01:15 +00:00
copilot-swe-agent[bot]
2b6ff6b55e Initial plan for markdownlint error fixes
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 02:58:12 +00:00
copilot-swe-agent[bot]
2be6e09501 Initial plan 2026-02-16 02:55:40 +00:00
copilot-swe-agent[bot]
b1d47b22ea Initial plan 2026-02-16 02:37:38 +00:00
Pangerkumzuk Longkumer
9dd4f4409b Merge pull request #2 from pangerlkr/copilot/fix-markdownlint-errors
Fix markdownlint CI failures (MD038, MD058, MD025, MD034)
2026-02-16 07:58:23 +05:30
copilot-swe-agent[bot]
c5de2a7bf7 Remove misleading comments about trailing spaces
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 02:22:49 +00:00
copilot-swe-agent[bot]
af24c617bb Fix all markdownlint errors (MD038, MD058, MD025, MD034)
Co-authored-by: pangerlkr <73515951+pangerlkr@users.noreply.github.com>
2026-02-16 02:22:14 +00:00
copilot-swe-agent[bot]
2ca903d4c5 Initial plan 2026-02-16 02:20:28 +00:00
Pangerkumzuk Longkumer
4d98d9f125 Add Go environment setup step to workflow
Added a step to set up the Go environment in GitHub Actions workflow.
2026-02-16 07:10:39 +05:30
Maohua Zhu
40e80bcc61 fix: whitelist .claude/plans/ in doc file creation hook
The PreToolUse Write hook blocks creation of .md files to prevent
unnecessary documentation sprawl. However, it also blocks Claude Code's
built-in plan mode from writing plan files to .claude/plans/*.md,
causing "BLOCKED: Unnecessary documentation file creation" errors
every time a plan is created or updated.

Add .claude/plans/ path to the whitelist so plan files are not blocked.
2026-02-15 08:49:15 +08:00
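A minimal sketch of such a whitelist check inside a PreToolUse Write hook; every entry here besides .claude/plans/ is invented for illustration, and the real hook's list and matching logic may differ:

```typescript
// Illustrative doc-file gate for a Write hook.
// Only .claude/plans/ comes from the commit above; other entries are assumptions.
const DOC_WHITELIST = [".claude/plans/", "docs/CODEMAPS/", "README"];

function isBlockedDocFile(filePath: string): boolean {
  if (!filePath.endsWith(".md")) return false; // only .md creation is gated
  // Allowed if any whitelisted path fragment appears in the target path
  return !DOC_WHITELIST.some((fragment) => filePath.includes(fragment));
}
```

With .claude/plans/ whitelisted, plan-mode writes pass through while ad-hoc markdown creation elsewhere is still blocked.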
Lin Yuan
eaf710847f fix syntax in illustrative example 2026-02-14 20:36:36 +00:00
voidforall
b169a2e1dd Update .cursor/skills/cpp-coding-standards/SKILL.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2026-02-14 20:33:59 +00:00
Lin Yuan
8b4aac4e56 feat: add cpp-coding-standards skill to skills/ and update README 2026-02-14 20:26:33 +00:00
Lin Yuan
08f60355d4 feat: add cpp-coding-standards skill based on C++ Core Guidelines
Comprehensive coding standards for modern C++ (C++17/20/23) derived
from isocpp.github.io/CppCoreGuidelines. Covers philosophy, functions,
classes, resource management, expressions, error handling, immutability,
concurrency, templates, standard library, enumerations, naming, and
performance with DO/DON'T examples and a pre-commit checklist.
2026-02-14 20:20:27 +00:00
黄飞虹
1f74889dbf fix: correct TIMESTAMP environment variable syntax in observe.sh
The inline environment variable syntax `TIMESTAMP="$timestamp" echo ...` 
does not work correctly because:
1. The pipe creates a subshell that doesn't inherit the variable
2. The environment variable is set for echo, not for the piped python

Fixed by using `export` and separating the commands:
- export TIMESTAMP="$timestamp"
- echo "$INPUT_JSON" | python3 -c "..."

This ensures the TIMESTAMP variable is available to the python subprocess.

Fixes #227

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 21:25:50 +08:00
Tatsuya Shimomoto
82d751556c feat(skills): add regex-vs-llm-structured-text skill
Decision framework and hybrid pipeline for choosing between regex
and LLM when parsing structured text.
2026-02-14 12:33:03 +09:00
Tatsuya Shimomoto
3847cc0e0d feat(skills): add content-hash-cache-pattern skill
SHA-256 content-hash based file caching with service layer
separation for expensive processing pipelines.
2026-02-14 12:31:30 +09:00
Tatsuya Shimomoto
94eaaad238 feat(skills): add swift-actor-persistence skill
Thread-safe data persistence patterns using Swift actors with
in-memory cache and file-backed storage.
2026-02-14 12:30:22 +09:00
Tatsuya Shimomoto
ab5be936e9 feat(skills): add swift-protocol-di-testing skill
Protocol-based dependency injection patterns for testable Swift code
with Swift Testing framework examples.
2026-02-14 12:18:48 +09:00
Tatsuya Shimomoto
219bd1ff88 feat(skills): add cost-aware-llm-pipeline skill
Cost optimization patterns for LLM API usage combining model routing,
budget tracking, retry logic, and prompt caching.
2026-02-14 12:16:05 +09:00
Pangerkumzuk Longkumer
9db98673d0 Merge branch 'affaan-m:main' into main 2026-02-09 17:12:53 +05:30
Pangerkumzuk Longkumer
fab2e05ae7 Merge branch 'affaan-m:main' into main 2026-02-02 10:53:41 +05:30
Graceme Kamei
8d65c6d429 Merge branch 'affaan-m:main' into main 2026-01-30 19:32:07 +05:30
Panger Lkr
9b2233b5bc Merge branch 'affaan-m:main' into main 2026-01-27 10:15:34 +05:30
Panger Lkr
5a26daf392 Merge pull request #1 from pangerlkr/pangerlkr-patch-1
feat: add cloud infrastructure security skill
2026-01-23 23:14:43 +05:30
Panger Lkr
438d082e30 feat: add cloud infrastructure security skill
Add comprehensive cloud and infrastructure security skill covering:
- IAM & access control (least privilege, MFA)
- Secrets management & rotation
- Network security (VPC, firewalls)
- Logging & monitoring setup
- CI/CD pipeline security
- Cloudflare/CDN security
- Backup & disaster recovery
- Pre-deployment checklist

Complements existing security-review skill with cloud-specific guidance.
2026-01-23 22:50:59 +05:30
43 changed files with 4017 additions and 351 deletions


@@ -8,7 +8,7 @@ Pre-translated configurations for [Cursor IDE](https://cursor.com), part of the
|----------|-------|-------------|
| Rules | 27 | Coding standards, security, testing, patterns (common + TypeScript/Python/Go) |
| Agents | 13 | Specialized AI agents (planner, architect, code-reviewer, tdd-guide, etc.) |
-| Skills | 37 | Agent skills for backend, frontend, security, TDD, and more |
+| Skills | 43 | Agent skills for backend, frontend, security, TDD, and more |
| Commands | 31 | Slash commands for planning, reviewing, testing, and deployment |
| MCP Config | 1 | Pre-configured MCP servers (GitHub, Supabase, Vercel, Railway, etc.) |


@@ -12,7 +12,7 @@ alwaysApply: true
- Pair programming and code generation
- Worker agents in multi-agent systems
-**Sonnet 4.5** (Best coding model):
+**Sonnet 4.6** (Best coding model):
- Main development work
- Orchestrating multi-agent workflows
- Complex coding tasks


@@ -0,0 +1,722 @@
---
name: cpp-coding-standards
description: C++ coding standards based on the C++ Core Guidelines (isocpp.github.io). Use when writing, reviewing, or refactoring C++ code to enforce modern, safe, and idiomatic practices.
---
# C++ Coding Standards (C++ Core Guidelines)
Comprehensive coding standards for modern C++ (C++17/20/23) derived from the [C++ Core Guidelines](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines). Enforces type safety, resource safety, immutability, and clarity.
## When to Use
- Writing new C++ code (classes, functions, templates)
- Reviewing or refactoring existing C++ code
- Making architectural decisions in C++ projects
- Enforcing consistent style across a C++ codebase
- Choosing between language features (e.g., `enum` vs `enum class`, raw pointer vs smart pointer)
### When NOT to Use
- Non-C++ projects
- Legacy C codebases that cannot adopt modern C++ features
- Embedded/bare-metal contexts where specific guidelines conflict with hardware constraints (adapt selectively)
## Cross-Cutting Principles
These themes recur across the entire guidelines and form the foundation:
1. **RAII everywhere** (P.8, R.1, E.6, CP.20): Bind resource lifetime to object lifetime
2. **Immutability by default** (P.10, Con.1-5, ES.25): Start with `const`/`constexpr`; mutability is the exception
3. **Type safety** (P.4, I.4, ES.46-49, Enum.3): Use the type system to prevent errors at compile time
4. **Express intent** (P.3, F.1, NL.1-2, T.10): Names, types, and concepts should communicate purpose
5. **Minimize complexity** (F.2-3, ES.5, Per.4-5): Simple code is correct code
6. **Value semantics over pointer semantics** (C.10, R.3-5, F.20, CP.31): Prefer returning by value and scoped objects
## Philosophy & Interfaces (P.*, I.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **P.1** | Express ideas directly in code |
| **P.3** | Express intent |
| **P.4** | Ideally, a program should be statically type safe |
| **P.5** | Prefer compile-time checking to run-time checking |
| **P.8** | Don't leak any resources |
| **P.10** | Prefer immutable data to mutable data |
| **I.1** | Make interfaces explicit |
| **I.2** | Avoid non-const global variables |
| **I.4** | Make interfaces precisely and strongly typed |
| **I.11** | Never transfer ownership by a raw pointer or reference |
| **I.23** | Keep the number of function arguments low |
### DO
```cpp
// P.10 + I.4: Immutable, strongly typed interface
struct Temperature {
    double kelvin;
};
Temperature boil(const Temperature& water);
```
### DON'T
```cpp
// Weak interface: unclear ownership, unclear units
double boil(double* temp);
// Non-const global variable
int g_counter = 0; // I.2 violation
```
## Functions (F.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **F.1** | Package meaningful operations as carefully named functions |
| **F.2** | A function should perform a single logical operation |
| **F.3** | Keep functions short and simple |
| **F.4** | If a function might be evaluated at compile time, declare it `constexpr` |
| **F.6** | If your function must not throw, declare it `noexcept` |
| **F.8** | Prefer pure functions |
| **F.16** | For "in" parameters, pass cheaply-copied types by value and others by `const&` |
| **F.20** | For "out" values, prefer return values to output parameters |
| **F.21** | To return multiple "out" values, prefer returning a struct |
| **F.43** | Never return a pointer or reference to a local object |
### Parameter Passing
```cpp
// F.16: Cheap types by value, others by const&
void print(int x); // cheap: by value
void analyze(const std::string& data); // expensive: by const&
void transform(std::string s); // sink: by value (will move)
// F.20 + F.21: Return values, not output parameters
struct ParseResult {
    std::string token;
    int position;
};
ParseResult parse(std::string_view input); // GOOD: return struct

// BAD: output parameters
void parse(std::string_view input,
           std::string& token, int& pos); // avoid this
```
### Pure Functions and constexpr
```cpp
// F.4 + F.8: Pure, constexpr where possible
constexpr int factorial(int n) noexcept {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}
static_assert(factorial(5) == 120);
```
### Anti-Patterns
- Returning `T&&` from functions (F.45)
- Using `va_arg` / C-style variadics (F.55)
- Capturing by reference in lambdas passed to other threads (F.53)
- Returning `const T` which inhibits move semantics (F.49)
## Classes & Class Hierarchies (C.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **C.2** | Use `class` if invariant exists; `struct` if data members vary independently |
| **C.9** | Minimize exposure of members |
| **C.20** | If you can avoid defining default operations, do (Rule of Zero) |
| **C.21** | If you define or `=delete` any copy/move/destructor, handle them all (Rule of Five) |
| **C.35** | Base class destructor: public virtual or protected non-virtual |
| **C.41** | A constructor should create a fully initialized object |
| **C.46** | Declare single-argument constructors `explicit` |
| **C.67** | A polymorphic class should suppress public copy/move |
| **C.128** | Virtual functions: specify exactly one of `virtual`, `override`, or `final` |
### Rule of Zero
```cpp
// C.20: Let the compiler generate special members
struct Employee {
    std::string name;
    std::string department;
    int id;
    // No destructor, copy/move constructors, or assignment operators needed
};
```
### Rule of Five
```cpp
// C.21: If you must manage a resource, define all five
class Buffer {
public:
    explicit Buffer(std::size_t size)
        : data_(std::make_unique<char[]>(size)), size_(size) {}
    ~Buffer() = default;
    Buffer(const Buffer& other)
        : data_(std::make_unique<char[]>(other.size_)), size_(other.size_) {
        std::copy_n(other.data_.get(), size_, data_.get());
    }
    Buffer& operator=(const Buffer& other) {
        if (this != &other) {
            auto new_data = std::make_unique<char[]>(other.size_);
            std::copy_n(other.data_.get(), other.size_, new_data.get());
            data_ = std::move(new_data);
            size_ = other.size_;
        }
        return *this;
    }
    Buffer(Buffer&&) noexcept = default;
    Buffer& operator=(Buffer&&) noexcept = default;
private:
    std::unique_ptr<char[]> data_;
    std::size_t size_;
};
```
### Class Hierarchy
```cpp
// C.35 + C.128: Virtual destructor, use override
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0; // C.121: pure interface
};
class Circle : public Shape {
public:
    explicit Circle(double r) : radius_(r) {}
    double area() const override { return 3.14159 * radius_ * radius_; }
private:
    double radius_;
};
```
### Anti-Patterns
- Calling virtual functions in constructors/destructors (C.82)
- Using `memset`/`memcpy` on non-trivial types (C.90)
- Providing different default arguments for virtual function and overrider (C.140)
- Making data members `const` or references, which suppresses move/copy (C.12)
## Resource Management (R.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **R.1** | Manage resources automatically using RAII |
| **R.3** | A raw pointer (`T*`) is non-owning |
| **R.5** | Prefer scoped objects; don't heap-allocate unnecessarily |
| **R.10** | Avoid `malloc()`/`free()` |
| **R.11** | Avoid calling `new` and `delete` explicitly |
| **R.20** | Use `unique_ptr` or `shared_ptr` to represent ownership |
| **R.21** | Prefer `unique_ptr` over `shared_ptr` unless sharing ownership |
| **R.22** | Use `make_shared()` to make `shared_ptr`s |
### Smart Pointer Usage
```cpp
// R.11 + R.20 + R.21: RAII with smart pointers
auto widget = std::make_unique<Widget>("config"); // unique ownership
auto cache = std::make_shared<Cache>(1024); // shared ownership
// R.3: Raw pointer = non-owning observer
void render(const Widget* w) { // does NOT own w
    if (w) w->draw();
}
render(widget.get());
```
### RAII Pattern
```cpp
// R.1: Resource acquisition is initialization
class FileHandle {
public:
    explicit FileHandle(const std::string& path)
        : handle_(std::fopen(path.c_str(), "r")) {
        if (!handle_) throw std::runtime_error("Failed to open: " + path);
    }
    ~FileHandle() {
        if (handle_) std::fclose(handle_);
    }
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;
    FileHandle(FileHandle&& other) noexcept
        : handle_(std::exchange(other.handle_, nullptr)) {}
    FileHandle& operator=(FileHandle&& other) noexcept {
        if (this != &other) {
            if (handle_) std::fclose(handle_);
            handle_ = std::exchange(other.handle_, nullptr);
        }
        return *this;
    }
private:
    std::FILE* handle_;
};
```
### Anti-Patterns
- Naked `new`/`delete` (R.11)
- `malloc()`/`free()` in C++ code (R.10)
- Multiple resource allocations in a single expression (R.13 -- exception safety hazard)
- `shared_ptr` where `unique_ptr` suffices (R.21)
## Expressions & Statements (ES.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **ES.5** | Keep scopes small |
| **ES.20** | Always initialize an object |
| **ES.23** | Prefer `{}` initializer syntax |
| **ES.25** | Declare objects `const` or `constexpr` unless modification is intended |
| **ES.28** | Use lambdas for complex initialization of `const` variables |
| **ES.45** | Avoid magic constants; use symbolic constants |
| **ES.46** | Avoid narrowing/lossy arithmetic conversions |
| **ES.47** | Use `nullptr` rather than `0` or `NULL` |
| **ES.48** | Avoid casts |
| **ES.50** | Don't cast away `const` |
### Initialization
```cpp
// ES.20 + ES.23 + ES.25: Always initialize, prefer {}, default to const
const int max_retries{3};
const std::string name{"widget"};
const std::vector<int> primes{2, 3, 5, 7, 11};
// ES.28: Lambda for complex const initialization
const auto config = [&] {
    Config c;
    c.timeout = std::chrono::seconds{30};
    c.retries = max_retries;
    c.verbose = debug_mode;
    return c;
}();
```
### Anti-Patterns
- Uninitialized variables (ES.20)
- Using `0` or `NULL` as pointer (ES.47 -- use `nullptr`)
- C-style casts (ES.48 -- use `static_cast`, `const_cast`, etc.)
- Casting away `const` (ES.50)
- Magic numbers without named constants (ES.45)
- Mixing signed and unsigned arithmetic (ES.100)
- Reusing names in nested scopes (ES.12)
## Error Handling (E.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **E.1** | Develop an error-handling strategy early in a design |
| **E.2** | Throw an exception to signal that a function can't perform its assigned task |
| **E.6** | Use RAII to prevent leaks |
| **E.12** | Use `noexcept` when throwing is impossible or unacceptable |
| **E.14** | Use purpose-designed user-defined types as exceptions |
| **E.15** | Throw by value, catch by reference |
| **E.16** | Destructors, deallocation, and swap must never fail |
| **E.17** | Don't try to catch every exception in every function |
### Exception Hierarchy
```cpp
// E.14 + E.15: Custom exception types, throw by value, catch by reference
class AppError : public std::runtime_error {
public:
    using std::runtime_error::runtime_error;
};
class NetworkError : public AppError {
public:
    NetworkError(const std::string& msg, int code)
        : AppError(msg), status_code(code) {}
    int status_code;
};
void fetch_data(const std::string& url) {
    // E.2: Throw to signal failure
    throw NetworkError("connection refused", 503);
}
void run() {
    try {
        fetch_data("https://api.example.com");
    } catch (const NetworkError& e) {
        log_error(e.what(), e.status_code);
    } catch (const AppError& e) {
        log_error(e.what());
    }
    // E.17: Don't catch everything here -- let unexpected errors propagate
}
```
### Anti-Patterns
- Throwing built-in types like `int` or string literals (E.14)
- Catching by value (slicing risk) (E.15)
- Empty catch blocks that silently swallow errors
- Using exceptions for flow control (E.3)
- Error handling based on global state like `errno` (E.28)
## Constants & Immutability (Con.*)
### All Rules
| Rule | Summary |
|------|---------|
| **Con.1** | By default, make objects immutable |
| **Con.2** | By default, make member functions `const` |
| **Con.3** | By default, pass pointers and references to `const` |
| **Con.4** | Use `const` for values that don't change after construction |
| **Con.5** | Use `constexpr` for values computable at compile time |
```cpp
// Con.1 through Con.5: Immutability by default
class Sensor {
public:
    explicit Sensor(std::string id) : id_(std::move(id)) {}
    // Con.2: const member functions by default
    const std::string& id() const { return id_; }
    double last_reading() const { return reading_; }
    // Only non-const when mutation is required
    void record(double value) { reading_ = value; }
private:
    const std::string id_; // Con.4: never changes after construction
    double reading_{0.0};
};
// Con.3: Pass by const reference
void display(const Sensor& s) {
    std::cout << s.id() << ": " << s.last_reading() << '\n';
}
// Con.5: Compile-time constants
constexpr double PI = 3.14159265358979;
constexpr int MAX_SENSORS = 256;
```
## Concurrency & Parallelism (CP.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **CP.2** | Avoid data races |
| **CP.3** | Minimize explicit sharing of writable data |
| **CP.4** | Think in terms of tasks, rather than threads |
| **CP.8** | Don't use `volatile` for synchronization |
| **CP.20** | Use RAII, never plain `lock()`/`unlock()` |
| **CP.21** | Use `std::scoped_lock` to acquire multiple mutexes |
| **CP.22** | Never call unknown code while holding a lock |
| **CP.42** | Don't wait without a condition |
| **CP.44** | Remember to name your `lock_guard`s and `unique_lock`s |
| **CP.100** | Don't use lock-free programming unless you absolutely have to |
### Safe Locking
```cpp
// CP.20 + CP.44: RAII locks, always named
class ThreadSafeQueue {
public:
    void push(int value) {
        std::lock_guard<std::mutex> lock(mutex_); // CP.44: named!
        queue_.push(value);
        cv_.notify_one();
    }
    int pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        // CP.42: Always wait with a condition
        cv_.wait(lock, [this] { return !queue_.empty(); });
        const int value = queue_.front();
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_; // CP.50: mutex with its data
    std::condition_variable cv_;
    std::queue<int> queue_;
};
```
### Multiple Mutexes
```cpp
// CP.21: std::scoped_lock for multiple mutexes (deadlock-free)
void transfer(Account& from, Account& to, double amount) {
    std::scoped_lock lock(from.mutex_, to.mutex_);
    from.balance_ -= amount;
    to.balance_ += amount;
}
```
### Anti-Patterns
- `volatile` for synchronization (CP.8 -- it's for hardware I/O only)
- Detaching threads (CP.26 -- lifetime management becomes nearly impossible)
- Unnamed lock guards: `std::lock_guard<std::mutex>(m);` destroys immediately (CP.44)
- Holding locks while calling callbacks (CP.22 -- deadlock risk)
- Lock-free programming without deep expertise (CP.100)
## Templates & Generic Programming (T.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **T.1** | Use templates to raise the level of abstraction |
| **T.2** | Use templates to express algorithms for many argument types |
| **T.10** | Specify concepts for all template arguments |
| **T.11** | Use standard concepts whenever possible |
| **T.13** | Prefer shorthand notation for simple concepts |
| **T.43** | Prefer `using` over `typedef` |
| **T.120** | Use template metaprogramming only when you really need to |
| **T.144** | Don't specialize function templates (overload instead) |
### Concepts (C++20)
```cpp
#include <concepts>
// T.10 + T.11: Constrain templates with standard concepts
template<std::integral T>
T gcd(T a, T b) {
while (b != 0) {
a = std::exchange(b, a % b);
}
return a;
}
// T.13: Shorthand concept syntax
void sort(std::ranges::random_access_range auto& range) {
std::ranges::sort(range);
}
// Custom concept for domain-specific constraints
template<typename T>
concept Serializable = requires(const T& t) {
{ t.serialize() } -> std::convertible_to<std::string>;
};
template<Serializable T>
void save(const T& obj, const std::string& path);
```
### Anti-Patterns
- Unconstrained templates in visible namespaces (T.47)
- Specializing function templates instead of overloading (T.144)
- Template metaprogramming where `constexpr` suffices (T.120)
- `typedef` instead of `using` (T.43)
## Standard Library (SL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SL.1** | Use libraries wherever possible |
| **SL.2** | Prefer the standard library to other libraries |
| **SL.con.1** | Prefer `std::array` or `std::vector` over C arrays |
| **SL.con.2** | Prefer `std::vector` by default |
| **SL.str.1** | Use `std::string` to own character sequences |
| **SL.str.2** | Use `std::string_view` to refer to character sequences |
| **SL.io.50** | Avoid `endl` (use `'\n'` -- `endl` forces a flush) |
```cpp
// SL.con.1 + SL.con.2: Prefer vector/array over C arrays
const std::array<int, 4> fixed_data{1, 2, 3, 4};
std::vector<std::string> dynamic_data;
// SL.str.1 + SL.str.2: string owns, string_view observes
std::string build_greeting(std::string_view name) {
return "Hello, " + std::string(name) + "!";
}
// SL.io.50: Use '\n' not endl
std::cout << "result: " << value << '\n';
```
## Enumerations (Enum.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Enum.1** | Prefer enumerations over macros |
| **Enum.3** | Prefer `enum class` over plain `enum` |
| **Enum.5** | Don't use ALL_CAPS for enumerators |
| **Enum.6** | Avoid unnamed enumerations |
```cpp
// Enum.3 + Enum.5: Scoped enum, no ALL_CAPS
enum class Color { red, green, blue };
enum class LogLevel { debug, info, warning, error };
// BAD: plain enum leaks names, ALL_CAPS clashes with macros
enum { RED, GREEN, BLUE }; // Enum.3 + Enum.5 + Enum.6 violation
#define MAX_SIZE 100 // Enum.1 violation -- use constexpr
```
## Source Files & Naming (SF.*, NL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SF.1** | Use `.cpp` for code files and `.h` for interface files |
| **SF.7** | Don't write `using namespace` at global scope in a header |
| **SF.8** | Use `#include` guards for all `.h` files |
| **SF.11** | Header files should be self-contained |
| **NL.5** | Avoid encoding type information in names (no Hungarian notation) |
| **NL.8** | Use a consistent naming style |
| **NL.9** | Use ALL_CAPS for macro names only |
| **NL.10** | Prefer `underscore_style` names |
### Header Guard
```cpp
// SF.8: Include guard (or #pragma once)
#ifndef PROJECT_MODULE_WIDGET_H
#define PROJECT_MODULE_WIDGET_H
// SF.11: Self-contained -- include everything this header needs
#include <string>
#include <vector>
namespace project::module {
class Widget {
public:
explicit Widget(std::string name);
const std::string& name() const;
private:
std::string name_;
};
} // namespace project::module
#endif // PROJECT_MODULE_WIDGET_H
```
### Naming Conventions
```cpp
// NL.8 + NL.10: Consistent underscore_style
namespace my_project {
constexpr int max_buffer_size = 4096; // NL.9: not ALL_CAPS (it's not a macro)
class tcp_connection { // underscore_style class
public:
void send_message(std::string_view msg);
bool is_connected() const;
private:
std::string host_; // trailing underscore for members
int port_;
};
} // namespace my_project
```
### Anti-Patterns
- `using namespace std;` in a header at global scope (SF.7)
- Headers that depend on inclusion order (SF.10, SF.11)
- Hungarian notation like `strName`, `iCount` (NL.5)
- ALL_CAPS for anything other than macros (NL.9)
## Performance (Per.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Per.1** | Don't optimize without reason |
| **Per.2** | Don't optimize prematurely |
| **Per.6** | Don't make claims about performance without measurements |
| **Per.7** | Design to enable optimization |
| **Per.10** | Rely on the static type system |
| **Per.11** | Move computation from run time to compile time |
| **Per.19** | Access memory predictably |
### Guidelines
```cpp
// Per.11: Compile-time computation where possible
constexpr auto lookup_table = [] {
std::array<int, 256> table{};
for (int i = 0; i < 256; ++i) {
table[i] = i * i;
}
return table;
}();
// Per.19: Prefer contiguous data for cache-friendliness
std::vector<Point> points; // GOOD: contiguous
std::vector<std::unique_ptr<Point>> indirect_points; // BAD: pointer chasing
```
### Anti-Patterns
- Optimizing without profiling data (Per.1, Per.6)
- Choosing "clever" low-level code over clear abstractions (Per.4, Per.5)
- Ignoring data layout and cache behavior (Per.19)
## Quick Reference Checklist
Before marking C++ work complete:
- [ ] No raw `new`/`delete` -- use smart pointers or RAII (R.11)
- [ ] Objects initialized at declaration (ES.20)
- [ ] Variables are `const`/`constexpr` by default (Con.1, ES.25)
- [ ] Member functions are `const` where possible (Con.2)
- [ ] `enum class` instead of plain `enum` (Enum.3)
- [ ] `nullptr` instead of `0`/`NULL` (ES.47)
- [ ] No narrowing conversions (ES.46)
- [ ] No C-style casts (ES.48)
- [ ] Single-argument constructors are `explicit` (C.46)
- [ ] Rule of Zero or Rule of Five applied (C.20, C.21)
- [ ] Base class destructors are public virtual or protected non-virtual (C.35)
- [ ] Templates are constrained with concepts (T.10)
- [ ] No `using namespace` in headers at global scope (SF.7)
- [ ] Headers have include guards and are self-contained (SF.8, SF.11)
- [ ] Locks use RAII (`scoped_lock`/`lock_guard`) (CP.20)
- [ ] Exceptions are custom types, thrown by value, caught by reference (E.14, E.15)
- [ ] `'\n'` instead of `std::endl` (SL.io.50)
- [ ] No magic numbers (ES.45)

@@ -0,0 +1,18 @@
steps:
- name: Setup Go environment
uses: actions/setup-go@v6.2.0
with:
# The Go version to download (if necessary) and use. Supports semver spec and ranges. Be sure to enclose this option in single quotation marks.
go-version: # optional
# Path to the go.mod, go.work, .go-version, or .tool-versions file.
go-version-file: # optional
# Set this option to true if you want the action to always check for the latest available version that satisfies the version spec
check-latest: # optional
# Used to pull Go distributions from go-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
token: # optional, default is ${{ github.server_url == 'https://github.com' && github.token || '' }}
# Used to specify whether caching is needed. Set to true, if you'd like to enable caching.
cache: # optional, default is true
# Used to specify the path to a dependency file - go.sum
cache-dependency-path: # optional
# Target architecture for Go to use. Examples: x86, x64. Will use system architecture by default.
architecture: # optional

@@ -1,34 +0,0 @@
name: AgentShield Security Scan
on:
push:
branches: [main]
pull_request:
branches: [main]
# Prevent duplicate runs
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# Minimal permissions
permissions:
contents: read
jobs:
agentshield:
name: AgentShield Scan
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Run AgentShield Security Scan
uses: affaan-m/agentshield@v1
with:
path: '.'
min-severity: 'medium'
format: 'terminal'
fail-on-findings: 'false'

CLAUDE.md Normal file
@@ -0,0 +1,60 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a **Claude Code plugin** - a collection of production-ready agents, skills, hooks, commands, rules, and MCP configurations. The project provides battle-tested workflows for software development using Claude Code.
## Running Tests
```bash
# Run all tests
node tests/run-all.js
# Run individual test files
node tests/lib/utils.test.js
node tests/lib/package-manager.test.js
node tests/hooks/hooks.test.js
```
## Architecture
The project is organized into several core components:
- **agents/** - Specialized subagents for delegation (planner, code-reviewer, tdd-guide, etc.)
- **skills/** - Workflow definitions and domain knowledge (coding standards, patterns, testing)
- **commands/** - Slash commands invoked by users (/tdd, /plan, /e2e, etc.)
- **hooks/** - Trigger-based automations (session persistence, pre/post-tool hooks)
- **rules/** - Always-follow guidelines (security, coding style, testing requirements)
- **mcp-configs/** - MCP server configurations for external integrations
- **scripts/** - Cross-platform Node.js utilities for hooks and setup
- **tests/** - Test suite for scripts and utilities
## Key Commands
- `/tdd` - Test-driven development workflow
- `/plan` - Implementation planning
- `/e2e` - Generate and run E2E tests
- `/code-review` - Quality review
- `/build-fix` - Fix build errors
- `/learn` - Extract patterns from sessions
- `/skill-create` - Generate skills from git history
## Development Notes
- Package manager detection: npm, pnpm, yarn, bun (configurable via `CLAUDE_PACKAGE_MANAGER` env var or project config)
- Cross-platform: Windows, macOS, Linux support via Node.js scripts
- Agent format: Markdown with YAML frontmatter (name, description, tools, model)
- Skill format: Markdown with clear sections for when to use, how it works, examples
- Hook format: JSON with matcher conditions and command/notification hooks
## Contributing
Follow the formats in CONTRIBUTING.md:
- Agents: Markdown with frontmatter (name, description, tools, model)
- Skills: Clear sections (When to Use, How It Works, Examples)
- Commands: Markdown with description frontmatter
- Hooks: JSON with matcher and hooks array
File naming: lowercase with hyphens (e.g., `python-reviewer.md`, `tdd-workflow.md`)

@@ -13,7 +13,7 @@
![Java](https://img.shields.io/badge/-Java-ED8B00?logo=openjdk&logoColor=white)
![Markdown](https://img.shields.io/badge/-Markdown-000000?logo=markdown&logoColor=white)
> **42K+ stars** | **5K+ forks** | **24 contributors** | **6 languages supported** | **Anthropic Hackathon Winner**
> **50K+ stars** | **6K+ forks** | **30 contributors** | **6 languages supported** | **Anthropic Hackathon Winner**
---
@@ -143,7 +143,7 @@ For manual install instructions see the README in the `rules/` folder.
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 13 agents, 37 skills, and 31 commands.
**That's it!** You now have access to 13 agents, 44 skills, and 32 commands.
---
@@ -222,6 +222,7 @@ everything-claude-code/
| |-- verification-loop/ # Continuous verification (Longform Guide)
| |-- golang-patterns/ # Go idioms and best practices
| |-- golang-testing/ # Go testing patterns, TDD, benchmarks
| |-- cpp-coding-standards/ # C++ coding standards from C++ Core Guidelines (NEW)
| |-- cpp-testing/ # C++ testing with GoogleTest, CMake/CTest (NEW)
| |-- django-patterns/ # Django patterns, models, views (NEW)
| |-- django-security/ # Django security best practices (NEW)
@@ -245,6 +246,12 @@ everything-claude-code/
| |-- deployment-patterns/ # CI/CD, Docker, health checks, rollbacks (NEW)
| |-- docker-patterns/ # Docker Compose, networking, volumes, container security (NEW)
| |-- e2e-testing/ # Playwright E2E patterns and Page Object Model (NEW)
| |-- content-hash-cache-pattern/ # SHA-256 content hash caching for file processing (NEW)
| |-- cost-aware-llm-pipeline/ # LLM cost optimization, model routing, budget tracking (NEW)
| |-- regex-vs-llm-structured-text/ # Decision framework: regex vs LLM for text parsing (NEW)
| |-- swift-actor-persistence/ # Thread-safe Swift data persistence with actors (NEW)
| |-- swift-protocol-di-testing/ # Protocol-based DI for testable Swift code (NEW)
| |-- search-first/ # Research-before-coding workflow (NEW)
|
|-- commands/ # Slash commands for quick execution
| |-- tdd.md # /tdd - Test-driven development
@@ -254,6 +261,7 @@ everything-claude-code/
| |-- build-fix.md # /build-fix - Fix build errors
| |-- refactor-clean.md # /refactor-clean - Dead code removal
| |-- learn.md # /learn - Extract patterns mid-session (Longform Guide)
| |-- learn-eval.md # /learn-eval - Extract, evaluate, and save patterns (NEW)
| |-- checkpoint.md # /checkpoint - Save verification state (Longform Guide)
| |-- verify.md # /verify - Run verification loop (Longform Guide)
| |-- setup-pm.md # /setup-pm - Configure package manager
@@ -486,6 +494,7 @@ This gives you instant access to all commands, agents, skills, and hooks.
> git clone https://github.com/affaan-m/everything-claude-code.git
>
> # Option A: User-level rules (applies to all projects)
> mkdir -p ~/.claude/rules
> cp -r everything-claude-code/rules/common/* ~/.claude/rules/
> cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # pick your stack
> cp -r everything-claude-code/rules/python/* ~/.claude/rules/
@@ -691,11 +700,12 @@ Each component is fully independent.
</details>
<details>
<summary><b>Does this work with Cursor / OpenCode?</b></summary>
<summary><b>Does this work with Cursor / OpenCode / Codex?</b></summary>
Yes. ECC is cross-platform:
- **Cursor**: Pre-translated configs in `.cursor/`. See [Cursor IDE Support](#cursor-ide-support).
- **OpenCode**: Full plugin support in `.opencode/`. See [OpenCode Support](#-opencode-support).
- **Codex**: First-class support with adapter drift guards and SessionStart fallback. See PR [#257](https://github.com/affaan-m/everything-claude-code/pull/257).
- **Claude Code**: Native — this is the primary target.
</details>
@@ -800,8 +810,8 @@ The configuration is automatically detected from `.opencode/opencode.json`.
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | ✅ 13 agents | ✅ 12 agents | **Claude Code leads** |
| Commands | ✅ 31 commands | ✅ 24 commands | **Claude Code leads** |
| Skills | ✅ 37 skills | ✅ 16 skills | **Claude Code leads** |
| Commands | ✅ 32 commands | ✅ 24 commands | **Claude Code leads** |
| Skills | ✅ 44 skills | ✅ 16 skills | **Claude Code leads** |
| Hooks | ✅ 3 phases | ✅ 20+ events | **OpenCode has more!** |
| Rules | ✅ 8 rules | ✅ 8 rules | **Full parity** |
| MCP Servers | ✅ Full | ✅ Full | **Full parity** |
@@ -821,7 +831,7 @@ OpenCode's plugin system is MORE sophisticated than Claude Code with 20+ event t
**Additional OpenCode events**: `file.edited`, `file.watcher.updated`, `message.updated`, `lsp.client.diagnostics`, `tui.toast.show`, and more.
### Available Commands (31)
### Available Commands (32)
| Command | Description |
|---------|-------------|
@@ -855,6 +865,7 @@ OpenCode's plugin system is MORE sophisticated than Claude Code with 20+ event t
| `/instinct-import` | Import instincts |
| `/instinct-export` | Export instincts |
| `/evolve` | Cluster instincts into skills |
| `/learn-eval` | Extract and evaluate patterns before saving |
| `/setup-pm` | Configure package manager |
### Plugin Installation

@@ -95,7 +95,7 @@ cp -r everything-claude-code/rules/* ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**完成!** 你现在可以使用 13 个代理、37 个技能和 31 个命令。
**完成!** 你现在可以使用 13 个代理、43 个技能和 31 个命令。
---

commands/learn-eval.md Normal file
@@ -0,0 +1,91 @@
---
description: Extract reusable patterns from the session, self-evaluate quality before saving, and determine the right save location (Global vs Project).
---
# /learn-eval - Extract, Evaluate, then Save
Extends `/learn` with a quality gate and save-location decision before writing any skill file.
## What to Extract
Look for:
1. **Error Resolution Patterns** — root cause + fix + reusability
2. **Debugging Techniques** — non-obvious steps, tool combinations
3. **Workarounds** — library quirks, API limitations, version-specific fixes
4. **Project-Specific Patterns** — conventions, architecture decisions, integration patterns
## Process
1. Review the session for extractable patterns
2. Identify the most valuable/reusable insight
3. **Determine save location:**
- Ask: "Would this pattern be useful in a different project?"
- **Global** (`~/.claude/skills/learned/`): Generic patterns usable across 2+ projects (bash compatibility, LLM API behavior, debugging techniques, etc.)
- **Project** (`.claude/skills/learned/` in current project): Project-specific knowledge (quirks of a particular config file, project-specific architecture decisions, etc.)
- When in doubt, choose Global (moving Global → Project is easier than the reverse)
4. Draft the skill file using this format:
```markdown
---
name: pattern-name
description: "Under 130 characters"
user-invocable: false
origin: auto-extracted
---
# [Descriptive Pattern Name]
**Extracted:** [Date]
**Context:** [Brief description of when this applies]
## Problem
[What problem this solves - be specific]
## Solution
[The pattern/technique/workaround - with code examples]
## When to Use
[Trigger conditions]
```
5. **Self-evaluate before saving** using this rubric:
| Dimension | 1 | 3 | 5 |
|-----------|---|---|---|
| Specificity | Abstract principles only, no code examples | Representative code example present | Rich examples covering all usage patterns |
| Actionability | Unclear what to do | Main steps are understandable | Immediately actionable, edge cases covered |
| Scope Fit | Too broad or too narrow | Mostly appropriate, some boundary ambiguity | Name, trigger, and content perfectly aligned |
| Non-redundancy | Nearly identical to another skill | Some overlap but unique perspective exists | Completely unique value |
| Coverage | Covers only a fraction of the target task | Main cases covered, common variants missing | Main cases, edge cases, and pitfalls covered |
- Score each dimension 1–5
- If any dimension scores 1–2, improve the draft and re-score until all dimensions are ≥ 3
- Show the user the scores table and the final draft
6. Ask user to confirm:
- Show: proposed save path + scores table + final draft
- Wait for explicit confirmation before writing
7. Save to the determined location
## Output Format for Step 5 (scores table)
| Dimension | Score | Rationale |
|-----------|-------|-----------|
| Specificity | N/5 | ... |
| Actionability | N/5 | ... |
| Scope Fit | N/5 | ... |
| Non-redundancy | N/5 | ... |
| Coverage | N/5 | ... |
| **Total** | **N/25** | |
## Notes
- Don't extract trivial fixes (typos, simple syntax errors)
- Don't extract one-time issues (specific API outages, etc.)
- Focus on patterns that will save time in future sessions
- Keep skills focused — one pattern per skill
- If Coverage score is low, add related variants before saving

@@ -140,7 +140,7 @@ cp -r everything-claude-code/rules/golang/* ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**完了です!** これで13のエージェント、37のスキル、31のコマンドにアクセスできます。
**完了です!** これで13のエージェント、43のスキル、31のコマンドにアクセスできます。
---
@@ -454,6 +454,7 @@ Duplicate hooks file detected: ./hooks/hooks.json resolves to already-loaded fil
> git clone https://github.com/affaan-m/everything-claude-code.git
>
> # オプション Aユーザーレベルルールすべてのプロジェクトに適用
> mkdir -p ~/.claude/rules
> cp -r everything-claude-code/rules/common/* ~/.claude/rules/
> cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # スタックを選択
> cp -r everything-claude-code/rules/python/* ~/.claude/rules/

@@ -456,6 +456,7 @@ Duplicate hooks file detected: ./hooks/hooks.json resolves to already-loaded fil
> git clone https://github.com/affaan-m/everything-claude-code.git
>
> # 选项 A用户级规则适用于所有项目
> mkdir -p ~/.claude/rules
> cp -r everything-claude-code/rules/common/* ~/.claude/rules/
> cp -r everything-claude-code/rules/typescript/* ~/.claude/rules/ # 选择您的技术栈
> cp -r everything-claude-code/rules/python/* ~/.claude/rules/

@@ -13,7 +13,7 @@
"C": "Cyan - username, todos",
"T": "Gray - model name"
},
"output_example": "affoon:~/projects/myapp main* ctx:73% sonnet-4.5 14:30 todos:3",
"output_example": "affoon:~/projects/myapp main* ctx:73% sonnet-4.6 14:30 todos:3",
"usage": "Copy the statusLine object to your ~/.claude/settings.json"
}
}

@@ -37,7 +37,7 @@
"hooks": [
{
"type": "command",
"command": "node -e \"const fs=require('fs');let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const p=i.tool_input?.file_path||'';if(/\\.(md|txt)$/.test(p)&&!/(README|CLAUDE|AGENTS|CONTRIBUTING)\\.md$/.test(p)){console.error('[Hook] BLOCKED: Unnecessary documentation file creation');console.error('[Hook] File: '+p);console.error('[Hook] Use README.md for documentation instead');process.exit(2)}}catch{}console.log(d)})\""
"command": "node -e \"const fs=require('fs');let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{try{const i=JSON.parse(d);const p=i.tool_input?.file_path||'';if(/\\.(md|txt)$/.test(p)&&!/(README|CLAUDE|AGENTS|CONTRIBUTING)\\.md$/.test(p)&&!/\\.claude\\/plans\\//.test(p)){console.error('[Hook] BLOCKED: Unnecessary documentation file creation');console.error('[Hook] File: '+p);console.error('[Hook] Use README.md for documentation instead');process.exit(2)}}catch{}console.log(d)})\""
}
],
"description": "Block creation of random .md files - keeps docs consolidated"

package-lock.json generated
@@ -1,14 +1,25 @@
{
"name": "everything-claude-code",
"name": "ecc-universal",
"version": "1.4.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "ecc-universal",
"version": "1.4.1",
"hasInstallScript": true,
"license": "MIT",
"bin": {
"ecc-install": "install.sh"
},
"devDependencies": {
"@eslint/js": "^9.39.2",
"eslint": "^9.39.2",
"globals": "^17.1.0",
"markdownlint-cli": "^0.47.0"
},
"engines": {
"node": ">=18"
}
},
"node_modules/@eslint-community/eslint-utils": {
@@ -294,6 +305,7 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -599,6 +611,7 @@
"integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@@ -1930,6 +1943,7 @@
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},

@@ -7,7 +7,7 @@
- Pair programming and code generation
- Worker agents in multi-agent systems
**Sonnet 4.5** (Best coding model):
**Sonnet 4.6** (Best coding model):
- Main development work
- Orchestrating multi-agent workflows
- Complex coding tasks

@@ -0,0 +1,330 @@
#!/usr/bin/env node
/**
* scripts/codemaps/generate.ts
*
* Codemap Generator for everything-claude-code (ECC)
*
* Scans the current working directory and generates architectural
* codemap documentation under docs/CODEMAPS/ as specified by the
* doc-updater agent.
*
* Usage:
* npx tsx scripts/codemaps/generate.ts [srcDir]
*
* Output:
* docs/CODEMAPS/INDEX.md
* docs/CODEMAPS/frontend.md
* docs/CODEMAPS/backend.md
* docs/CODEMAPS/database.md
* docs/CODEMAPS/integrations.md
* docs/CODEMAPS/workers.md
*/
import fs from 'fs';
import path from 'path';
// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------
const ROOT = process.cwd();
const SRC_DIR = process.argv[2] ? path.resolve(process.argv[2]) : ROOT;
const OUTPUT_DIR = path.join(ROOT, 'docs', 'CODEMAPS');
const TODAY = new Date().toISOString().split('T')[0];
// Patterns used to classify files into codemap areas
const AREA_PATTERNS: Record<string, RegExp[]> = {
frontend: [
/\/(app|pages|components|hooks|contexts|ui|views|layouts|styles)\//i,
/\.(tsx|jsx|css|scss|sass|less|vue|svelte)$/i,
],
backend: [
/\/(api|routes|controllers|middleware|server|services|handlers)\//i,
/\.(route|controller|handler|middleware|service)\.(ts|js)$/i,
],
database: [
/\/(models|schemas|migrations|prisma|drizzle|db|database|repositories)\//i,
/\.(model|schema|migration|seed)\.(ts|js)$/i,
/prisma\/schema\.prisma$/,
/schema\.sql$/,
],
integrations: [
/\/(integrations?|third-party|external|plugins?|adapters?|connectors?)\//i,
/\.(integration|adapter|connector)\.(ts|js)$/i,
],
workers: [
/\/(workers?|jobs?|queues?|tasks?|cron|background)\//i,
/\.(worker|job|queue|task|cron)\.(ts|js)$/i,
],
};
// ---------------------------------------------------------------------------
// File System Helpers
// ---------------------------------------------------------------------------
/** Recursively collect all files under a directory, skipping common noise dirs. */
function walkDir(dir: string, results: string[] = []): string[] {
const SKIP = new Set([
'node_modules', '.git', '.next', '.nuxt', 'dist', 'build', 'out',
'.turbo', 'coverage', '.cache', '__pycache__', '.venv', 'venv',
]);
let entries: fs.Dirent[];
try {
entries = fs.readdirSync(dir, { withFileTypes: true });
} catch {
return results;
}
for (const entry of entries) {
if (SKIP.has(entry.name)) continue;
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory()) {
walkDir(fullPath, results);
} else if (entry.isFile()) {
results.push(fullPath);
}
}
return results;
}
/** Return path relative to ROOT, always using forward slashes. */
function rel(p: string): string {
return path.relative(ROOT, p).replace(/\\/g, '/');
}
// ---------------------------------------------------------------------------
// Analysis
// ---------------------------------------------------------------------------
interface AreaInfo {
name: string;
files: string[];
entryPoints: string[];
directories: string[];
}
function classifyFiles(allFiles: string[]): Record<string, AreaInfo> {
const areas: Record<string, AreaInfo> = {
frontend: { name: 'Frontend', files: [], entryPoints: [], directories: [] },
backend: { name: 'Backend/API', files: [], entryPoints: [], directories: [] },
database: { name: 'Database', files: [], entryPoints: [], directories: [] },
integrations: { name: 'Integrations', files: [], entryPoints: [], directories: [] },
workers: { name: 'Workers', files: [], entryPoints: [], directories: [] },
};
for (const file of allFiles) {
const relPath = rel(file);
for (const [area, patterns] of Object.entries(AREA_PATTERNS)) {
if (patterns.some((p) => p.test(relPath))) {
areas[area].files.push(relPath);
break;
}
}
}
// Derive unique directories and entry points per area
for (const area of Object.values(areas)) {
const dirs = new Set(area.files.map((f) => path.dirname(f)));
area.directories = [...dirs].sort();
area.entryPoints = area.files
.filter((f) => /index\.(ts|tsx|js|jsx)$/.test(f) || /main\.(ts|tsx|js|jsx)$/.test(f))
.slice(0, 10);
}
return areas;
}
/** Count lines in a file (returns 0 on error). */
function lineCount(p: string): number {
try {
const content = fs.readFileSync(p, 'utf8');
return content.split('\n').length;
} catch {
return 0;
}
}
/** Build a simple directory tree ASCII diagram (max 3 levels deep). */
function buildTree(dir: string, prefix = '', depth = 0): string {
if (depth > 2) return '';
const SKIP = new Set(['node_modules', '.git', 'dist', 'build', '.next', 'coverage']);
let entries: fs.Dirent[];
try {
entries = fs.readdirSync(dir, { withFileTypes: true });
} catch {
return '';
}
const dirs = entries.filter((e) => e.isDirectory() && !SKIP.has(e.name));
const files = entries.filter((e) => e.isFile());
let result = '';
const items = [...dirs, ...files];
items.forEach((entry, i) => {
const isLast = i === items.length - 1;
const connector = isLast ? '└── ' : '├── ';
result += `${prefix}${connector}${entry.name}\n`;
if (entry.isDirectory()) {
const newPrefix = prefix + (isLast ? ' ' : '│ ');
result += buildTree(path.join(dir, entry.name), newPrefix, depth + 1);
}
});
return result;
}
// ---------------------------------------------------------------------------
// Markdown Generators
// ---------------------------------------------------------------------------
function generateAreaDoc(areaKey: string, area: AreaInfo, allFiles: string[]): string {
const fileCount = area.files.length;
const totalLines = area.files.reduce((sum, f) => sum + lineCount(path.join(ROOT, f)), 0);
const entrySection = area.entryPoints.length > 0
? area.entryPoints.map((e) => `- \`${e}\``).join('\n')
: '- *(no index/main entry points detected)*';
const dirSection = area.directories.slice(0, 20)
.map((d) => `- \`${d}/\``)
.join('\n') || '- *(no dedicated directories detected)*';
const fileSection = area.files.slice(0, 30)
.map((f) => `| \`${f}\` | ${lineCount(path.join(ROOT, f))} |`)
.join('\n');
const moreFiles = area.files.length > 30
? `\n*...and ${area.files.length - 30} more files*`
: '';
return `# ${area.name} Codemap
**Last Updated:** ${TODAY}
**Total Files:** ${fileCount}
**Total Lines:** ${totalLines}
## Entry Points
${entrySection}
## Architecture
\`\`\`
${area.name} Directory Structure
${dirSection.replace(/- `/g, '').replace(/`/g, '')}
\`\`\`
## Key Modules
| File | Lines |
|------|-------|
${fileSection}${moreFiles}
## Data Flow
> Detected from file patterns. Review individual files for detailed data flow.
## External Dependencies
> Run \`npx jsdoc2md src/**/*.ts\` to extract JSDoc and identify external dependencies.
## Related Areas
- [INDEX](./INDEX.md) — Full overview
- [Frontend](./frontend.md)
- [Backend/API](./backend.md)
- [Database](./database.md)
- [Integrations](./integrations.md)
- [Workers](./workers.md)
`;
}
function generateIndex(areas: Record<string, AreaInfo>, allFiles: string[]): string {
const totalFiles = allFiles.length;
const areaRows = Object.entries(areas)
.map(([key, area]) => `| [${area.name}](./${key}.md) | ${area.files.length} files | ${area.directories.slice(0, 3).map((d) => `\`${d}\``).join(', ') || '—'} |`)
.join('\n');
const topLevelTree = buildTree(SRC_DIR);
return `# Codebase Overview — CODEMAPS Index
**Last Updated:** ${TODAY}
**Root:** \`${rel(SRC_DIR) || '.'}\`
**Total Files Scanned:** ${totalFiles}
## Areas
| Area | Size | Key Directories |
|------|------|-----------------|
${areaRows}
## Repository Structure
\`\`\`
${rel(SRC_DIR) || path.basename(SRC_DIR)}/
${topLevelTree}\`\`\`
## How to Regenerate
\`\`\`bash
npx tsx scripts/codemaps/generate.ts # Regenerate codemaps
npx madge --image graph.svg src/ # Dependency graph (requires graphviz)
npx jsdoc2md src/**/*.ts # Extract JSDoc
\`\`\`
## Related Documentation
- [Frontend](./frontend.md) — UI components, pages, hooks
- [Backend/API](./backend.md) — API routes, controllers, middleware
- [Database](./database.md) — Models, schemas, migrations
- [Integrations](./integrations.md) — External services & adapters
- [Workers](./workers.md) — Background jobs, queues, cron tasks
`;
}
// ---------------------------------------------------------------------------
// Main
// ---------------------------------------------------------------------------
function main(): void {
console.log(`[generate.ts] Scanning: ${SRC_DIR}`);
console.log(`[generate.ts] Output: ${OUTPUT_DIR}`);
// Ensure output directory exists
fs.mkdirSync(OUTPUT_DIR, { recursive: true });
// Walk the directory tree
const allFiles = walkDir(SRC_DIR);
console.log(`[generate.ts] Found ${allFiles.length} files`);
// Classify files into areas
const areas = classifyFiles(allFiles);
// Generate INDEX.md
const indexContent = generateIndex(areas, allFiles);
const indexPath = path.join(OUTPUT_DIR, 'INDEX.md');
fs.writeFileSync(indexPath, indexContent, 'utf8');
console.log(`[generate.ts] Written: ${rel(indexPath)}`);
// Generate per-area codemaps
for (const [key, area] of Object.entries(areas)) {
const content = generateAreaDoc(key, area, allFiles);
const outPath = path.join(OUTPUT_DIR, `${key}.md`);
fs.writeFileSync(outPath, content, 'utf8');
console.log(`[generate.ts] Written: ${rel(outPath)} (${area.files.length} files)`);
}
console.log('\n[generate.ts] Done! Codemaps written to docs/CODEMAPS/');
console.log('[generate.ts] Files generated:');
console.log(' docs/CODEMAPS/INDEX.md');
console.log(' docs/CODEMAPS/frontend.md');
console.log(' docs/CODEMAPS/backend.md');
console.log(' docs/CODEMAPS/database.md');
console.log(' docs/CODEMAPS/integrations.md');
console.log(' docs/CODEMAPS/workers.md');
}
main();

View File

@@ -32,7 +32,8 @@ process.stdin.setEncoding('utf8');
 process.stdin.on('data', chunk => {
   if (data.length < MAX_STDIN) {
-    data += chunk;
+    const remaining = MAX_STDIN - data.length;
+    data += chunk.substring(0, remaining);
   }
 });

View File

@@ -29,7 +29,8 @@ process.stdin.setEncoding('utf8');
 process.stdin.on('data', chunk => {
   if (stdinData.length < MAX_STDIN) {
-    stdinData += chunk;
+    const remaining = MAX_STDIN - stdinData.length;
+    stdinData += chunk.substring(0, remaining);
   }
 });

View File

@@ -17,7 +17,8 @@ process.stdin.setEncoding('utf8');
 process.stdin.on('data', chunk => {
   if (data.length < MAX_STDIN) {
-    data += chunk;
+    const remaining = MAX_STDIN - data.length;
+    data += chunk.substring(0, remaining);
   }
 });
@@ -28,7 +29,7 @@ process.stdin.on('end', () => {
   if (filePath && /\.(ts|tsx|js|jsx)$/.test(filePath)) {
     const content = readFile(filePath);
-    if (!content) { process.stdout.write(data); return; }
+    if (!content) { process.stdout.write(data); process.exit(0); }
     const lines = content.split('\n');
     const matches = [];

View File

@@ -17,7 +17,8 @@ process.stdin.setEncoding('utf8');
 process.stdin.on('data', chunk => {
   if (data.length < MAX_STDIN) {
-    data += chunk;
+    const remaining = MAX_STDIN - data.length;
+    data += chunk.substring(0, remaining);
   }
 });

View File

@@ -19,7 +19,8 @@ process.stdin.setEncoding("utf8");
 process.stdin.on("data", (chunk) => {
   if (data.length < MAX_STDIN) {
-    data += chunk;
+    const remaining = MAX_STDIN - data.length;
+    data += chunk.substring(0, remaining);
   }
 });

View File

@@ -109,7 +109,8 @@ process.stdin.setEncoding('utf8');
 process.stdin.on('data', chunk => {
   if (stdinData.length < MAX_STDIN) {
-    stdinData += chunk;
+    const remaining = MAX_STDIN - stdinData.length;
+    stdinData += chunk.substring(0, remaining);
   }
 });

View File

@@ -282,7 +282,7 @@ function setProjectPackageManager(pmName, projectDir = process.cwd()) {
 // Allowed characters in script/binary names: alphanumeric, dash, underscore, dot, slash, @
 // This prevents shell metacharacter injection while allowing scoped packages (e.g., @scope/pkg)
-const SAFE_NAME_REGEX = /^[@a-zA-Z0-9_.\/-]+$/;
+const SAFE_NAME_REGEX = /^[@a-zA-Z0-9_./-]+$/;
/**
* Get the command to run a script
@@ -316,7 +316,7 @@ function getRunCommand(script, options = {}) {
 // Allowed characters in arguments: alphanumeric, whitespace, dashes, dots, slashes,
 // equals, colons, commas, quotes, @. Rejects shell metacharacters like ; | & ` $ ( ) { } < > !
-const SAFE_ARGS_REGEX = /^[@a-zA-Z0-9\s_.\/:=,'"*+-]+$/;
+const SAFE_ARGS_REGEX = /^[@a-zA-Z0-9\s_./:=,'"*+-]+$/;
/**
* Get the command to execute a package binary
@@ -370,28 +370,31 @@ function escapeRegex(str) {
 function getCommandPattern(action) {
   const patterns = [];
-  if (action === 'dev') {
+  // Trim spaces from action to handle leading/trailing whitespace gracefully
+  const trimmedAction = action.trim();
+  if (trimmedAction === 'dev') {
     patterns.push(
       'npm run dev',
       'pnpm( run)? dev',
       'yarn dev',
       'bun run dev'
     );
-  } else if (action === 'install') {
+  } else if (trimmedAction === 'install') {
     patterns.push(
       'npm install',
       'pnpm install',
       'yarn( install)?',
       'bun install'
     );
-  } else if (action === 'test') {
+  } else if (trimmedAction === 'test') {
     patterns.push(
       'npm test',
       'pnpm test',
       'yarn test',
       'bun test'
     );
-  } else if (action === 'build') {
+  } else if (trimmedAction === 'build') {
     patterns.push(
       'npm run build',
       'pnpm( run)? build',
@@ -400,7 +403,7 @@ function getCommandPattern(action) {
     );
   } else {
     // Generic run command — escape regex metacharacters in action
-    const escaped = escapeRegex(action);
+    const escaped = escapeRegex(trimmedAction);
     patterns.push(
       `npm run ${escaped}`,
       `pnpm( run)? ${escaped}`,
View File

@@ -0,0 +1,160 @@
---
name: content-hash-cache-pattern
description: Cache expensive file processing results using SHA-256 content hashes — path-independent, auto-invalidating, with service layer separation.
---
# Content-Hash File Cache Pattern
Cache expensive file processing results (PDF parsing, text extraction, image analysis) using SHA-256 content hashes as cache keys. Unlike path-based caching, this approach survives file moves/renames and auto-invalidates when content changes.
## When to Activate
- Building file processing pipelines (PDF, images, text extraction)
- Processing cost is high and same files are processed repeatedly
- Need a `--cache/--no-cache` CLI option
- Want to add caching to existing pure functions without modifying them
## Core Pattern
### 1. Content-Hash Based Cache Key
Use file content (not path) as the cache key:
```python
import hashlib
from pathlib import Path
_HASH_CHUNK_SIZE = 65536 # 64KB chunks for large files
def compute_file_hash(path: Path) -> str:
"""SHA-256 of file contents (chunked for large files)."""
if not path.is_file():
raise FileNotFoundError(f"File not found: {path}")
sha256 = hashlib.sha256()
with open(path, "rb") as f:
while True:
chunk = f.read(_HASH_CHUNK_SIZE)
if not chunk:
break
sha256.update(chunk)
return sha256.hexdigest()
```
**Why content hash?** File rename/move = cache hit. Content change = automatic invalidation. No index file needed.
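A quick standalone check of that claim — identical bytes at two different paths produce the same cache key, so a rename or move is still a cache hit (`compute_file_hash` is re-stated here so the snippet runs on its own):

```python
import hashlib
import tempfile
from pathlib import Path

def compute_file_hash(path: Path) -> str:
    """Same chunked SHA-256 as above."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(65536):
            sha256.update(chunk)
    return sha256.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    a = Path(tmp) / "report.pdf"
    b = Path(tmp) / "archive" / "renamed.pdf"  # different name and directory
    b.parent.mkdir()
    a.write_bytes(b"%PDF-1.7 identical bytes")
    b.write_bytes(b"%PDF-1.7 identical bytes")
    h1, h2 = compute_file_hash(a), compute_file_hash(b)

assert h1 == h2  # path-independent: the cache key follows the content
```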
### 2. Frozen Dataclass for Cache Entry
```python
from dataclasses import dataclass
@dataclass(frozen=True, slots=True)
class CacheEntry:
file_hash: str
source_path: str
document: ExtractedDocument # The cached result
```
### 3. File-Based Cache Storage
Each cache entry is stored as `{hash}.json` — O(1) lookup by hash, no index file required.
```python
import json
from typing import Any
def write_cache(cache_dir: Path, entry: CacheEntry) -> None:
cache_dir.mkdir(parents=True, exist_ok=True)
cache_file = cache_dir / f"{entry.file_hash}.json"
data = serialize_entry(entry)
cache_file.write_text(json.dumps(data, ensure_ascii=False), encoding="utf-8")
def read_cache(cache_dir: Path, file_hash: str) -> CacheEntry | None:
cache_file = cache_dir / f"{file_hash}.json"
if not cache_file.is_file():
return None
try:
raw = cache_file.read_text(encoding="utf-8")
data = json.loads(raw)
return deserialize_entry(data)
except (json.JSONDecodeError, ValueError, KeyError):
return None # Treat corruption as cache miss
```
### 4. Service Layer Wrapper (SRP)
Keep the processing function pure. Add caching as a separate service layer.
```python
def extract_with_cache(
file_path: Path,
*,
cache_enabled: bool = True,
cache_dir: Path = Path(".cache"),
) -> ExtractedDocument:
"""Service layer: cache check -> extraction -> cache write."""
if not cache_enabled:
return extract_text(file_path) # Pure function, no cache knowledge
file_hash = compute_file_hash(file_path)
# Check cache
cached = read_cache(cache_dir, file_hash)
if cached is not None:
logger.info("Cache hit: %s (hash=%s)", file_path.name, file_hash[:12])
return cached.document
# Cache miss -> extract -> store
logger.info("Cache miss: %s (hash=%s)", file_path.name, file_hash[:12])
doc = extract_text(file_path)
entry = CacheEntry(file_hash=file_hash, source_path=str(file_path), document=doc)
write_cache(cache_dir, entry)
return doc
```
## Key Design Decisions
| Decision | Rationale |
|----------|-----------|
| SHA-256 content hash | Path-independent, auto-invalidates on content change |
| `{hash}.json` file naming | O(1) lookup, no index file needed |
| Service layer wrapper | SRP: extraction stays pure, cache is a separate concern |
| Manual JSON serialization | Full control over frozen dataclass serialization |
| Corruption returns `None` | Graceful degradation, re-processes on next run |
| `cache_dir.mkdir(parents=True)` | Lazy directory creation on first write |
## Best Practices
- **Hash content, not paths** — paths change, content identity doesn't
- **Chunk large files** when hashing — avoid loading entire files into memory
- **Keep processing functions pure** — they should know nothing about caching
- **Log cache hit/miss** with truncated hashes for debugging
- **Handle corruption gracefully** — treat invalid cache entries as misses, never crash
## Anti-Patterns to Avoid
```python
# BAD: Path-based caching (breaks on file move/rename)
cache = {"/path/to/file.pdf": result}
# BAD: Adding cache logic inside the processing function (SRP violation)
def extract_text(path, *, cache_enabled=False, cache_dir=None):
if cache_enabled: # Now this function has two responsibilities
...
# BAD: Using dataclasses.asdict() with nested frozen dataclasses
# (can cause issues with complex nested types)
data = dataclasses.asdict(entry) # Use manual serialization instead
```
## When to Use
- File processing pipelines (PDF parsing, OCR, text extraction, image analysis)
- CLI tools that benefit from `--cache/--no-cache` options
- Batch processing where the same files appear across runs
- Adding caching to existing pure functions without modifying them
## When NOT to Use
- Data that must always be fresh (real-time feeds)
- Cache entries that would be extremely large (consider streaming instead)
- Results that depend on parameters beyond file content (e.g., different extraction configs)

View File

@@ -103,7 +103,8 @@ PARSED_OK=$(echo "$PARSED" | python3 -c "import json,sys; print(json.load(sys.st
 if [ "$PARSED_OK" != "True" ]; then
   # Fallback: log raw input for debugging
   timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
-  TIMESTAMP="$timestamp" echo "$INPUT_JSON" | python3 -c "
+  export TIMESTAMP="$timestamp"
+  echo "$INPUT_JSON" | python3 -c "
 import json, sys, os
 raw = sys.stdin.read()[:2000]
 print(json.dumps({'timestamp': os.environ['TIMESTAMP'], 'event': 'parse_error', 'raw': raw}))
@@ -124,7 +125,8 @@ fi
 # Build and write observation
 timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
-TIMESTAMP="$timestamp" echo "$PARSED" | python3 -c "
+export TIMESTAMP="$timestamp"
+echo "$PARSED" | python3 -c "
 import json, sys, os
 parsed = json.load(sys.stdin)
View File

@@ -0,0 +1,182 @@
---
name: cost-aware-llm-pipeline
description: Cost optimization patterns for LLM API usage — model routing by task complexity, budget tracking, retry logic, and prompt caching.
---
# Cost-Aware LLM Pipeline
Patterns for controlling LLM API costs while maintaining quality. Combines model routing, budget tracking, retry logic, and prompt caching into a composable pipeline.
## When to Activate
- Building applications that call LLM APIs (Claude, GPT, etc.)
- Processing batches of items with varying complexity
- Need to stay within a budget for API spend
- Optimizing cost without sacrificing quality on complex tasks
## Core Concepts
### 1. Model Routing by Task Complexity
Automatically select cheaper models for simple tasks, reserving expensive models for complex ones.
```python
MODEL_SONNET = "claude-sonnet-4-6"
MODEL_HAIKU = "claude-haiku-4-5-20251001"
_SONNET_TEXT_THRESHOLD = 10_000 # chars
_SONNET_ITEM_THRESHOLD = 30 # items
def select_model(
text_length: int,
item_count: int,
force_model: str | None = None,
) -> str:
"""Select model based on task complexity."""
if force_model is not None:
return force_model
if text_length >= _SONNET_TEXT_THRESHOLD or item_count >= _SONNET_ITEM_THRESHOLD:
return MODEL_SONNET # Complex task
return MODEL_HAIKU # Simple task (3-4x cheaper)
```
### 2. Immutable Cost Tracking
Track cumulative spend with frozen dataclasses. Each API call returns a new tracker — never mutates state.
```python
from dataclasses import dataclass
@dataclass(frozen=True, slots=True)
class CostRecord:
model: str
input_tokens: int
output_tokens: int
cost_usd: float
@dataclass(frozen=True, slots=True)
class CostTracker:
budget_limit: float = 1.00
records: tuple[CostRecord, ...] = ()
def add(self, record: CostRecord) -> "CostTracker":
"""Return new tracker with added record (never mutates self)."""
return CostTracker(
budget_limit=self.budget_limit,
records=(*self.records, record),
)
@property
def total_cost(self) -> float:
return sum(r.cost_usd for r in self.records)
@property
def over_budget(self) -> bool:
return self.total_cost > self.budget_limit
```
### 3. Narrow Retry Logic
Retry only on transient errors. Fail fast on authentication or bad request errors.
```python
import time

from anthropic import (
APIConnectionError,
InternalServerError,
RateLimitError,
)
_RETRYABLE_ERRORS = (APIConnectionError, RateLimitError, InternalServerError)
_MAX_RETRIES = 3
def call_with_retry(func, *, max_retries: int = _MAX_RETRIES):
"""Retry only on transient errors, fail fast on others."""
for attempt in range(max_retries):
try:
return func()
except _RETRYABLE_ERRORS:
if attempt == max_retries - 1:
raise
time.sleep(2 ** attempt) # Exponential backoff
# AuthenticationError, BadRequestError etc. → raise immediately
```
### 4. Prompt Caching
Cache long system prompts to avoid resending them on every request.
```python
messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": system_prompt,
"cache_control": {"type": "ephemeral"}, # Cache this
},
{
"type": "text",
"text": user_input, # Variable part
},
],
}
]
```
## Composition
Combine all four techniques in a single pipeline function:
```python
def process(text: str, config: Config, tracker: CostTracker) -> tuple[Result, CostTracker]:
# 1. Route model
model = select_model(len(text), estimated_items, config.force_model)
# 2. Check budget
if tracker.over_budget:
raise BudgetExceededError(tracker.total_cost, tracker.budget_limit)
# 3. Call with retry + caching
response = call_with_retry(lambda: client.messages.create(
model=model,
messages=build_cached_messages(system_prompt, text),
))
# 4. Track cost (immutable)
record = CostRecord(model=model, input_tokens=..., output_tokens=..., cost_usd=...)
tracker = tracker.add(record)
return parse_result(response), tracker
```
## Pricing Reference (2025-2026)
| Model | Input ($/1M tokens) | Output ($/1M tokens) | Relative Cost |
|-------|---------------------|----------------------|---------------|
| Haiku 4.5 | $0.80 | $4.00 | 1x |
| Sonnet 4.6 | $3.00 | $15.00 | ~4x |
| Opus 4.5 | $15.00 | $75.00 | ~19x |
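Per-call cost is a simple linear function of the table's per-million rates. A sketch (the rates are copied from the table above and may drift — verify current pricing before relying on them):

```python
# $ per 1M tokens, from the table above (illustrative keys, not API model IDs)
PRICES = {
    "haiku-4.5":  {"input": 0.80,  "output": 4.00},
    "sonnet-4.6": {"input": 3.00,  "output": 15.00},
    "opus-4.5":   {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Linear cost: tokens * rate, scaled from per-1M pricing."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: 50K input + 2K output tokens
haiku = estimate_cost("haiku-4.5", 50_000, 2_000)    # 0.04 + 0.008 = $0.048
sonnet = estimate_cost("sonnet-4.6", 50_000, 2_000)  # 0.15 + 0.03  = $0.18
```

For this shape of request the Sonnet call costs 3.75x the Haiku call, which is where the table's "~4x" relative-cost column comes from.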
## Best Practices
- **Start with the cheapest model** and only route to expensive models when complexity thresholds are met
- **Set explicit budget limits** before processing batches — fail early rather than overspend
- **Log model selection decisions** so you can tune thresholds based on real data
- **Use prompt caching** for system prompts over 1024 tokens — saves both cost and latency
- **Never retry on authentication or validation errors** — only transient failures (network, rate limit, server error)
## Anti-Patterns to Avoid
- Using the most expensive model for all requests regardless of complexity
- Retrying on all errors (wastes budget on permanent failures)
- Mutating cost tracking state (makes debugging and auditing difficult)
- Hardcoding model names throughout the codebase (use constants or config)
- Ignoring prompt caching for repetitive system prompts
## When to Use
- Any application calling Claude, OpenAI, or similar LLM APIs
- Batch processing pipelines where cost adds up quickly
- Multi-model architectures that need intelligent routing
- Production systems that need budget guardrails

View File

@@ -0,0 +1,722 @@
---
name: cpp-coding-standards
description: C++ coding standards based on the C++ Core Guidelines (isocpp.github.io). Use when writing, reviewing, or refactoring C++ code to enforce modern, safe, and idiomatic practices.
---
# C++ Coding Standards (C++ Core Guidelines)
Comprehensive coding standards for modern C++ (C++17/20/23) derived from the [C++ Core Guidelines](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines). Enforces type safety, resource safety, immutability, and clarity.
## When to Use
- Writing new C++ code (classes, functions, templates)
- Reviewing or refactoring existing C++ code
- Making architectural decisions in C++ projects
- Enforcing consistent style across a C++ codebase
- Choosing between language features (e.g., `enum` vs `enum class`, raw pointer vs smart pointer)
### When NOT to Use
- Non-C++ projects
- Legacy C codebases that cannot adopt modern C++ features
- Embedded/bare-metal contexts where specific guidelines conflict with hardware constraints (adapt selectively)
## Cross-Cutting Principles
These themes recur across the entire guidelines and form the foundation:
1. **RAII everywhere** (P.8, R.1, E.6, CP.20): Bind resource lifetime to object lifetime
2. **Immutability by default** (P.10, Con.1-5, ES.25): Start with `const`/`constexpr`; mutability is the exception
3. **Type safety** (P.4, I.4, ES.46-49, Enum.3): Use the type system to prevent errors at compile time
4. **Express intent** (P.3, F.1, NL.1-2, T.10): Names, types, and concepts should communicate purpose
5. **Minimize complexity** (F.2-3, ES.5, Per.4-5): Simple code is correct code
6. **Value semantics over pointer semantics** (C.10, R.3-5, F.20, CP.31): Prefer returning by value and scoped objects
## Philosophy & Interfaces (P.*, I.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **P.1** | Express ideas directly in code |
| **P.3** | Express intent |
| **P.4** | Ideally, a program should be statically type safe |
| **P.5** | Prefer compile-time checking to run-time checking |
| **P.8** | Don't leak any resources |
| **P.10** | Prefer immutable data to mutable data |
| **I.1** | Make interfaces explicit |
| **I.2** | Avoid non-const global variables |
| **I.4** | Make interfaces precisely and strongly typed |
| **I.11** | Never transfer ownership by a raw pointer or reference |
| **I.23** | Keep the number of function arguments low |
### DO
```cpp
// P.10 + I.4: Immutable, strongly typed interface
struct Temperature {
double kelvin;
};
Temperature boil(const Temperature& water);
```
### DON'T
```cpp
// Weak interface: unclear ownership, unclear units
double boil(double* temp);
// Non-const global variable
int g_counter = 0; // I.2 violation
```
## Functions (F.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **F.1** | Package meaningful operations as carefully named functions |
| **F.2** | A function should perform a single logical operation |
| **F.3** | Keep functions short and simple |
| **F.4** | If a function might be evaluated at compile time, declare it `constexpr` |
| **F.6** | If your function must not throw, declare it `noexcept` |
| **F.8** | Prefer pure functions |
| **F.16** | For "in" parameters, pass cheaply-copied types by value and others by `const&` |
| **F.20** | For "out" values, prefer return values to output parameters |
| **F.21** | To return multiple "out" values, prefer returning a struct |
| **F.43** | Never return a pointer or reference to a local object |
### Parameter Passing
```cpp
// F.16: Cheap types by value, others by const&
void print(int x); // cheap: by value
void analyze(const std::string& data); // expensive: by const&
void transform(std::string s); // sink: by value (will move)
// F.20 + F.21: Return values, not output parameters
struct ParseResult {
std::string token;
int position;
};
ParseResult parse(std::string_view input); // GOOD: return struct
// BAD: output parameters
void parse(std::string_view input,
std::string& token, int& pos); // avoid this
```
### Pure Functions and constexpr
```cpp
// F.4 + F.8: Pure, constexpr where possible
constexpr int factorial(int n) noexcept {
return (n <= 1) ? 1 : n * factorial(n - 1);
}
static_assert(factorial(5) == 120);
```
### Anti-Patterns
- Returning `T&&` from functions (F.45)
- Using `va_arg` / C-style variadics (F.55)
- Capturing by reference in lambdas passed to other threads (F.53)
- Returning `const T` which inhibits move semantics (F.49)
## Classes & Class Hierarchies (C.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **C.2** | Use `class` if invariant exists; `struct` if data members vary independently |
| **C.9** | Minimize exposure of members |
| **C.20** | If you can avoid defining default operations, do (Rule of Zero) |
| **C.21** | If you define or `=delete` any copy/move/destructor, handle them all (Rule of Five) |
| **C.35** | Base class destructor: public virtual or protected non-virtual |
| **C.41** | A constructor should create a fully initialized object |
| **C.46** | Declare single-argument constructors `explicit` |
| **C.67** | A polymorphic class should suppress public copy/move |
| **C.128** | Virtual functions: specify exactly one of `virtual`, `override`, or `final` |
### Rule of Zero
```cpp
// C.20: Let the compiler generate special members
struct Employee {
std::string name;
std::string department;
int id;
// No destructor, copy/move constructors, or assignment operators needed
};
```
### Rule of Five
```cpp
// C.21: If you must manage a resource, define all five
class Buffer {
public:
explicit Buffer(std::size_t size)
: data_(std::make_unique<char[]>(size)), size_(size) {}
~Buffer() = default;
Buffer(const Buffer& other)
: data_(std::make_unique<char[]>(other.size_)), size_(other.size_) {
std::copy_n(other.data_.get(), size_, data_.get());
}
Buffer& operator=(const Buffer& other) {
if (this != &other) {
auto new_data = std::make_unique<char[]>(other.size_);
std::copy_n(other.data_.get(), other.size_, new_data.get());
data_ = std::move(new_data);
size_ = other.size_;
}
return *this;
}
Buffer(Buffer&&) noexcept = default;
Buffer& operator=(Buffer&&) noexcept = default;
private:
std::unique_ptr<char[]> data_;
std::size_t size_;
};
```
### Class Hierarchy
```cpp
// C.35 + C.128: Virtual destructor, use override
class Shape {
public:
virtual ~Shape() = default;
virtual double area() const = 0; // C.121: pure interface
};
class Circle : public Shape {
public:
explicit Circle(double r) : radius_(r) {}
double area() const override { return 3.14159 * radius_ * radius_; }
private:
double radius_;
};
```
### Anti-Patterns
- Calling virtual functions in constructors/destructors (C.82)
- Using `memset`/`memcpy` on non-trivial types (C.90)
- Providing different default arguments for virtual function and overrider (C.140)
- Making data members `const` or references, which suppresses move/copy (C.12)
## Resource Management (R.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **R.1** | Manage resources automatically using RAII |
| **R.3** | A raw pointer (`T*`) is non-owning |
| **R.5** | Prefer scoped objects; don't heap-allocate unnecessarily |
| **R.10** | Avoid `malloc()`/`free()` |
| **R.11** | Avoid calling `new` and `delete` explicitly |
| **R.20** | Use `unique_ptr` or `shared_ptr` to represent ownership |
| **R.21** | Prefer `unique_ptr` over `shared_ptr` unless sharing ownership |
| **R.22** | Use `make_shared()` to make `shared_ptr`s |
### Smart Pointer Usage
```cpp
// R.11 + R.20 + R.21: RAII with smart pointers
auto widget = std::make_unique<Widget>("config"); // unique ownership
auto cache = std::make_shared<Cache>(1024); // shared ownership
// R.3: Raw pointer = non-owning observer
void render(const Widget* w) { // does NOT own w
if (w) w->draw();
}
render(widget.get());
```
### RAII Pattern
```cpp
// R.1: Resource acquisition is initialization
class FileHandle {
public:
explicit FileHandle(const std::string& path)
: handle_(std::fopen(path.c_str(), "r")) {
if (!handle_) throw std::runtime_error("Failed to open: " + path);
}
~FileHandle() {
if (handle_) std::fclose(handle_);
}
FileHandle(const FileHandle&) = delete;
FileHandle& operator=(const FileHandle&) = delete;
FileHandle(FileHandle&& other) noexcept
: handle_(std::exchange(other.handle_, nullptr)) {}
FileHandle& operator=(FileHandle&& other) noexcept {
if (this != &other) {
if (handle_) std::fclose(handle_);
handle_ = std::exchange(other.handle_, nullptr);
}
return *this;
}
private:
std::FILE* handle_;
};
```
### Anti-Patterns
- Naked `new`/`delete` (R.11)
- `malloc()`/`free()` in C++ code (R.10)
- Multiple resource allocations in a single expression (R.13 -- exception safety hazard)
- `shared_ptr` where `unique_ptr` suffices (R.21)
## Expressions & Statements (ES.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **ES.5** | Keep scopes small |
| **ES.20** | Always initialize an object |
| **ES.23** | Prefer `{}` initializer syntax |
| **ES.25** | Declare objects `const` or `constexpr` unless modification is intended |
| **ES.28** | Use lambdas for complex initialization of `const` variables |
| **ES.45** | Avoid magic constants; use symbolic constants |
| **ES.46** | Avoid narrowing/lossy arithmetic conversions |
| **ES.47** | Use `nullptr` rather than `0` or `NULL` |
| **ES.48** | Avoid casts |
| **ES.50** | Don't cast away `const` |
### Initialization
```cpp
// ES.20 + ES.23 + ES.25: Always initialize, prefer {}, default to const
const int max_retries{3};
const std::string name{"widget"};
const std::vector<int> primes{2, 3, 5, 7, 11};
// ES.28: Lambda for complex const initialization
const auto config = [&] {
Config c;
c.timeout = std::chrono::seconds{30};
c.retries = max_retries;
c.verbose = debug_mode;
return c;
}();
```
### Anti-Patterns
- Uninitialized variables (ES.20)
- Using `0` or `NULL` as pointer (ES.47 -- use `nullptr`)
- C-style casts (ES.48 -- use `static_cast`, `const_cast`, etc.)
- Casting away `const` (ES.50)
- Magic numbers without named constants (ES.45)
- Mixing signed and unsigned arithmetic (ES.100)
- Reusing names in nested scopes (ES.12)
## Error Handling (E.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **E.1** | Develop an error-handling strategy early in a design |
| **E.2** | Throw an exception to signal that a function can't perform its assigned task |
| **E.6** | Use RAII to prevent leaks |
| **E.12** | Use `noexcept` when throwing is impossible or unacceptable |
| **E.14** | Use purpose-designed user-defined types as exceptions |
| **E.15** | Throw by value, catch by reference |
| **E.16** | Destructors, deallocation, and swap must never fail |
| **E.17** | Don't try to catch every exception in every function |
### Exception Hierarchy
```cpp
// E.14 + E.15: Custom exception types, throw by value, catch by reference
class AppError : public std::runtime_error {
public:
using std::runtime_error::runtime_error;
};
class NetworkError : public AppError {
public:
NetworkError(const std::string& msg, int code)
: AppError(msg), status_code(code) {}
int status_code;
};
void fetch_data(const std::string& url) {
// E.2: Throw to signal failure
throw NetworkError("connection refused", 503);
}
void run() {
try {
fetch_data("https://api.example.com");
} catch (const NetworkError& e) {
log_error(e.what(), e.status_code);
} catch (const AppError& e) {
log_error(e.what());
}
// E.17: Don't catch everything here -- let unexpected errors propagate
}
```
### Anti-Patterns
- Throwing built-in types like `int` or string literals (E.14)
- Catching by value (slicing risk) (E.15)
- Empty catch blocks that silently swallow errors
- Using exceptions for flow control (E.3)
- Error handling based on global state like `errno` (E.28)
## Constants & Immutability (Con.*)
### All Rules
| Rule | Summary |
|------|---------|
| **Con.1** | By default, make objects immutable |
| **Con.2** | By default, make member functions `const` |
| **Con.3** | By default, pass pointers and references to `const` |
| **Con.4** | Use `const` for values that don't change after construction |
| **Con.5** | Use `constexpr` for values computable at compile time |
```cpp
// Con.1 through Con.5: Immutability by default
class Sensor {
public:
explicit Sensor(std::string id) : id_(std::move(id)) {}
// Con.2: const member functions by default
const std::string& id() const { return id_; }
double last_reading() const { return reading_; }
// Only non-const when mutation is required
void record(double value) { reading_ = value; }
private:
const std::string id_; // Con.4: never changes after construction
double reading_{0.0};
};
// Con.3: Pass by const reference
void display(const Sensor& s) {
std::cout << s.id() << ": " << s.last_reading() << '\n';
}
// Con.5: Compile-time constants
constexpr double PI = 3.14159265358979;
constexpr int MAX_SENSORS = 256;
```
## Concurrency & Parallelism (CP.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **CP.2** | Avoid data races |
| **CP.3** | Minimize explicit sharing of writable data |
| **CP.4** | Think in terms of tasks, rather than threads |
| **CP.8** | Don't use `volatile` for synchronization |
| **CP.20** | Use RAII, never plain `lock()`/`unlock()` |
| **CP.21** | Use `std::scoped_lock` to acquire multiple mutexes |
| **CP.22** | Never call unknown code while holding a lock |
| **CP.42** | Don't wait without a condition |
| **CP.44** | Remember to name your `lock_guard`s and `unique_lock`s |
| **CP.100** | Don't use lock-free programming unless you absolutely have to |
### Safe Locking
```cpp
// CP.20 + CP.44: RAII locks, always named
class ThreadSafeQueue {
public:
void push(int value) {
std::lock_guard<std::mutex> lock(mutex_); // CP.44: named!
queue_.push(value);
cv_.notify_one();
}
int pop() {
std::unique_lock<std::mutex> lock(mutex_);
// CP.42: Always wait with a condition
cv_.wait(lock, [this] { return !queue_.empty(); });
const int value = queue_.front();
queue_.pop();
return value;
}
private:
std::mutex mutex_; // CP.50: mutex with its data
std::condition_variable cv_;
std::queue<int> queue_;
};
```
### Multiple Mutexes
```cpp
// CP.21: std::scoped_lock for multiple mutexes (deadlock-free)
void transfer(Account& from, Account& to, double amount) {
std::scoped_lock lock(from.mutex_, to.mutex_);
from.balance_ -= amount;
to.balance_ += amount;
}
```
### Anti-Patterns
- `volatile` for synchronization (CP.8 -- it's for hardware I/O only)
- Detaching threads (CP.26 -- lifetime management becomes nearly impossible)
- Unnamed lock guards: `std::lock_guard<std::mutex>(m);` destroys immediately (CP.44)
- Holding locks while calling callbacks (CP.22 -- deadlock risk)
- Lock-free programming without deep expertise (CP.100)
## Templates & Generic Programming (T.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **T.1** | Use templates to raise the level of abstraction |
| **T.2** | Use templates to express algorithms for many argument types |
| **T.10** | Specify concepts for all template arguments |
| **T.11** | Use standard concepts whenever possible |
| **T.13** | Prefer shorthand notation for simple concepts |
| **T.43** | Prefer `using` over `typedef` |
| **T.120** | Use template metaprogramming only when you really need to |
| **T.144** | Don't specialize function templates (overload instead) |
### Concepts (C++20)
```cpp
#include <algorithm>
#include <concepts>
#include <ranges>
#include <string>
#include <utility>  // std::exchange

// T.10 + T.11: Constrain templates with standard concepts
template<std::integral T>
T gcd(T a, T b) {
    while (b != 0) {
        a = std::exchange(b, a % b);
    }
    return a;
}

// T.13: Shorthand concept syntax
void sort(std::ranges::random_access_range auto& range) {
    std::ranges::sort(range);
}

// Custom concept for domain-specific constraints
template<typename T>
concept Serializable = requires(const T& t) {
    { t.serialize() } -> std::convertible_to<std::string>;
};

template<Serializable T>
void save(const T& obj, const std::string& path);
```
### Anti-Patterns
- Unconstrained templates in visible namespaces (T.47)
- Specializing function templates instead of overloading (T.144)
- Template metaprogramming where `constexpr` suffices (T.120)
- `typedef` instead of `using` (T.43)
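The T.144 distinction can be sketched with a hypothetical `describe` utility: an overload participates in normal overload resolution, while a full specialization is only consulted after the base template has already been selected, which silently changes which code runs as overloads are added later.

```cpp
#include <string>

// Base template
template<typename T>
std::string describe(T) { return "general"; }

// GOOD (T.144): an overload -- found by normal overload resolution
std::string describe(int) { return "int overload"; }

// RISKY (T.144): a full specialization of the base template; it is only used
// when overload resolution has already picked the base template for double
template<>
std::string describe<double>(double) { return "double specialization"; }
```

Here `describe(42)` picks the non-template overload, while `describe(3.14)` deduces `T = double` on the base template and then dispatches to the specialization.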
## Standard Library (SL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SL.1** | Use libraries wherever possible |
| **SL.2** | Prefer the standard library to other libraries |
| **SL.con.1** | Prefer `std::array` or `std::vector` over C arrays |
| **SL.con.2** | Prefer `std::vector` by default |
| **SL.str.1** | Use `std::string` to own character sequences |
| **SL.str.2** | Use `std::string_view` to refer to character sequences |
| **SL.io.50** | Avoid `endl` (use `'\n'` -- `endl` forces a flush) |
```cpp
// SL.con.1 + SL.con.2: Prefer vector/array over C arrays
const std::array<int, 4> fixed_data{1, 2, 3, 4};
std::vector<std::string> dynamic_data;
// SL.str.1 + SL.str.2: string owns, string_view observes
std::string build_greeting(std::string_view name) {
    return "Hello, " + std::string(name) + "!";
}
// SL.io.50: Use '\n' not endl
std::cout << "result: " << value << '\n';
```
## Enumerations (Enum.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Enum.1** | Prefer enumerations over macros |
| **Enum.3** | Prefer `enum class` over plain `enum` |
| **Enum.5** | Don't use ALL_CAPS for enumerators |
| **Enum.6** | Avoid unnamed enumerations |
```cpp
// Enum.3 + Enum.5: Scoped enum, no ALL_CAPS
enum class Color { red, green, blue };
enum class LogLevel { debug, info, warning, error };
// BAD: plain enum leaks names, ALL_CAPS clashes with macros
enum { RED, GREEN, BLUE }; // Enum.3 + Enum.5 + Enum.6 violation
#define MAX_SIZE 100 // Enum.1 violation -- use constexpr
```
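One consequence of Enum.3 worth noting: scoped enumerators do not convert implicitly to `int`, so every conversion is explicit and visible at the call site. A minimal sketch:

```cpp
enum class LogLevel { debug, info, warning, error };

int severity(LogLevel lvl) {
    // enum class forbids implicit conversion -- the cast documents intent
    return static_cast<int>(lvl);
}
```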
## Source Files & Naming (SF.*, NL.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **SF.1** | Use `.cpp` for code files and `.h` for interface files |
| **SF.7** | Don't write `using namespace` at global scope in a header |
| **SF.8** | Use `#include` guards for all `.h` files |
| **SF.11** | Header files should be self-contained |
| **NL.5** | Avoid encoding type information in names (no Hungarian notation) |
| **NL.8** | Use a consistent naming style |
| **NL.9** | Use ALL_CAPS for macro names only |
| **NL.10** | Prefer `underscore_style` names |
### Header Guard
```cpp
// SF.8: Include guard (or #pragma once)
#ifndef PROJECT_MODULE_WIDGET_H
#define PROJECT_MODULE_WIDGET_H
// SF.11: Self-contained -- include everything this header needs
#include <string>
#include <vector>
namespace project::module {
class Widget {
public:
    explicit Widget(std::string name);
    const std::string& name() const;

private:
    std::string name_;
};
} // namespace project::module
#endif // PROJECT_MODULE_WIDGET_H
```
### Naming Conventions
```cpp
// NL.8 + NL.10: Consistent underscore_style
namespace my_project {

constexpr int max_buffer_size = 4096;  // NL.9: not ALL_CAPS (it's not a macro)

class tcp_connection {  // underscore_style class
public:
    void send_message(std::string_view msg);
    bool is_connected() const;

private:
    std::string host_;  // trailing underscore for members
    int port_;
};

}  // namespace my_project
```
### Anti-Patterns
- `using namespace std;` in a header at global scope (SF.7)
- Headers that depend on inclusion order (SF.10, SF.11)
- Hungarian notation like `strName`, `iCount` (NL.5)
- ALL_CAPS for anything other than macros (NL.9)
## Performance (Per.*)
### Key Rules
| Rule | Summary |
|------|---------|
| **Per.1** | Don't optimize without reason |
| **Per.2** | Don't optimize prematurely |
| **Per.6** | Don't make claims about performance without measurements |
| **Per.7** | Design to enable optimization |
| **Per.10** | Rely on the static type system |
| **Per.11** | Move computation from run time to compile time |
| **Per.19** | Access memory predictably |
### Guidelines
```cpp
// Per.11: Compile-time computation where possible
constexpr auto lookup_table = [] {
    std::array<int, 256> table{};
    for (int i = 0; i < 256; ++i) {
        table[i] = i * i;
    }
    return table;
}();
// Per.19: Prefer contiguous data for cache-friendliness
std::vector<Point> points; // GOOD: contiguous
std::vector<std::unique_ptr<Point>> indirect_points; // BAD: pointer chasing
```
### Anti-Patterns
- Optimizing without profiling data (Per.1, Per.6)
- Choosing "clever" low-level code over clear abstractions (Per.4, Per.5)
- Ignoring data layout and cache behavior (Per.19)
## Quick Reference Checklist
Before marking C++ work complete:
- [ ] No raw `new`/`delete` -- use smart pointers or RAII (R.11)
- [ ] Objects initialized at declaration (ES.20)
- [ ] Variables are `const`/`constexpr` by default (Con.1, ES.25)
- [ ] Member functions are `const` where possible (Con.2)
- [ ] `enum class` instead of plain `enum` (Enum.3)
- [ ] `nullptr` instead of `0`/`NULL` (ES.47)
- [ ] No narrowing conversions (ES.46)
- [ ] No C-style casts (ES.48)
- [ ] Single-argument constructors are `explicit` (C.46)
- [ ] Rule of Zero or Rule of Five applied (C.20, C.21)
- [ ] Base class destructors are public virtual or protected non-virtual (C.35)
- [ ] Templates are constrained with concepts (T.10)
- [ ] No `using namespace` in headers at global scope (SF.7)
- [ ] Headers have include guards and are self-contained (SF.8, SF.11)
- [ ] Locks use RAII (`scoped_lock`/`lock_guard`) (CP.20)
- [ ] Exceptions are custom types, thrown by value, caught by reference (E.14, E.15)
- [ ] `'\n'` instead of `std::endl` (SL.io.50)
- [ ] No magic numbers (ES.45)


@@ -0,0 +1,219 @@
---
name: regex-vs-llm-structured-text
description: Decision framework for choosing between regex and LLM when parsing structured text — start with regex, add LLM only for low-confidence edge cases.
---
# Regex vs LLM for Structured Text Parsing
A practical decision framework for parsing structured text (quizzes, forms, invoices, documents). The key insight: regex handles 95-98% of cases cheaply and deterministically. Reserve expensive LLM calls for the remaining edge cases.
## When to Activate
- Parsing structured text with repeating patterns (questions, forms, tables)
- Deciding between regex and LLM for text extraction
- Building hybrid pipelines that combine both approaches
- Optimizing cost/accuracy tradeoffs in text processing
## Decision Framework
```
Is the text format consistent and repeating?
├── Yes (>90% follows a pattern) → Start with Regex
│ ├── Regex handles 95%+ → Done, no LLM needed
│ └── Regex handles <95% → Add LLM for edge cases only
└── No (free-form, highly variable) → Use LLM directly
```
## Architecture Pattern
```
Source Text
[Regex Parser] ─── Extracts structure (95-98% accuracy)
[Text Cleaner] ─── Removes noise (markers, page numbers, artifacts)
[Confidence Scorer] ─── Flags low-confidence extractions
├── High confidence (≥0.95) → Direct output
└── Low confidence (<0.95) → [LLM Validator] → Output
```
## Implementation
### 1. Regex Parser (Handles the Majority)
```python
import re
from dataclasses import dataclass
@dataclass(frozen=True)
class ParsedItem:
    id: str
    text: str
    choices: tuple[str, ...]
    answer: str
    confidence: float = 1.0

def parse_structured_text(content: str) -> list[ParsedItem]:
    """Parse structured text using regex patterns."""
    pattern = re.compile(
        r"(?P<id>\d+)\.\s*(?P<text>.+?)\n"
        r"(?P<choices>(?:[A-D]\..+?\n)+)"
        r"Answer:\s*(?P<answer>[A-D])",
        re.MULTILINE | re.DOTALL,
    )
    items = []
    for match in pattern.finditer(content):
        choices = tuple(
            c.strip() for c in re.findall(r"[A-D]\.\s*(.+)", match.group("choices"))
        )
        items.append(ParsedItem(
            id=match.group("id"),
            text=match.group("text").strip(),
            choices=choices,
            answer=match.group("answer"),
        ))
    return items
```
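Before wiring the parser into a pipeline, the pattern can be sanity-checked directly on a tiny sample (the sample text here is illustrative):

```python
import re

# Same pattern as parse_structured_text above.
pattern = re.compile(
    r"(?P<id>\d+)\.\s*(?P<text>.+?)\n"
    r"(?P<choices>(?:[A-D]\..+?\n)+)"
    r"Answer:\s*(?P<answer>[A-D])",
    re.MULTILINE | re.DOTALL,
)

sample = "1. What is 2 + 2?\nA. 3\nB. 4\nC. 5\nD. 6\nAnswer: B\n"
match = pattern.search(sample)
assert match is not None
assert match.group("id") == "1"
assert match.group("text") == "What is 2 + 2?"
assert match.group("answer") == "B"
```

Note the lazy quantifiers: with `re.DOTALL` in effect, `.+?` stops at the first position where the rest of the pattern can match, which keeps `text` from swallowing the choice lines.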
### 2. Confidence Scoring
Flag items that may need LLM review:
```python
@dataclass(frozen=True)
class ConfidenceFlag:
    item_id: str
    score: float
    reasons: tuple[str, ...]

def score_confidence(item: ParsedItem) -> ConfidenceFlag:
    """Score extraction confidence and flag issues."""
    reasons = []
    score = 1.0
    if len(item.choices) < 3:
        reasons.append("few_choices")
        score -= 0.3
    if not item.answer:
        reasons.append("missing_answer")
        score -= 0.5
    if len(item.text) < 10:
        reasons.append("short_text")
        score -= 0.2
    return ConfidenceFlag(
        item_id=item.id,
        score=max(0.0, score),
        reasons=tuple(reasons),
    )

def identify_low_confidence(
    items: list[ParsedItem],
    threshold: float = 0.95,
) -> list[ConfidenceFlag]:
    """Return items below confidence threshold."""
    flags = [score_confidence(item) for item in items]
    return [f for f in flags if f.score < threshold]
```
### 3. LLM Validator (Edge Cases Only)
```python
import json

def validate_with_llm(
    item: ParsedItem,
    original_text: str,
    client,
) -> ParsedItem:
    """Use LLM to fix low-confidence extractions."""
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",  # Cheapest model for validation
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                f"Extract the question, choices, and answer from this text.\n\n"
                f"Text: {original_text}\n\n"
                f"Current extraction: {item}\n\n"
                f"Return corrected JSON if needed, or 'CORRECT' if accurate."
            ),
        }],
    )
    # "CORRECT" keeps the extraction as-is; anything else is treated
    # as corrected JSON for a new ParsedItem.
    reply = response.content[0].text.strip()
    if reply == "CORRECT":
        return item
    return ParsedItem(**json.loads(reply))
```
### 4. Hybrid Pipeline
```python
def process_document(
    content: str,
    *,
    llm_client=None,
    confidence_threshold: float = 0.95,
) -> list[ParsedItem]:
    """Full pipeline: regex -> confidence check -> LLM for edge cases."""
    # Step 1: Regex extraction (handles 95-98%)
    items = parse_structured_text(content)
    # Step 2: Confidence scoring
    low_confidence = identify_low_confidence(items, confidence_threshold)
    if not low_confidence or llm_client is None:
        return items
    # Step 3: LLM validation (only for flagged items)
    low_conf_ids = {f.item_id for f in low_confidence}
    result = []
    for item in items:
        if item.id in low_conf_ids:
            result.append(validate_with_llm(item, content, llm_client))
        else:
            result.append(item)
    return result
```
## Real-World Metrics
From a production quiz parsing pipeline (410 items):
| Metric | Value |
|--------|-------|
| Regex success rate | 98.0% |
| Low confidence items | 8 (2.0%) |
| LLM calls needed | ~5 |
| Cost savings vs all-LLM | ~95% |
| Test coverage | 93% |
## Best Practices
- **Start with regex** — even imperfect regex gives you a baseline to improve
- **Use confidence scoring** to programmatically identify what needs LLM help
- **Use the cheapest LLM** for validation (Haiku-class models are sufficient)
- **Never mutate** parsed items — return new instances from cleaning/validation steps
- **TDD works well** for parsers — write tests for known patterns first, then edge cases
- **Log metrics** (regex success rate, LLM call count) to track pipeline health
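The never-mutate practice maps directly onto frozen dataclasses: cleaning steps return new instances via `dataclasses.replace` rather than editing in place. A minimal sketch with a hypothetical two-field item:

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class Item:
    id: str
    text: str

def clean_text(item: Item) -> Item:
    # frozen=True makes in-place assignment raise; return a new instance instead
    return dataclasses.replace(item, text=item.text.strip())

original = Item(id="1", text="  What is 2 + 2?  ")
cleaned = clean_text(original)
```

The original object stays untouched, so earlier pipeline stages can be re-run or inspected without worrying about downstream side effects.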
## Anti-Patterns to Avoid
- Sending all text to an LLM when regex handles 95%+ of cases (expensive and slow)
- Using regex for free-form, highly variable text (LLM is better here)
- Skipping confidence scoring and hoping regex "just works"
- Mutating parsed objects during cleaning/validation steps
- Not testing edge cases (malformed input, missing fields, encoding issues)
## When to Use
- Quiz/exam question parsing
- Form data extraction
- Invoice/receipt processing
- Document structure parsing (headers, sections, tables)
- Any structured text with repeating patterns where cost matters


@@ -0,0 +1,159 @@
---
name: search-first
description: Research-before-coding workflow. Search for existing tools, libraries, and patterns before writing custom code. Invokes the researcher agent.
---
# /search-first — Research Before You Code
Systematizes the "search for existing solutions before implementing" workflow.
## Trigger
Use this skill when:
- Starting a new feature that likely has existing solutions
- Adding a dependency or integration
- The user asks "add X functionality" and you're about to write code
- Before creating a new utility, helper, or abstraction
## Workflow
```
┌─────────────────────────────────────────────┐
│ 1. NEED ANALYSIS │
│ Define what functionality is needed │
│ Identify language/framework constraints │
├─────────────────────────────────────────────┤
│ 2. PARALLEL SEARCH (researcher agent) │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ npm / │ │ MCP / │ │ GitHub / │ │
│ │ PyPI │ │ Skills │ │ Web │ │
│ └──────────┘ └──────────┘ └──────────┘ │
├─────────────────────────────────────────────┤
│ 3. EVALUATE │
│ Score candidates (functionality, maint, │
│ community, docs, license, deps) │
├─────────────────────────────────────────────┤
│ 4. DECIDE │
│ ┌─────────┐ ┌──────────┐ ┌─────────┐ │
│ │ Adopt │ │ Extend │ │ Build │ │
│ │ as-is │ │ /Wrap │ │ Custom │ │
│ └─────────┘ └──────────┘ └─────────┘ │
├─────────────────────────────────────────────┤
│ 5. IMPLEMENT │
│ Install package / Configure MCP / │
│ Write minimal custom code │
└─────────────────────────────────────────────┘
```
## Decision Matrix
| Signal | Action |
|--------|--------|
| Exact match, well-maintained, MIT/Apache | **Adopt** — install and use directly |
| Partial match, good foundation | **Extend** — install + write thin wrapper |
| Multiple weak matches | **Compose** — combine 2-3 small packages |
| Nothing suitable found | **Build** — write custom, but informed by research |
## How to Use
### Quick Mode (inline)
Before writing a utility or adding functionality, mentally run through:
1. Is this a common problem? → Search npm/PyPI
2. Is there an MCP for this? → Check `~/.claude/settings.json` and search
3. Is there a skill for this? → Check `~/.claude/skills/`
4. Is there a GitHub template? → Search GitHub
### Full Mode (agent)
For non-trivial functionality, launch the researcher agent:
```
Task(subagent_type="general-purpose", prompt="
Research existing tools for: [DESCRIPTION]
Language/framework: [LANG]
Constraints: [ANY]
Search: npm/PyPI, MCP servers, Claude Code skills, GitHub
Return: Structured comparison with recommendation
")
```
## Search Shortcuts by Category
### Development Tooling
- Linting → `eslint`, `ruff`, `textlint`, `markdownlint`
- Formatting → `prettier`, `black`, `gofmt`
- Testing → `jest`, `pytest`, `go test`
- Pre-commit → `husky`, `lint-staged`, `pre-commit`
### AI/LLM Integration
- Claude SDK → Context7 for latest docs
- Prompt management → Check MCP servers
- Document processing → `unstructured`, `pdfplumber`, `mammoth`
### Data & APIs
- HTTP clients → `httpx` (Python), `ky`/`got` (Node)
- Validation → `zod` (TS), `pydantic` (Python)
- Database → Check for MCP servers first
### Content & Publishing
- Markdown processing → `remark`, `unified`, `markdown-it`
- Image optimization → `sharp`, `imagemin`
## Integration Points
### With planner agent
The planner should invoke researcher before Phase 1 (Architecture Review):
- Researcher identifies available tools
- Planner incorporates them into the implementation plan
- Avoids "reinventing the wheel" in the plan
### With architect agent
The architect should consult researcher for:
- Technology stack decisions
- Integration pattern discovery
- Existing reference architectures
### With iterative-retrieval skill
Combine for progressive discovery:
- Cycle 1: Broad search (npm, PyPI, MCP)
- Cycle 2: Evaluate top candidates in detail
- Cycle 3: Test compatibility with project constraints
## Examples
### Example 1: "Add dead link checking"
```
Need: Check markdown files for broken links
Search: npm "markdown dead link checker"
Found: textlint-rule-no-dead-link (score: 9/10)
Action: ADOPT — npm install textlint-rule-no-dead-link
Result: Zero custom code, battle-tested solution
```
### Example 2: "Add HTTP client wrapper"
```
Need: Resilient HTTP client with retries and timeout handling
Search: npm "http client retry", PyPI "httpx retry"
Found: got (Node) with retry plugin, httpx (Python) with built-in retry
Action: ADOPT — use got/httpx directly with retry config
Result: Zero custom code, production-proven libraries
```
### Example 3: "Add config file linter"
```
Need: Validate project config files against a schema
Search: npm "config linter schema", "json schema validator cli"
Found: ajv-cli (score: 8/10)
Action: ADOPT + EXTEND — install ajv-cli, write project-specific schema
Result: 1 package + 1 schema file, no custom validation logic
```
## Anti-Patterns
- **Jumping to code**: Writing a utility without checking if one exists
- **Ignoring MCP**: Not checking if an MCP server already provides the capability
- **Over-customizing**: Wrapping a library so heavily it loses its benefits
- **Dependency bloat**: Installing a massive package for one small feature


@@ -0,0 +1,175 @@
---
name: skill-stocktake
description: "Use when auditing Claude skills and commands for quality. Supports Quick Scan (changed skills only) and Full Stocktake modes with sequential subagent batch evaluation."
---
# skill-stocktake
Slash command (`/skill-stocktake`) that audits all Claude skills and commands using a quality checklist + AI holistic judgment. Supports two modes: Quick Scan for recently changed skills, and Full Stocktake for a complete review.
## Scope
The command targets the following paths **relative to the directory where it is invoked**:
| Path | Description |
|------|-------------|
| `~/.claude/skills/` | Global skills (all projects) |
| `{cwd}/.claude/skills/` | Project-level skills (if the directory exists) |
**At the start of Phase 1, the command explicitly lists which paths were found and scanned.**
### Targeting a specific project
To include project-level skills, run from that project's root directory:
```bash
cd ~/path/to/my-project
/skill-stocktake
```
If the project has no `.claude/skills/` directory, only global skills and commands are evaluated.
## Modes
| Mode | Trigger | Duration |
|------|---------|---------|
| Quick Scan | `results.json` exists (default) | 5–10 min |
| Full Stocktake | `results.json` absent, or `/skill-stocktake full` | 20–30 min |
**Results cache:** `~/.claude/skills/skill-stocktake/results.json`
## Quick Scan Flow
Re-evaluate only skills that have changed since the last run (5–10 min).
1. Read `~/.claude/skills/skill-stocktake/results.json`
2. Run: `bash ~/.claude/skills/skill-stocktake/scripts/quick-diff.sh \
~/.claude/skills/skill-stocktake/results.json`
(Project dir is auto-detected from `$PWD/.claude/skills`; pass it explicitly only if needed)
3. If output is `[]`: report "No changes since last run." and stop
4. Re-evaluate only those changed files using the same Phase 2 criteria
5. Carry forward unchanged skills from previous results
6. Output only the diff
7. Run: `bash ~/.claude/skills/skill-stocktake/scripts/save-results.sh \
~/.claude/skills/skill-stocktake/results.json <<< "$EVAL_RESULTS"`
## Full Stocktake Flow
### Phase 1 — Inventory
Run: `bash ~/.claude/skills/skill-stocktake/scripts/scan.sh`
The script enumerates skill files, extracts frontmatter, and collects UTC mtimes.
Project dir is auto-detected from `$PWD/.claude/skills`; pass it explicitly only if needed.
Present the scan summary and inventory table from the script output:
```
Scanning:
✓ ~/.claude/skills/ (17 files)
✗ {cwd}/.claude/skills/ (not found — global skills only)
```
| Skill | 7d use | 30d use | Description |
|-------|--------|---------|-------------|
### Phase 2 — Quality Evaluation
Launch a Task tool subagent (**Explore agent, model: opus**) with the full inventory and checklist.
The subagent reads each skill, applies the checklist, and returns per-skill JSON:
`{ "verdict": "Keep"|"Improve"|"Update"|"Retire"|"Merge into [X]", "reason": "..." }`
**Chunk guidance:** Process ~20 skills per subagent invocation to keep context manageable. Save intermediate results to `results.json` (`status: "in_progress"`) after each chunk.
After all skills are evaluated: set `status: "completed"`, proceed to Phase 3.
**Resume detection:** If `status: "in_progress"` is found on startup, resume from the first unevaluated skill.
Each skill is evaluated against this checklist:
```
- [ ] Content overlap with other skills checked
- [ ] Overlap with MEMORY.md / CLAUDE.md checked
- [ ] Freshness of technical references verified (use WebSearch if tool names / CLI flags / APIs are present)
- [ ] Usage frequency considered
```
Verdict criteria:
| Verdict | Meaning |
|---------|---------|
| Keep | Useful and current |
| Improve | Worth keeping, but specific improvements needed |
| Update | Referenced technology is outdated (verify with WebSearch) |
| Retire | Low quality, stale, or cost-asymmetric |
| Merge into [X] | Substantial overlap with another skill; name the merge target |
Evaluation is **holistic AI judgment** — not a numeric rubric. Guiding dimensions:
- **Actionability**: code examples, commands, or steps that let you act immediately
- **Scope fit**: name, trigger, and content are aligned; not too broad or narrow
- **Uniqueness**: value not replaceable by MEMORY.md / CLAUDE.md / another skill
- **Currency**: technical references work in the current environment
**Reason quality requirements** — the `reason` field must be self-contained and decision-enabling:
- Do NOT write "unchanged" alone — always restate the core evidence
- For **Retire**: state (1) what specific defect was found, (2) what covers the same need instead
- Bad: `"Superseded"`
- Good: `"disable-model-invocation: true already set; superseded by continuous-learning-v2 which covers all the same patterns plus confidence scoring. No unique content remains."`
- For **Merge**: name the target and describe what content to integrate
- Bad: `"Overlaps with X"`
- Good: `"42-line thin content; Step 4 of chatlog-to-article already covers the same workflow. Integrate the 'article angle' tip as a note in that skill."`
- For **Improve**: describe the specific change needed (what section, what action, target size if relevant)
- Bad: `"Too long"`
- Good: `"276 lines; Section 'Framework Comparison' (L80–140) duplicates ai-era-architecture-principles; delete it to reach ~150 lines."`
- For **Keep** (mtime-only change in Quick Scan): restate the original verdict rationale, do not write "unchanged"
- Bad: `"Unchanged"`
- Good: `"mtime updated but content unchanged. Unique Python reference explicitly imported by rules/python/; no overlap found."`
### Phase 3 — Summary Table
| Skill | 7d use | Verdict | Reason |
|-------|--------|---------|--------|
### Phase 4 — Consolidation
1. **Retire / Merge**: present detailed justification per file before confirming with user:
- What specific problem was found (overlap, staleness, broken references, etc.)
- What alternative covers the same functionality (for Retire: which existing skill/rule; for Merge: the target file and what content to integrate)
- Impact of removal (any dependent skills, MEMORY.md references, or workflows affected)
2. **Improve**: present specific improvement suggestions with rationale:
- What to change and why (e.g., "trim 430→200 lines because sections X/Y duplicate python-patterns")
- User decides whether to act
3. **Update**: present updated content with sources checked
4. Check MEMORY.md line count; propose compression if >100 lines
## Results File Schema
`~/.claude/skills/skill-stocktake/results.json`:
**`evaluated_at`**: Must be set to the actual UTC time of evaluation completion.
Obtain via Bash: `date -u +%Y-%m-%dT%H:%M:%SZ`. Never use a date-only approximation like `T00:00:00Z`.
```json
{
"evaluated_at": "2026-02-21T10:00:00Z",
"mode": "full",
"batch_progress": {
"total": 80,
"evaluated": 80,
"status": "completed"
},
"skills": {
"skill-name": {
"path": "~/.claude/skills/skill-name/SKILL.md",
"verdict": "Keep",
"reason": "Concrete, actionable, unique value for X workflow",
"mtime": "2026-01-15T08:30:00Z"
}
}
}
```
## Notes
- Evaluation is blind: the same checklist applies to all skills regardless of origin (ECC, self-authored, auto-extracted)
- Archive / delete operations always require explicit user confirmation
- No verdict branching by skill origin


@@ -0,0 +1,87 @@
#!/usr/bin/env bash
# quick-diff.sh — compare skill file mtimes against results.json evaluated_at
# Usage: quick-diff.sh RESULTS_JSON [CWD_SKILLS_DIR]
# Output: JSON array of changed/new files to stdout (empty [] if no changes)
#
# When CWD_SKILLS_DIR is omitted, defaults to $PWD/.claude/skills so the
# script always picks up project-level skills without relying on the caller.
#
# Environment:
# SKILL_STOCKTAKE_GLOBAL_DIR Override ~/.claude/skills (for testing only;
# do not set in production — intended for bats tests)
# SKILL_STOCKTAKE_PROJECT_DIR Override project dir detection (for testing only)
set -euo pipefail
RESULTS_JSON="${1:-}"
CWD_SKILLS_DIR="${SKILL_STOCKTAKE_PROJECT_DIR:-${2:-$PWD/.claude/skills}}"
GLOBAL_DIR="${SKILL_STOCKTAKE_GLOBAL_DIR:-$HOME/.claude/skills}"
if [[ -z "$RESULTS_JSON" || ! -f "$RESULTS_JSON" ]]; then
  echo "Error: RESULTS_JSON not found: ${RESULTS_JSON:-<empty>}" >&2
  exit 1
fi
# Validate CWD_SKILLS_DIR looks like a .claude/skills path (defense-in-depth).
# Only warn when the path exists — a nonexistent path poses no traversal risk.
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" && "$CWD_SKILLS_DIR" != */.claude/skills* ]]; then
  echo "Warning: CWD_SKILLS_DIR does not look like a .claude/skills path: $CWD_SKILLS_DIR" >&2
fi
evaluated_at=$(jq -r '.evaluated_at' "$RESULTS_JSON")
# Fail fast on a missing or malformed evaluated_at rather than producing
# unpredictable results from ISO 8601 string comparison against "null".
if [[ ! "$evaluated_at" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$ ]]; then
  echo "Error: invalid or missing evaluated_at in $RESULTS_JSON: $evaluated_at" >&2
  exit 1
fi
# Pre-extract known paths from results.json once (O(1) lookup per file instead of O(n*m))
known_paths=$(jq -r '.skills[].path' "$RESULTS_JSON" 2>/dev/null)
tmpdir=$(mktemp -d)
# Use a function to avoid embedding $tmpdir in a quoted string (prevents injection
# if TMPDIR were crafted to contain shell metacharacters).
_cleanup() { rm -rf "$tmpdir"; }
trap _cleanup EXIT
# Shared counter across process_dir calls — intentionally NOT local
i=0
process_dir() {
  local dir="$1"
  while IFS= read -r file; do
    local mtime dp is_new
    mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ)
    dp="${file/#$HOME/~}"
    # Check if this file is known to results.json (exact whole-line match to
    # avoid substring false-positives, e.g. "python-patterns" matching "python-patterns-v2").
    if echo "$known_paths" | grep -qxF "$dp"; then
      is_new="false"
      # Known file: only emit if mtime changed (ISO 8601 string comparison is safe)
      [[ "$mtime" > "$evaluated_at" ]] || continue
    else
      is_new="true"
      # New file: always emit regardless of mtime
    fi
    jq -n \
      --arg path "$dp" \
      --arg mtime "$mtime" \
      --argjson is_new "$is_new" \
      '{path:$path,mtime:$mtime,is_new:$is_new}' \
      > "$tmpdir/$i.json"
    i=$((i+1))
  done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)
}
[[ -d "$GLOBAL_DIR" ]] && process_dir "$GLOBAL_DIR"
[[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" ]] && process_dir "$CWD_SKILLS_DIR"
if [[ $i -eq 0 ]]; then
  echo "[]"
else
  jq -s '.' "$tmpdir"/*.json
fi


@@ -0,0 +1,56 @@
#!/usr/bin/env bash
# save-results.sh — merge evaluated skills into results.json with correct UTC timestamp
# Usage: save-results.sh RESULTS_JSON <<< "$EVAL_JSON"
#
# stdin format:
# { "skills": {...}, "mode"?: "full"|"quick", "batch_progress"?: {...} }
#
# Always sets evaluated_at to current UTC time via `date -u`.
# Merges stdin .skills into existing results.json (new entries override old).
# Optionally updates .mode and .batch_progress if present in stdin.
set -euo pipefail
RESULTS_JSON="${1:-}"
if [[ -z "$RESULTS_JSON" ]]; then
  echo "Error: RESULTS_JSON argument required" >&2
  echo "Usage: save-results.sh RESULTS_JSON <<< \"\$EVAL_JSON\"" >&2
  exit 1
fi
EVALUATED_AT=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Read eval results from stdin and validate JSON before touching the results file
input_json=$(cat)
if ! echo "$input_json" | jq empty 2>/dev/null; then
  echo "Error: stdin is not valid JSON" >&2
  exit 1
fi
if [[ ! -f "$RESULTS_JSON" ]]; then
  # Bootstrap: create new results.json from stdin JSON + current UTC timestamp
  echo "$input_json" | jq --arg ea "$EVALUATED_AT" \
    '. + { evaluated_at: $ea }' > "$RESULTS_JSON"
  exit 0
fi
# Merge: new .skills override existing ones; old skills not in input_json are kept.
# Optionally update .mode and .batch_progress if provided.
#
# Use mktemp for a collision-safe temp file (concurrent runs on the same RESULTS_JSON
# would race on a predictable ".tmp" suffix; random suffix prevents silent overwrites).
tmp=$(mktemp "${RESULTS_JSON}.XXXXXX")
trap 'rm -f "$tmp"' EXIT
jq -s \
  --arg ea "$EVALUATED_AT" \
  '.[0] as $existing | .[1] as $new |
   $existing |
   .evaluated_at = $ea |
   .skills = ($existing.skills + ($new.skills // {})) |
   if ($new | has("mode")) then .mode = $new.mode else . end |
   if ($new | has("batch_progress")) then .batch_progress = $new.batch_progress else . end' \
  "$RESULTS_JSON" <(echo "$input_json") > "$tmp"
mv "$tmp" "$RESULTS_JSON"


@@ -0,0 +1,170 @@
#!/usr/bin/env bash
# scan.sh — enumerate skill files, extract frontmatter and UTC mtime
# Usage: scan.sh [CWD_SKILLS_DIR]
# Output: JSON to stdout
#
# When CWD_SKILLS_DIR is omitted, defaults to $PWD/.claude/skills so the
# script always picks up project-level skills without relying on the caller.
#
# Environment:
# SKILL_STOCKTAKE_GLOBAL_DIR Override ~/.claude/skills (for testing only;
# do not set in production — intended for bats tests)
# SKILL_STOCKTAKE_PROJECT_DIR Override project dir detection (for testing only)
set -euo pipefail
GLOBAL_DIR="${SKILL_STOCKTAKE_GLOBAL_DIR:-$HOME/.claude/skills}"
CWD_SKILLS_DIR="${SKILL_STOCKTAKE_PROJECT_DIR:-${1:-$PWD/.claude/skills}}"
# Path to JSONL file containing tool-use observations (optional; used for usage frequency counts).
# Override via SKILL_STOCKTAKE_OBSERVATIONS env var if your setup uses a different path.
OBSERVATIONS="${SKILL_STOCKTAKE_OBSERVATIONS:-$HOME/.claude/observations.jsonl}"
# Validate CWD_SKILLS_DIR looks like a .claude/skills path (defense-in-depth).
# Only warn when the path exists — a nonexistent path poses no traversal risk.
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" && "$CWD_SKILLS_DIR" != */.claude/skills* ]]; then
  echo "Warning: CWD_SKILLS_DIR does not look like a .claude/skills path: $CWD_SKILLS_DIR" >&2
fi
# Extract a frontmatter field (handles both quoted and unquoted single-line values).
# Does NOT support multi-line YAML blocks (| or >) or nested YAML keys.
extract_field() {
  local file="$1" field="$2"
  awk -v f="$field" '
    BEGIN { fm=0 }
    /^---$/ { fm++; next }
    fm==1 {
      n = length(f) + 2
      if (substr($0, 1, n) == f ": ") {
        val = substr($0, n+1)
        gsub(/^"/, "", val)
        gsub(/"$/, "", val)
        print val
        exit
      }
    }
    fm>=2 { exit }
  ' "$file"
}
# Get UTC timestamp N days ago (supports both macOS and GNU date)
date_ago() {
  local n="$1"
  date -u -v-"${n}d" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
    date -u -d "${n} days ago" +%Y-%m-%dT%H:%M:%SZ
}
# Count observations matching a file path since a cutoff timestamp
count_obs() {
local file="$1" cutoff="$2"
if [[ ! -f "$OBSERVATIONS" ]]; then
echo 0
return
fi
jq -r --arg p "$file" --arg c "$cutoff" \
'select(.tool=="Read" and .path==$p and .timestamp>=$c) | 1' \
"$OBSERVATIONS" 2>/dev/null | wc -l | tr -d ' '
}
# Scan a directory and produce a JSON array of skill objects
scan_dir_to_json() {
local dir="$1"
local c7 c30
c7=$(date_ago 7)
c30=$(date_ago 30)
local tmpdir
tmpdir=$(mktemp -d)
# Use a function to avoid embedding $tmpdir in a quoted string (prevents injection
# if TMPDIR were crafted to contain shell metacharacters).
local _scan_tmpdir="$tmpdir"
_scan_cleanup() { rm -rf "$_scan_tmpdir"; }
trap _scan_cleanup RETURN
# Pre-aggregate observation counts in two passes (one per window) instead of
# calling jq per-file — reduces from O(n*m) to O(n+m) jq invocations.
local obs_7d_counts obs_30d_counts
obs_7d_counts=""
obs_30d_counts=""
if [[ -f "$OBSERVATIONS" ]]; then
obs_7d_counts=$(jq -r --arg c "$c7" \
'select(.tool=="Read" and .timestamp>=$c) | .path' \
"$OBSERVATIONS" 2>/dev/null | sort | uniq -c)
obs_30d_counts=$(jq -r --arg c "$c30" \
'select(.tool=="Read" and .timestamp>=$c) | .path' \
"$OBSERVATIONS" 2>/dev/null | sort | uniq -c)
fi
local i=0
while IFS= read -r file; do
local name desc mtime u7 u30 dp
name=$(extract_field "$file" "name")
desc=$(extract_field "$file" "description")
mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ)
    # Use awk exact-field matching to avoid the substring false-positives grep -F allows.
    # uniq -c output format: " N /path/to/file" — the path is field 2 (assumes no spaces in paths).
u7=$(echo "$obs_7d_counts" | awk -v f="$file" '$2 == f {print $1}' | head -1)
u7="${u7:-0}"
u30=$(echo "$obs_30d_counts" | awk -v f="$file" '$2 == f {print $1}' | head -1)
u30="${u30:-0}"
dp="${file/#$HOME/~}"
jq -n \
--arg path "$dp" \
--arg name "$name" \
--arg description "$desc" \
--arg mtime "$mtime" \
--argjson use_7d "$u7" \
--argjson use_30d "$u30" \
'{path:$path,name:$name,description:$description,use_7d:$use_7d,use_30d:$use_30d,mtime:$mtime}' \
> "$tmpdir/$i.json"
i=$((i+1))
done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)
if [[ $i -eq 0 ]]; then
echo "[]"
else
jq -s '.' "$tmpdir"/*.json
fi
}
# --- Main ---
global_found="false"
global_count=0
global_skills="[]"
if [[ -d "$GLOBAL_DIR" ]]; then
global_found="true"
global_skills=$(scan_dir_to_json "$GLOBAL_DIR")
global_count=$(echo "$global_skills" | jq 'length')
fi
project_found="false"
project_path=""
project_count=0
project_skills="[]"
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" ]]; then
project_found="true"
project_path="$CWD_SKILLS_DIR"
project_skills=$(scan_dir_to_json "$CWD_SKILLS_DIR")
project_count=$(echo "$project_skills" | jq 'length')
fi
# Merge global + project skills into one array
all_skills=$(jq -s 'add' <(echo "$global_skills") <(echo "$project_skills"))
jq -n \
--arg global_found "$global_found" \
--argjson global_count "$global_count" \
--arg project_found "$project_found" \
--arg project_path "$project_path" \
--argjson project_count "$project_count" \
--argjson skills "$all_skills" \
'{
scan_summary: {
global: { found: ($global_found == "true"), count: $global_count },
project: { found: ($project_found == "true"), path: $project_path, count: $project_count }
},
skills: $skills
}'

View File

@@ -0,0 +1,142 @@
---
name: swift-actor-persistence
description: Thread-safe data persistence in Swift using actors — in-memory cache with file-backed storage, eliminating data races by design.
---
# Swift Actors for Thread-Safe Persistence
Patterns for building thread-safe data persistence layers using Swift actors. Combines in-memory caching with file-backed storage, leveraging the actor model to eliminate data races at compile time.
## When to Activate
- Building a data persistence layer in Swift 5.5+ (note: the `URL.documentsDirectory` default used below requires iOS 16/macOS 13)
- Need thread-safe access to shared mutable state
- Want to eliminate manual synchronization (locks, DispatchQueues)
- Building offline-first apps with local storage
## Core Pattern
### Actor-Based Repository
The actor model guarantees serialized access — no data races, enforced by the compiler.
```swift
public actor LocalRepository<T: Codable & Identifiable> where T.ID == String {
private var cache: [String: T] = [:]
private let fileURL: URL
public init(directory: URL = .documentsDirectory, filename: String = "data.json") {
self.fileURL = directory.appendingPathComponent(filename)
// Synchronous load during init (actor isolation not yet active)
self.cache = Self.loadSynchronously(from: fileURL)
}
// MARK: - Public API
public func save(_ item: T) throws {
cache[item.id] = item
try persistToFile()
}
public func delete(_ id: String) throws {
cache[id] = nil
try persistToFile()
}
public func find(by id: String) -> T? {
cache[id]
}
public func loadAll() -> [T] {
Array(cache.values)
}
// MARK: - Private
private func persistToFile() throws {
let data = try JSONEncoder().encode(Array(cache.values))
try data.write(to: fileURL, options: .atomic)
}
private static func loadSynchronously(from url: URL) -> [String: T] {
guard let data = try? Data(contentsOf: url),
let items = try? JSONDecoder().decode([T].self, from: data) else {
return [:]
}
return Dictionary(uniqueKeysWithValues: items.map { ($0.id, $0) })
}
}
```
### Usage
Because of actor isolation, every call from outside the actor is implicitly asynchronous and must be awaited:
```swift
let repository = LocalRepository<Question>()
// Reads: O(1) lookup served from the in-memory cache
let question = await repository.find(by: "q-001")
let allQuestions = await repository.loadAll()
// Writes: update the cache, then persist to the file atomically
try await repository.save(newQuestion)
try await repository.delete("q-001")
```
### Combining with @Observable ViewModel
```swift
@Observable
final class QuestionListViewModel {
private(set) var questions: [Question] = []
private let repository: LocalRepository<Question>
init(repository: LocalRepository<Question> = LocalRepository()) {
self.repository = repository
}
func load() async {
questions = await repository.loadAll()
}
func add(_ question: Question) async throws {
try await repository.save(question)
questions = await repository.loadAll()
}
}
```
## Key Design Decisions
| Decision | Rationale |
|----------|-----------|
| Actor (not class + lock) | Compiler-enforced thread safety, no manual synchronization |
| In-memory cache + file persistence | Fast reads from cache, durable writes to disk |
| Synchronous init loading | Avoids async initialization complexity |
| Dictionary keyed by ID | O(1) lookups by identifier |
| Generic over `Codable & Identifiable` | Reusable across any model type |
| Atomic file writes (`.atomic`) | Prevents partial writes on crash |
## Best Practices
- **Use `Sendable` types** for all data crossing actor boundaries
- **Keep the actor's public API minimal** — only expose domain operations, not persistence details
- **Use `.atomic` writes** to prevent data corruption if the app crashes mid-write
- **Load synchronously in `init`** — async initializers add complexity with minimal benefit for local files
- **Combine with `@Observable`** ViewModels for reactive UI updates
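As a minimal sketch of the `Sendable` bullet above (the `Question` model is hypothetical, not part of this skill):

```swift
// Hypothetical model type — a value type whose stored properties are all
// Sendable conforms to Sendable without any locking.
struct Question: Codable, Identifiable, Sendable {
    let id: String
    var text: String
}
```

Because `Question` is a value type, instances are copied as they cross the `LocalRepository` actor boundary, so no synchronization is needed.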
## Anti-Patterns to Avoid
- Using `DispatchQueue` or `NSLock` instead of actors for new Swift concurrency code
- Exposing the internal cache dictionary to external callers
- Making the file URL configurable without validation
- Forgetting that every cross-actor call requires `await` — callers must be in an async context
- Using `nonisolated` to bypass actor isolation (defeats the purpose)
## When to Use
- Local data storage in iOS/macOS apps (user data, settings, cached content)
- Offline-first architectures that sync to a server later
- Any shared mutable state that multiple parts of the app access concurrently
- Replacing legacy `DispatchQueue`-based thread safety with modern Swift concurrency

View File

@@ -0,0 +1,189 @@
---
name: swift-protocol-di-testing
description: Protocol-based dependency injection for testable Swift code — mock file system, network, and external APIs using focused protocols and Swift Testing.
---
# Swift Protocol-Based Dependency Injection for Testing
Patterns for making Swift code testable by abstracting external dependencies (file system, network, iCloud) behind small, focused protocols. Enables deterministic tests without I/O.
## When to Activate
- Writing Swift code that accesses file system, network, or external APIs
- Need to test error handling paths without triggering real failures
- Building modules that work across environments (app, test, SwiftUI preview)
- Designing testable architecture with Swift concurrency (actors, Sendable)
## Core Pattern
### 1. Define Small, Focused Protocols
Each protocol handles exactly one external concern.
```swift
// File system access
public protocol FileSystemProviding: Sendable {
func containerURL(for purpose: Purpose) -> URL?
}
// File read/write operations
public protocol FileAccessorProviding: Sendable {
func read(from url: URL) throws -> Data
func write(_ data: Data, to url: URL) throws
func fileExists(at url: URL) -> Bool
}
// Bookmark storage (e.g., for sandboxed apps)
public protocol BookmarkStorageProviding: Sendable {
func saveBookmark(_ data: Data, for key: String) throws
func loadBookmark(for key: String) throws -> Data?
}
```
### 2. Create Default (Production) Implementations
```swift
public struct DefaultFileSystemProvider: FileSystemProviding {
public init() {}
public func containerURL(for purpose: Purpose) -> URL? {
FileManager.default.url(forUbiquityContainerIdentifier: nil)
}
}
public struct DefaultFileAccessor: FileAccessorProviding {
public init() {}
public func read(from url: URL) throws -> Data {
try Data(contentsOf: url)
}
public func write(_ data: Data, to url: URL) throws {
try data.write(to: url, options: .atomic)
}
public func fileExists(at url: URL) -> Bool {
FileManager.default.fileExists(atPath: url.path)
}
}
```
### 3. Create Mock Implementations for Testing
```swift
public final class MockFileAccessor: FileAccessorProviding, @unchecked Sendable {
public var files: [URL: Data] = [:]
public var readError: Error?
public var writeError: Error?
public init() {}
public func read(from url: URL) throws -> Data {
if let error = readError { throw error }
guard let data = files[url] else {
throw CocoaError(.fileReadNoSuchFile)
}
return data
}
public func write(_ data: Data, to url: URL) throws {
if let error = writeError { throw error }
files[url] = data
}
public func fileExists(at url: URL) -> Bool {
files[url] != nil
}
}
```
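The tests in step 5 also reference a `MockFileSystemProvider`, which is not defined above. A minimal sketch — `Purpose` is a hypothetical stand-in, and the protocol is repeated from step 1 so the sketch compiles on its own:

```swift
import Foundation

// `Purpose` is never defined in this skill — a hypothetical stand-in:
enum Purpose { case sync }

// Repeated from step 1 so this sketch is self-contained:
protocol FileSystemProviding: Sendable {
    func containerURL(for purpose: Purpose) -> URL?
}

// Hypothetical mock used by the step 5 tests — returns whatever URL it was given.
struct MockFileSystemProvider: FileSystemProviding {
    private let stubbedURL: URL?
    init(containerURL: URL?) { self.stubbedURL = containerURL }
    func containerURL(for purpose: Purpose) -> URL? { stubbedURL }
}
```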
### 4. Inject Dependencies with Default Parameters
Production code uses defaults; tests inject mocks.
```swift
public actor SyncManager {
private let fileSystem: FileSystemProviding
private let fileAccessor: FileAccessorProviding
public init(
fileSystem: FileSystemProviding = DefaultFileSystemProvider(),
fileAccessor: FileAccessorProviding = DefaultFileAccessor()
) {
self.fileSystem = fileSystem
self.fileAccessor = fileAccessor
}
public func sync() async throws {
guard let containerURL = fileSystem.containerURL(for: .sync) else {
throw SyncError.containerNotAvailable
}
let data = try fileAccessor.read(
from: containerURL.appendingPathComponent("data.json")
)
// Process data...
}
}
```
### 5. Write Tests with Swift Testing
```swift
import Testing
@Test("Sync manager handles missing container")
func testMissingContainer() async {
let mockFileSystem = MockFileSystemProvider(containerURL: nil)
let manager = SyncManager(fileSystem: mockFileSystem)
await #expect(throws: SyncError.containerNotAvailable) {
try await manager.sync()
}
}
@Test("Sync manager reads data correctly")
func testReadData() async throws {
let mockFileAccessor = MockFileAccessor()
mockFileAccessor.files[testURL] = testData
let manager = SyncManager(fileAccessor: mockFileAccessor)
let result = try await manager.loadData()
#expect(result == expectedData)
}
@Test("Sync manager handles read errors gracefully")
func testReadError() async {
let mockFileAccessor = MockFileAccessor()
mockFileAccessor.readError = CocoaError(.fileReadCorruptFile)
let manager = SyncManager(fileAccessor: mockFileAccessor)
await #expect(throws: SyncError.self) {
try await manager.sync()
}
}
```
## Best Practices
- **Single Responsibility**: Each protocol should handle one concern — don't create "god protocols" with many methods
- **Sendable conformance**: Required when protocols are used across actor boundaries
- **Default parameters**: Let production code use real implementations by default; only tests need to specify mocks
- **Error simulation**: Design mocks with configurable error properties for testing failure paths
- **Only mock boundaries**: Mock external dependencies (file system, network, APIs), not internal types
## Anti-Patterns to Avoid
- Creating a single large protocol that covers all external access
- Mocking internal types that have no external dependencies
- Using `#if DEBUG` conditionals instead of proper dependency injection
- Forgetting `Sendable` conformance when used with actors
- Over-engineering: if a type has no external dependencies, it doesn't need a protocol
## When to Use
- Any Swift code that touches file system, network, or external APIs
- Testing error handling paths that are hard to trigger in real environments
- Building modules that need to work in app, test, and SwiftUI preview contexts
- Apps using Swift concurrency (actors, structured concurrency) that need testable architecture

View File

@@ -11,7 +11,7 @@ const assert = require('assert');
const path = require('path');
const fs = require('fs');
const os = require('os');
const { spawnSync, execFileSync } = require('child_process');
const { spawnSync } = require('child_process');
const evaluateScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'evaluate-session.js');

View File

@@ -1324,7 +1324,7 @@ async function runTests() {
val = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(val, 2, 'Second call should write count 2');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1341,7 +1341,7 @@ async function runTests() {
assert.strictEqual(result.code, 0);
assert.ok(result.stderr.includes('5 tool calls reached'), 'Should suggest compact at threshold');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1359,7 +1359,7 @@ async function runTests() {
assert.strictEqual(result.code, 0);
assert.ok(result.stderr.includes('30 tool calls'), 'Should suggest at threshold + 25n intervals');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1376,7 +1376,7 @@ async function runTests() {
assert.ok(!result.stderr.includes('tool calls reached'), 'Should not suggest below threshold');
assert.ok(!result.stderr.includes('checkpoint'), 'Should not suggest checkpoint');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1394,7 +1394,7 @@ async function runTests() {
const newCount = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(newCount, 1, 'Should reset to 1 on overflow value');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1410,7 +1410,7 @@ async function runTests() {
const newCount = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(newCount, 1, 'Should reset to 1 on negative value');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1426,7 +1426,7 @@ async function runTests() {
assert.strictEqual(result.code, 0);
assert.ok(result.stderr.includes('50 tool calls reached'), 'Zero threshold should fall back to 50');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1443,7 +1443,7 @@ async function runTests() {
assert.strictEqual(result.code, 0);
assert.ok(result.stderr.includes('50 tool calls reached'), 'Should use default threshold of 50');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1883,7 +1883,7 @@ async function runTests() {
assert.strictEqual(result.code, 0);
assert.ok(result.stderr.includes('38 tool calls'), 'Should suggest at threshold(13) + 25 = 38');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1901,7 +1901,7 @@ async function runTests() {
assert.strictEqual(result.code, 0);
assert.ok(!result.stderr.includes('checkpoint'), 'Should NOT suggest at count=50 with threshold=13');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1918,7 +1918,7 @@ async function runTests() {
const newCount = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(newCount, 1, 'Should reset to 1 on corrupted file content');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;
@@ -1935,7 +1935,7 @@ async function runTests() {
const newCount = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(newCount, 1000001, 'Should increment from exactly 1000000');
} finally {
try { fs.unlinkSync(counterFile); } catch {}
try { fs.unlinkSync(counterFile); } catch { /* ignore */ }
}
})) passed++; else failed++;

View File

@@ -19,11 +19,11 @@ const compactScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'sugg
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
console.log(` \u2713 ${name}`);
return true;
} catch (err) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${err.message}`);
} catch (_err) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${_err.message}`);
return false;
}
}
@@ -66,7 +66,11 @@ function runTests() {
// Cleanup helper
function cleanupCounter() {
try { fs.unlinkSync(counterFile); } catch {}
try {
fs.unlinkSync(counterFile);
} catch (_err) {
// Ignore error
}
}
// Basic functionality
@@ -80,7 +84,8 @@ function runTests() {
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Counter should be 1 after first run');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('increments counter on subsequent runs', () => {
cleanupCounter();
@@ -90,7 +95,8 @@ function runTests() {
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 3, 'Counter should be 3 after three runs');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// Threshold suggestion
console.log('\nThreshold suggestion:');
@@ -106,7 +112,8 @@ function runTests() {
`Should suggest compact at threshold. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('does NOT suggest compact before threshold', () => {
cleanupCounter();
@@ -117,7 +124,8 @@ function runTests() {
'Should NOT suggest compact before threshold'
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// Interval suggestion (every 25 calls after threshold)
console.log('\nInterval suggestion:');
@@ -135,7 +143,8 @@ function runTests() {
`Should suggest at threshold+25 interval. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// Environment variable handling
console.log('\nEnvironment variable handling:');
@@ -151,7 +160,8 @@ function runTests() {
`Should use default threshold of 50. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('ignores invalid COMPACT_THRESHOLD (negative)', () => {
cleanupCounter();
@@ -163,7 +173,8 @@ function runTests() {
`Should fallback to 50 for negative threshold. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('ignores non-numeric COMPACT_THRESHOLD', () => {
cleanupCounter();
@@ -175,7 +186,8 @@ function runTests() {
`Should fallback to 50 for non-numeric threshold. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// Corrupted counter file
console.log('\nCorrupted counter file:');
@@ -189,7 +201,8 @@ function runTests() {
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Should reset to 1 on corrupted file');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('resets counter on extremely large value', () => {
cleanupCounter();
@@ -200,7 +213,8 @@ function runTests() {
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Should reset to 1 for value > 1000000');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('handles empty counter file', () => {
cleanupCounter();
@@ -211,7 +225,8 @@ function runTests() {
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Should start at 1 for empty file');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// Session isolation
console.log('\nSession isolation:');
@@ -230,10 +245,11 @@ function runTests() {
assert.strictEqual(countA, 2, 'Session A should have count 2');
assert.strictEqual(countB, 1, 'Session B should have count 1');
} finally {
try { fs.unlinkSync(fileA); } catch {}
try { fs.unlinkSync(fileB); } catch {}
try { fs.unlinkSync(fileA); } catch (_err) { /* ignore */ }
try { fs.unlinkSync(fileB); } catch (_err) { /* ignore */ }
}
})) passed++; else failed++;
})) passed++;
else failed++;
// Always exits 0
console.log('\nExit code:');
@@ -243,7 +259,8 @@ function runTests() {
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
assert.strictEqual(result.code, 0, 'Should always exit 0');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// ── Round 29: threshold boundary values ──
console.log('\nThreshold boundary values:');
@@ -258,7 +275,8 @@ function runTests() {
`Should fallback to 50 for threshold=0. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('accepts COMPACT_THRESHOLD=10000 (boundary max)', () => {
cleanupCounter();
@@ -270,7 +288,8 @@ function runTests() {
`Should accept threshold=10000. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('rejects COMPACT_THRESHOLD=10001 (falls back to 50)', () => {
cleanupCounter();
@@ -282,7 +301,8 @@ function runTests() {
`Should fallback to 50 for threshold=10001. Got stderr: ${result.stderr}`
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('rejects float COMPACT_THRESHOLD (e.g. 3.5)', () => {
cleanupCounter();
@@ -297,33 +317,36 @@ function runTests() {
'Float threshold should be parseInt-ed to 3, no suggestion at count=50'
);
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('counter value at exact boundary 1000000 is valid', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '999999');
const result = runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
runCompact({ CLAUDE_SESSION_ID: testSession, COMPACT_THRESHOLD: '3' });
// 999999 is valid (> 0, <= 1000000), count becomes 1000000
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1000000, 'Counter at 1000000 boundary should be valid');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
if (test('counter value at 1000001 is clamped (reset to 1)', () => {
cleanupCounter();
fs.writeFileSync(counterFile, '1000001');
const result = runCompact({ CLAUDE_SESSION_ID: testSession });
runCompact({ CLAUDE_SESSION_ID: testSession });
const count = parseInt(fs.readFileSync(counterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Counter > 1000000 should be reset to 1');
cleanupCounter();
})) passed++; else failed++;
})) passed++;
else failed++;
// ── Round 64: default session ID fallback ──
console.log('\nDefault session ID fallback (Round 64):');
if (test('uses "default" session ID when CLAUDE_SESSION_ID is empty', () => {
const defaultCounterFile = getCounterFilePath('default');
try { fs.unlinkSync(defaultCounterFile); } catch {}
try { fs.unlinkSync(defaultCounterFile); } catch (_err) { /* ignore */ }
try {
// Pass empty CLAUDE_SESSION_ID — falsy, so script uses 'default'
const env = { ...process.env, CLAUDE_SESSION_ID: '' };
@@ -338,12 +361,14 @@ function runTests() {
const count = parseInt(fs.readFileSync(defaultCounterFile, 'utf8').trim(), 10);
assert.strictEqual(count, 1, 'Counter should be 1 for first run with default session');
} finally {
try { fs.unlinkSync(defaultCounterFile); } catch {}
try { fs.unlinkSync(defaultCounterFile); } catch (_err) { /* ignore */ }
}
})) passed++; else failed++;
})) passed++;
else failed++;
// Summary
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
console.log(`
Results: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}

View File

@@ -262,8 +262,13 @@ async function runTests() {
});
});
assert.ok(stderr.includes('BLOCKED'), 'Blocking hook should output BLOCKED');
assert.strictEqual(code, 2, 'Blocking hook should exit with code 2');
// Hook only blocks on non-Windows platforms (tmux is Unix-only)
if (process.platform === 'win32') {
assert.strictEqual(code, 0, 'On Windows, hook should not block (exit 0)');
} else {
assert.ok(stderr.includes('BLOCKED'), 'Blocking hook should output BLOCKED');
assert.strictEqual(code, 2, 'Blocking hook should exit with code 2');
}
})) passed++; else failed++;
// ==========================================
@@ -298,7 +303,12 @@ async function runTests() {
});
});
assert.strictEqual(code, 2, 'Blocking hook should exit 2');
// Hook only blocks on non-Windows platforms (tmux is Unix-only)
if (process.platform === 'win32') {
assert.strictEqual(code, 0, 'On Windows, hook should not block (exit 0)');
} else {
assert.strictEqual(code, 2, 'Blocking hook should exit 2');
}
})) passed++; else failed++;
if (await asyncTest('hooks handle missing files gracefully', async () => {

File diff suppressed because it is too large

View File

@@ -787,7 +787,6 @@ function runTests() {
// Verify the file exists
const aliasesPath = path.join(tmpHome, '.claude', 'session-aliases.json');
assert.ok(fs.existsSync(aliasesPath), 'Aliases file should exist');
const contentBefore = fs.readFileSync(aliasesPath, 'utf8');
// Attempt to save circular data — will fail
const circular = { aliases: {}, metadata: {} };

View File

@@ -1124,7 +1124,7 @@ src/main.ts
} else {
delete process.env.USERPROFILE;
}
try { fs.rmSync(r33Home, { recursive: true, force: true }); } catch {}
try { fs.rmSync(r33Home, { recursive: true, force: true }); } catch (_e) { /* ignore cleanup errors */ }
// ── Round 46: path heuristic and checklist edge cases ──
console.log('\ngetSessionStats Windows path heuristic (Round 46):');
@@ -1488,6 +1488,27 @@ src/main.ts
'Content without session items should have 0 totalItems');
})) passed++; else failed++;
// Re-establish test environment for Rounds 95-98 (these tests need sessions to exist)
const tmpHome2 = path.join(os.tmpdir(), `ecc-session-mgr-test-2-${Date.now()}`);
const tmpSessionsDir2 = path.join(tmpHome2, '.claude', 'sessions');
fs.mkdirSync(tmpSessionsDir2, { recursive: true });
const origHome2 = process.env.HOME;
const origUserProfile2 = process.env.USERPROFILE;
// Create test session files for these tests
const testSessions2 = [
{ name: '2026-01-15-aaaa1111-session.tmp', content: '# Test Session 1' },
{ name: '2026-02-01-bbbb2222-session.tmp', content: '# Test Session 2' },
{ name: '2026-02-10-cccc3333-session.tmp', content: '# Test Session 3' },
];
for (const session of testSessions2) {
const filePath = path.join(tmpSessionsDir2, session.name);
fs.writeFileSync(filePath, session.content);
}
process.env.HOME = tmpHome2;
process.env.USERPROFILE = tmpHome2;
// ── Round 95: getAllSessions with both negative offset AND negative limit ──
console.log('\nRound 95: getAllSessions (both negative offset and negative limit):');
@@ -1579,6 +1600,20 @@ src/main.ts
);
})) passed++; else failed++;
// Cleanup test environment for Rounds 95-98 that needed sessions
// (Round 98: parseSessionFilename below doesn't need sessions)
process.env.HOME = origHome2;
if (origUserProfile2 !== undefined) {
process.env.USERPROFILE = origUserProfile2;
} else {
delete process.env.USERPROFILE;
}
try {
fs.rmSync(tmpHome2, { recursive: true, force: true });
} catch {
// best-effort
}
// ── Round 98: parseSessionFilename with null input throws TypeError ──
console.log('\nRound 98: parseSessionFilename (null input — crashes at line 30):');
@@ -1985,7 +2020,7 @@ file.ts
assert.ok(!afterContent.includes('Appended data'),
'Original content should be unchanged');
} finally {
try { fs.chmodSync(readOnlyFile, 0o644); } catch {}
try { fs.chmodSync(readOnlyFile, 0o644); } catch (_e) { /* ignore permission errors */ }
fs.rmSync(tmpDir, { recursive: true, force: true });
}
})) passed++; else failed++;
@@ -2329,6 +2364,7 @@ file.ts
if (test('getSessionById matches old format YYYY-MM-DD-session.tmp via noIdMatch path', () => {
const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'r122-old-format-'));
const origHome = process.env.HOME;
const origUserProfile = process.env.USERPROFILE;
const origDir = process.env.CLAUDE_DIR;
try {
// Set up isolated environment
@@ -2336,6 +2372,7 @@ file.ts
const sessionsDir = path.join(claudeDir, 'sessions');
fs.mkdirSync(sessionsDir, { recursive: true });
process.env.HOME = tmpDir;
process.env.USERPROFILE = tmpDir; // Windows: os.homedir() uses USERPROFILE
delete process.env.CLAUDE_DIR;
// Clear require cache for fresh module with new HOME
@@ -2361,6 +2398,8 @@ file.ts
'Non-matching date should return null');
} finally {
process.env.HOME = origHome;
if (origUserProfile !== undefined) process.env.USERPROFILE = origUserProfile;
else delete process.env.USERPROFILE;
if (origDir) process.env.CLAUDE_DIR = origDir;
delete require.cache[require.resolve('../../scripts/lib/utils')];
delete require.cache[require.resolve('../../scripts/lib/session-manager')];
@@ -2450,6 +2489,7 @@ file.ts
// "2026/01/15" or "Jan 15 2026" will never match, silently returning empty.
// No validation or normalization occurs on the date parameter.
const origHome = process.env.HOME;
const origUserProfile = process.env.USERPROFILE;
const origDir = process.env.CLAUDE_DIR;
const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'r124-date-format-'));
const homeDir = path.join(tmpDir, 'home');
@@ -2457,6 +2497,7 @@ file.ts
try {
process.env.HOME = homeDir;
process.env.USERPROFILE = homeDir; // Windows: os.homedir() uses USERPROFILE
delete process.env.CLAUDE_DIR;
delete require.cache[require.resolve('../../scripts/lib/utils')];
delete require.cache[require.resolve('../../scripts/lib/session-manager')];
@@ -2495,6 +2536,8 @@ file.ts
'null date skips filter and returns all sessions');
} finally {
process.env.HOME = origHome;
if (origUserProfile !== undefined) process.env.USERPROFILE = origUserProfile;
else delete process.env.USERPROFILE;
if (origDir) process.env.CLAUDE_DIR = origDir;
delete require.cache[require.resolve('../../scripts/lib/utils')];
delete require.cache[require.resolve('../../scripts/lib/session-manager')];

View File

@@ -836,7 +836,8 @@ function runTests() {
console.log('\nrunCommand Edge Cases:');
if (test('runCommand returns trimmed output', () => {
const result = utils.runCommand('echo " hello "');
// Windows echo includes quotes in output, use node to ensure consistent behavior
const result = utils.runCommand('node -e "process.stdout.write(\' hello \')"');
assert.strictEqual(result.success, true);
assert.strictEqual(result.output, 'hello', 'Should trim leading/trailing whitespace');
})) passed++; else failed++;
@@ -884,6 +885,10 @@ function runTests() {
console.log('\nreadStdinJson maxSize truncation:');
if (test('readStdinJson maxSize stops accumulating after threshold (chunk-level guard)', () => {
if (process.platform === 'win32') {
console.log(' (skipped — stdin chunking behavior differs on Windows)');
return true;
}
const { execFileSync } = require('child_process');
// maxSize is a chunk-level guard: once data.length >= maxSize, no MORE chunks are added.
// A single small chunk that arrives when data.length < maxSize is added in full.
@@ -1678,6 +1683,10 @@ function runTests() {
// ── Round 110: findFiles root directory unreadable — silent empty return (not throw) ──
console.log('\nRound 110: findFiles (root directory unreadable — EACCES on readdirSync caught silently):');
if (test('findFiles returns empty array when root directory exists but is unreadable', () => {
if (process.platform === 'win32' || process.getuid?.() === 0) {
console.log(' (skipped — chmod ineffective on Windows/root)');
return true;
}
const tmpDir = fs.mkdtempSync(path.join(utils.getTempDir(), 'r110-unreadable-root-'));
const unreadableDir = path.join(tmpDir, 'no-read');
fs.mkdirSync(unreadableDir);
@@ -1697,7 +1706,7 @@ function runTests() {
'Recursive search on unreadable root should also return empty array');
} finally {
// Restore permissions before cleanup
try { fs.chmodSync(unreadableDir, 0o755); } catch {}
try { fs.chmodSync(unreadableDir, 0o755); } catch (_e) { /* ignore permission errors */ }
fs.rmSync(tmpDir, { recursive: true, force: true });
}
})) passed++; else failed++;