Mirror of https://github.com/affaan-m/everything-claude-code.git
Synced 2026-04-11 03:43:30 +08:00

Compare commits: 17 commits (47f508ec21 ... 7ccfda9e25)
Commits in range (author and date metadata not captured):

- 7ccfda9e25
- 2643e0c72f
- 1975a576c5
- f563fe2a3b
- e8495aa3fc
- 35071150b7
- 40f18885b1
- b77f49569b
- bea68549c5
- b981c765ae
- b61f549444
- 162236f463
- 04ad4737de
- 8ebb47bdd1
- e70c43bcd4
- cbccb7fdc0
- a2df9397ff
442 .agents/skills/everything-claude-code/SKILL.md (new file)
@@ -0,0 +1,442 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---

# Everything Claude Code Conventions

> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-03-20

## Overview

This skill teaches Claude the development patterns and conventions used in everything-claude-code.

## Tech Stack

- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate

## When to Use This Skill

Activate this skill when:

- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
## Commit Conventions

Follow these commit message conventions, based on 500 analyzed commits.

### Commit Style: Conventional Commits

### Prefixes Used

- `fix`
- `test`
- `feat`
- `docs`

### Message Guidelines

- Average message length: ~65 characters
- Keep the first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")

*Commit message example*

```text
feat(rules): add C# language support
```

*Commit message example*

```text
chore(deps-dev): bump flatted (#675)
```

*Commit message example*

```text
fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
```

*Commit message example*

```text
docs: add Antigravity setup and usage guide (#552)
```

*Commit message example*

```text
merge: PR #529 — feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
```

*Commit message example*

```text
Revert "Add Kiro IDE support (.kiro/) (#548)"
```

*Commit message example*

```text
Add Kiro IDE support (.kiro/) (#548)
```

*Commit message example*

```text
feat: add block-no-verify hook for Claude Code and Cursor (#649)
```
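These conventions can be checked mechanically in a pre-commit or CI step. A minimal sketch, assuming the prefix list above; `isConventional` is illustrative, not a function from this repo:

```javascript
// Hypothetical checker for the conventional-commit shape described above.
const PREFIXES = ['feat', 'fix', 'test', 'docs', 'chore', 'refactor'];

function isConventional(subject) {
  // Shape: type(optional-scope)!: description, e.g. "feat(rules): add C# language support"
  const match = subject.match(/^([a-z]+)(\([^)]+\))?!?: .+/);
  return match !== null && PREFIXES.includes(match[1]);
}

console.log(isConventional('feat(rules): add C# language support')); // true
console.log(isConventional('updated some files'));                   // false
```

Note that merge and revert commits in the history above intentionally fall outside this shape, so a real check would likely skip them.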
## Architecture

### Project Structure: Single Package

This project uses **hybrid** module organization.

### Configuration Files

- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`

### Guidelines

- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style

### Language: JavaScript

### Naming Conventions

| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |

### Import Style: Relative Imports

### Export Style: Mixed Style

*Preferred import style*

```javascript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing

### Test Framework

No specific test framework detected — use the repository's existing test patterns.

### File Pattern: `*.test.js`

### Test Types

- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services

### Coverage

This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling

### Error Handling Style: Try-Catch Blocks

*Standard error handling pattern*

```javascript
// Inside an async function:
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
## Common Workflows

These workflows were detected from analyzing commit patterns.

### Database Migration

Database schema changes with migration files.

**Frequency**: ~2 times per month

**Steps**:

1. Create migration file
2. Update schema definitions
3. Generate/update types

**Files typically involved**:

- `**/schema.*`
- `migrations/*`

**Example commit sequence**:

```
feat: implement --with/--without selective install flags (#679)
fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693)
feat(rules): add Rust language rules (rebased #660) (#686)
```
### Feature Development

Standard feature implementation workflow.

**Frequency**: ~22 times per month

**Steps**:

1. Add feature implementation
2. Add tests for feature
3. Update documentation

**Files typically involved**:

- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`

**Example commit sequence**:

```
feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
docs(skills): align documentation-lookup with CONTRIBUTING template; add cross-harness (Codex/Cursor) skill copies
fix: address PR review — skill template (When to use, How it works, Examples), bun.lock, next build note, rust-reviewer CI note, doc-lookup privacy/uncertainty
```
### Add Language Rules

Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.

**Frequency**: ~2 times per month

**Steps**:

1. Create a new directory under rules/{language}/
2. Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
3. Optionally reference or link to related skills

**Files typically involved**:

- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`

**Example commit sequence**:

```
Create a new directory under rules/{language}/
Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
Optionally reference or link to related skills
```
### Add New Skill

Adds a new skill to the system, documenting its workflow, triggers, and usage, often with supporting scripts.

**Frequency**: ~4 times per month

**Steps**:

1. Create a new directory under skills/{skill-name}/
2. Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
3. Optionally add scripts or supporting files under skills/{skill-name}/scripts/
4. Address review feedback and iterate on documentation

**Files typically involved**:

- `skills/*/SKILL.md`
- `skills/*/scripts/*.sh`
- `skills/*/scripts/*.js`

**Example commit sequence**:

```
Create a new directory under skills/{skill-name}/
Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
Optionally add scripts or supporting files under skills/{skill-name}/scripts/
Address review feedback and iterate on documentation
```
### Add New Agent

Adds a new agent to the system for code review, build resolution, or other automated tasks.

**Frequency**: ~2 times per month

**Steps**:

1. Create a new agent markdown file under agents/{agent-name}.md
2. Register the agent in AGENTS.md
3. Optionally update README.md and docs/COMMAND-AGENT-MAP.md

**Files typically involved**:

- `agents/*.md`
- `AGENTS.md`
- `README.md`
- `docs/COMMAND-AGENT-MAP.md`

**Example commit sequence**:

```
Create a new agent markdown file under agents/{agent-name}.md
Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```
### Add New Command

Adds a new command to the system, often paired with a backing skill.

**Frequency**: ~1 time per month

**Steps**:

1. Create a new markdown file under commands/{command-name}.md
2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md

**Files typically involved**:

- `commands/*.md`
- `skills/*/SKILL.md`

**Example commit sequence**:

```
Create a new markdown file under commands/{command-name}.md
Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
```
### Sync Catalog Counts

Synchronizes the documented counts of agents, skills, and commands in AGENTS.md and README.md with the actual repository state.

**Frequency**: ~3 times per month

**Steps**:

1. Update agent, skill, and command counts in AGENTS.md
2. Update the same counts in README.md (quick-start, comparison table, etc.)
3. Optionally update other documentation files

**Files typically involved**:

- `AGENTS.md`
- `README.md`

**Example commit sequence**:

```
Update agent, skill, and command counts in AGENTS.md
Update the same counts in README.md (quick-start, comparison table, etc.)
Optionally update other documentation files
```
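The counting half of this workflow can be sketched as a pure function; the globs follow the layout conventions described earlier in this document, and `catalogCounts` is an illustrative assumption, not the repo's actual sync script:

```javascript
// Derive the catalog counts that AGENTS.md and README.md should report
// from a list of repository file paths.
function catalogCounts(paths) {
  return {
    agents: paths.filter((p) => /^agents\/[^/]+\.md$/.test(p)).length,
    skills: paths.filter((p) => /^skills\/[^/]+\/SKILL\.md$/.test(p)).length,
    commands: paths.filter((p) => /^commands\/[^/]+\.md$/.test(p)).length,
  };
}

console.log(catalogCounts([
  'agents/rust-reviewer.md',
  'skills/bun-runtime/SKILL.md',
  'commands/add-language-rules.md',
  'README.md',
])); // { agents: 1, skills: 1, commands: 1 }
```

Comparing this output against the numbers written in AGENTS.md and README.md would catch drift like the `fix: sync catalog counts with filesystem` commit quoted above.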
### Add Cross Harness Skill Copies

Adds skill copies for different agent harnesses (e.g., Codex, Cursor, Antigravity) to ensure compatibility across platforms.

**Frequency**: ~2 times per month

**Steps**:

1. Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
2. Optionally add harness-specific openai.yaml or config files
3. Address review feedback to align with CONTRIBUTING template

**Files typically involved**:

- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
- `.agents/skills/*/agents/openai.yaml`

**Example commit sequence**:

```
Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
Optionally add harness-specific openai.yaml or config files
Address review feedback to align with CONTRIBUTING template
```
### Add Or Update Hook

Adds or updates git or bash hooks to enforce workflow, quality, or security policies.

**Frequency**: ~1 time per month

**Steps**:

1. Add or update hook scripts in hooks/ or scripts/hooks/
2. Register the hook in hooks/hooks.json or similar config
3. Optionally add or update tests in tests/hooks/

**Files typically involved**:

- `hooks/*.hook`
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `tests/hooks/*.test.js`
- `.cursor/hooks.json`

**Example commit sequence**:

```
Add or update hook scripts in hooks/ or scripts/hooks/
Register the hook in hooks/hooks.json or similar config
Optionally add or update tests in tests/hooks/
```
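As one concrete illustration, the core of a guard like the block-no-verify hook mentioned in the commit examples could be a small predicate over the proposed shell command. This is a hedged sketch: the wiring described in the comments (JSON on stdin, nonzero exit to block) is an assumption about the hook protocol, not taken from this repo's scripts:

```javascript
// Hypothetical hook predicate: reject tool commands that bypass git hooks.
// In a real hook script this would read the proposed tool call as JSON on
// stdin, apply the predicate to the shell command, and exit nonzero
// (assumed convention) to block the call.
function shouldBlock(command) {
  return /\bgit\b[^\n]*--no-verify\b/.test(command);
}

console.log(shouldBlock('git commit --no-verify -m "wip"')); // true
console.log(shouldBlock('git commit -m "wip"'));             // false
```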
### Address Review Feedback

Addresses code review feedback by updating documentation, scripts, or configuration for clarity, correctness, or convention alignment.

**Frequency**: ~4 times per month

**Steps**:

1. Edit SKILL.md, agent, or command files to address reviewer comments
2. Update examples, headings, or configuration as requested
3. Iterate until all review feedback is resolved

**Files typically involved**:

- `skills/*/SKILL.md`
- `agents/*.md`
- `commands/*.md`
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`

**Example commit sequence**:

```
Edit SKILL.md, agent, or command files to address reviewer comments
Update examples, headings, or configuration as requested
Iterate until all review feedback is resolved
```
## Best Practices

Based on analysis of the codebase, follow these practices:

### Do

- Use conventional commit format (feat:, fix:, etc.)
- Follow the *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports

### Don't

- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion

---

*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*
6 .agents/skills/everything-claude-code/agents/openai.yaml (new file)
@@ -0,0 +1,6 @@

interface:
  display_name: "Everything Claude Code"
  short_description: "Repo-specific patterns and workflows for everything-claude-code"
  default_prompt: "Use the everything-claude-code repo skill to follow existing architecture, testing, and workflow conventions."
policy:
  allow_implicit_invocation: true
39 .claude/commands/add-language-rules.md (new file)
@@ -0,0 +1,39 @@

---
name: add-language-rules
description: Workflow command scaffold for add-language-rules in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---

# /add-language-rules

Use this workflow when working on **add-language-rules** in `everything-claude-code`.

## Goal

Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.

## Common Files

- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`

## Suggested Sequence

1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.

## Typical Commit Signals

- Create a new directory under rules/{language}/
- Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
- Optionally reference or link to related skills

## Notes

- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.
36 .claude/commands/database-migration.md (new file)
@@ -0,0 +1,36 @@

---
name: database-migration
description: Workflow command scaffold for database-migration in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---

# /database-migration

Use this workflow when working on **database-migration** in `everything-claude-code`.

## Goal

Database schema changes with migration files.

## Common Files

- `**/schema.*`
- `migrations/*`

## Suggested Sequence

1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.

## Typical Commit Signals

- Create migration file
- Update schema definitions
- Generate/update types

## Notes

- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.
38 .claude/commands/feature-development.md (new file)
@@ -0,0 +1,38 @@

---
name: feature-development
description: Workflow command scaffold for feature-development in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---

# /feature-development

Use this workflow when working on **feature-development** in `everything-claude-code`.

## Goal

Standard feature implementation workflow.

## Common Files

- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`

## Suggested Sequence

1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.

## Typical Commit Signals

- Add feature implementation
- Add tests for feature
- Update documentation

## Notes

- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.
334 .claude/ecc-tools.json (new file)
@@ -0,0 +1,334 @@

{
  "version": "1.3",
  "schemaVersion": "1.0",
  "generatedBy": "ecc-tools",
  "generatedAt": "2026-03-20T12:07:36.496Z",
  "repo": "https://github.com/affaan-m/everything-claude-code",
  "profiles": {
    "requested": "full",
    "recommended": "full",
    "effective": "full",
    "requestedAlias": "full",
    "recommendedAlias": "full",
    "effectiveAlias": "full"
  },
  "requestedProfile": "full",
  "profile": "full",
  "recommendedProfile": "full",
  "effectiveProfile": "full",
  "tier": "enterprise",
  "requestedComponents": ["repo-baseline", "workflow-automation", "security-audits", "research-tooling", "team-rollout", "governance-controls"],
  "selectedComponents": ["repo-baseline", "workflow-automation", "security-audits", "research-tooling", "team-rollout", "governance-controls"],
  "requestedAddComponents": [],
  "requestedRemoveComponents": [],
  "blockedRemovalComponents": [],
  "tierFilteredComponents": [],
  "requestedRootPackages": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "selectedRootPackages": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "requestedPackages": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "requestedAddPackages": [],
  "requestedRemovePackages": [],
  "selectedPackages": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "packages": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "blockedRemovalPackages": [],
  "tierFilteredRootPackages": [],
  "tierFilteredPackages": [],
  "conflictingPackages": [],
  "dependencyGraph": {
    "runtime-core": [],
    "workflow-pack": ["runtime-core"],
    "agentshield-pack": ["workflow-pack"],
    "research-pack": ["workflow-pack"],
    "team-config-sync": ["runtime-core"],
    "enterprise-controls": ["team-config-sync"]
  },
  "resolutionOrder": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "requestedModules": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "selectedModules": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "modules": ["runtime-core", "workflow-pack", "agentshield-pack", "research-pack", "team-config-sync", "enterprise-controls"],
  "managedFiles": [
    ".claude/skills/everything-claude-code/SKILL.md",
    ".agents/skills/everything-claude-code/SKILL.md",
    ".agents/skills/everything-claude-code/agents/openai.yaml",
    ".claude/identity.json",
    ".codex/config.toml",
    ".codex/AGENTS.md",
    ".codex/agents/explorer.toml",
    ".codex/agents/reviewer.toml",
    ".codex/agents/docs-researcher.toml",
    ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
    ".claude/rules/everything-claude-code-guardrails.md",
    ".claude/research/everything-claude-code-research-playbook.md",
    ".claude/team/everything-claude-code-team-config.json",
    ".claude/enterprise/controls.md",
    ".claude/commands/database-migration.md",
    ".claude/commands/feature-development.md",
    ".claude/commands/add-language-rules.md"
  ],
  "packageFiles": {
    "runtime-core": [
      ".claude/skills/everything-claude-code/SKILL.md",
      ".agents/skills/everything-claude-code/SKILL.md",
      ".agents/skills/everything-claude-code/agents/openai.yaml",
      ".claude/identity.json",
      ".codex/config.toml",
      ".codex/AGENTS.md",
      ".codex/agents/explorer.toml",
      ".codex/agents/reviewer.toml",
      ".codex/agents/docs-researcher.toml",
      ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
    ],
    "agentshield-pack": [".claude/rules/everything-claude-code-guardrails.md"],
    "research-pack": [".claude/research/everything-claude-code-research-playbook.md"],
    "team-config-sync": [".claude/team/everything-claude-code-team-config.json"],
    "enterprise-controls": [".claude/enterprise/controls.md"],
    "workflow-pack": [
      ".claude/commands/database-migration.md",
      ".claude/commands/feature-development.md",
      ".claude/commands/add-language-rules.md"
    ]
  },
  "moduleFiles": {
    "runtime-core": [
      ".claude/skills/everything-claude-code/SKILL.md",
      ".agents/skills/everything-claude-code/SKILL.md",
      ".agents/skills/everything-claude-code/agents/openai.yaml",
      ".claude/identity.json",
      ".codex/config.toml",
      ".codex/AGENTS.md",
      ".codex/agents/explorer.toml",
      ".codex/agents/reviewer.toml",
      ".codex/agents/docs-researcher.toml",
      ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
    ],
    "agentshield-pack": [".claude/rules/everything-claude-code-guardrails.md"],
    "research-pack": [".claude/research/everything-claude-code-research-playbook.md"],
    "team-config-sync": [".claude/team/everything-claude-code-team-config.json"],
    "enterprise-controls": [".claude/enterprise/controls.md"],
    "workflow-pack": [
      ".claude/commands/database-migration.md",
      ".claude/commands/feature-development.md",
      ".claude/commands/add-language-rules.md"
    ]
  },
  "files": [
    { "moduleId": "runtime-core", "path": ".claude/skills/everything-claude-code/SKILL.md", "description": "Repository-specific Claude Code skill generated from git history." },
    { "moduleId": "runtime-core", "path": ".agents/skills/everything-claude-code/SKILL.md", "description": "Codex-facing copy of the generated repository skill." },
    { "moduleId": "runtime-core", "path": ".agents/skills/everything-claude-code/agents/openai.yaml", "description": "Codex skill metadata so the repo skill appears cleanly in the skill interface." },
    { "moduleId": "runtime-core", "path": ".claude/identity.json", "description": "Suggested identity.json baseline derived from repository conventions." },
    { "moduleId": "runtime-core", "path": ".codex/config.toml", "description": "Repo-local Codex MCP and multi-agent baseline aligned with ECC defaults." },
    { "moduleId": "runtime-core", "path": ".codex/AGENTS.md", "description": "Codex usage guide that points at the generated repo skill and workflow bundle." },
    { "moduleId": "runtime-core", "path": ".codex/agents/explorer.toml", "description": "Read-only explorer role config for Codex multi-agent work." },
    { "moduleId": "runtime-core", "path": ".codex/agents/reviewer.toml", "description": "Read-only reviewer role config focused on correctness and security." },
    { "moduleId": "runtime-core", "path": ".codex/agents/docs-researcher.toml", "description": "Read-only docs researcher role config for API verification." },
    { "moduleId": "runtime-core", "path": ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml", "description": "Continuous-learning instincts derived from repository patterns." },
    { "moduleId": "agentshield-pack", "path": ".claude/rules/everything-claude-code-guardrails.md", "description": "Repository guardrails distilled from analysis for security and workflow review." },
    { "moduleId": "research-pack", "path": ".claude/research/everything-claude-code-research-playbook.md", "description": "Research workflow playbook for source attribution and long-context tasks." },
    { "moduleId": "team-config-sync", "path": ".claude/team/everything-claude-code-team-config.json", "description": "Team config scaffold that points collaborators at the shared ECC bundle." },
    { "moduleId": "enterprise-controls", "path": ".claude/enterprise/controls.md", "description": "Enterprise governance scaffold for approvals, audit posture, and escalation." },
    { "moduleId": "workflow-pack", "path": ".claude/commands/database-migration.md", "description": "Workflow command scaffold for database-migration." },
    { "moduleId": "workflow-pack", "path": ".claude/commands/feature-development.md", "description": "Workflow command scaffold for feature-development." },
    { "moduleId": "workflow-pack", "path": ".claude/commands/add-language-rules.md", "description": "Workflow command scaffold for add-language-rules." }
  ],
  "workflows": [
    { "command": "database-migration", "path": ".claude/commands/database-migration.md" },
    { "command": "feature-development", "path": ".claude/commands/feature-development.md" },
    { "command": "add-language-rules", "path": ".claude/commands/add-language-rules.md" }
  ],
  "adapters": {
    "claudeCode": {
      "skillPath": ".claude/skills/everything-claude-code/SKILL.md",
      "identityPath": ".claude/identity.json",
      "commandPaths": [
        ".claude/commands/database-migration.md",
        ".claude/commands/feature-development.md",
        ".claude/commands/add-language-rules.md"
      ]
    },
    "codex": {
      "configPath": ".codex/config.toml",
      "agentsGuidePath": ".codex/AGENTS.md",
      "skillPath": ".agents/skills/everything-claude-code/SKILL.md"
    }
  }
}
15 .claude/enterprise/controls.md (new file)
@@ -0,0 +1,15 @@

# Enterprise Controls

This is a starter governance file for enterprise ECC deployments.

## Baseline

- Repository: https://github.com/affaan-m/everything-claude-code
- Recommended profile: full
- Keep install manifests, audit allowlists, and Codex baselines under review.

## Approval Expectations

- Security-sensitive workflow changes require explicit reviewer acknowledgement.
- Audit suppressions must include a reason and the narrowest viable matcher.
- Generated skills should be reviewed before broad rollout to teams.
14 .claude/identity.json (new file)
@@ -0,0 +1,14 @@

{
  "version": "2.0",
  "technicalLevel": "technical",
  "preferredStyle": {
    "verbosity": "minimal",
    "codeComments": true,
    "explanations": true
  },
  "domains": ["javascript"],
  "suggestedBy": "ecc-tools-repo-analysis",
  "createdAt": "2026-03-20T12:07:57.119Z"
}
21 .claude/research/everything-claude-code-research-playbook.md (new file)
@@ -0,0 +1,21 @@

# Everything Claude Code Research Playbook

Use this when the task is documentation-heavy, source-sensitive, or requires broad repository context.

## Defaults

- Prefer primary documentation and direct source links.
- Include concrete dates when facts may change over time.
- Keep a short evidence trail for each recommendation or conclusion.

## Suggested Flow

1. Inspect local code and docs first.
2. Browse only for unstable or external facts.
3. Summarize findings with file paths, commands, or links.

## Repo Signals

- Primary language: JavaScript
- Framework: Not detected
- Workflows detected: 10
34
.claude/rules/everything-claude-code-guardrails.md
Normal file
34
.claude/rules/everything-claude-code-guardrails.md
Normal file
@@ -0,0 +1,34 @@
# Everything Claude Code Guardrails

Generated by ECC Tools from repository history. Review before treating it as a hard policy file.

## Commit Workflow

- Prefer `conventional` commit messaging with prefixes such as fix, test, feat, docs.
- Keep new changes aligned with the existing pull-request and review flow already present in the repo.

## Architecture

- Preserve the current `hybrid` module organization.
- Respect the current test layout: `separate`.

## Code Style

- Use `camelCase` file naming.
- Prefer `relative` imports and `mixed` exports.

## ECC Defaults

- Current recommended install profile: `full`.
- Validate risky config changes in PRs and keep the install manifest in source control.

## Detected Workflows

- database-migration: Database schema changes with migration files
- feature-development: Standard feature implementation workflow
- add-language-rules: Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.

## Review Reminder

- Regenerate this bundle when repository conventions materially change.
- Keep suppressions narrow and auditable.
@@ -1,97 +1,442 @@
# Everything Claude Code
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---

Use this skill when working inside the `everything-claude-code` repository and you need repo-specific guidance instead of generic coding advice.
# Everything Claude Code Conventions

Optional companion instincts live at `.claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml` for teams using `continuous-learning-v2`.
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-03-20

## When to Use
## Overview

Activate this skill when the task touches one or more of these areas:
- cross-platform parity across Claude Code, Cursor, Codex, and OpenCode
- hook scripts, hook docs, or hook tests
- skills, commands, agents, or rules that must stay synchronized across surfaces
- release work such as version bumps, changelog updates, or plugin metadata updates
- continuous-learning or instinct workflows inside this repository
This skill teaches Claude the development patterns and conventions used in everything-claude-code.

## How It Works
## Tech Stack

### 1. Follow the repo's development contract
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate

- Use conventional commits such as `feat:`, `fix:`, `docs:`, `test:`, `chore:`.
- Keep commit subjects concise and close to the repo norm of about 70 characters.
- Prefer camelCase for JavaScript and TypeScript module filenames.
- Use kebab-case for skill directories and command filenames.
- Keep test files on the existing `*.test.js` pattern.
## When to Use This Skill

### 2. Treat the root repo as the source of truth
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format

Start from the root implementation, then mirror changes where they are intentionally shipped.
## Commit Conventions

Typical mirror targets:
- `.cursor/`
- `.codex/`
- `.opencode/`
- `.agents/`
Follow these commit message conventions based on 500 analyzed commits.

Do not assume every `.claude/` artifact needs a cross-platform copy. Only mirror files that are part of the shipped multi-platform surface.
### Commit Style: Conventional Commits

### 3. Update hooks with tests and docs together
### Prefixes Used

When changing hook behavior:
1. update `hooks/hooks.json` or the relevant script in `scripts/hooks/`
2. update matching tests in `tests/hooks/` or `tests/integration/`
3. update `hooks/README.md` if behavior or configuration changed
4. verify parity for `.cursor/hooks/` and `.opencode/plugins/` when applicable
- `fix`
- `test`
- `feat`
- `docs`

### 4. Keep release metadata in sync
### Message Guidelines

When preparing a release, verify the same version is reflected anywhere it is surfaced:
- `package.json`
- `.claude-plugin/plugin.json`
- `.claude-plugin/marketplace.json`
- Average message length: ~65 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")

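The subject-line guidelines above (conventional prefix, concise first line) can be sketched as a small check. This is a hypothetical helper for illustration only, not a script that exists in the repo:

```javascript
// Hypothetical lint for commit subjects; prefixes mirror the list in
// this skill (fix, test, feat, docs) plus common extras.
const PREFIXES = ['feat', 'fix', 'docs', 'test', 'chore', 'refactor'];

function isConventionalSubject(subject) {
  // type(optional-scope)!: imperative description, kept short
  const pattern = new RegExp(
    `^(${PREFIXES.join('|')})(\\([\\w./-]+\\))?!?: .+`
  );
  return pattern.test(subject) && subject.length <= 72;
}
```

Note that merge and revert commits in the history below intentionally fall outside this pattern.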
*Commit message example*

```text
feat(rules): add C# language support
```

*Commit message example*

```text
chore(deps-dev): bump flatted (#675)
```

*Commit message example*

```text
fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
```

*Commit message example*

```text
docs: add Antigravity setup and usage guide (#552)
```

*Commit message example*

```text
merge: PR #529 — feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
```

*Commit message example*

```text
Revert "Add Kiro IDE support (.kiro/) (#548)"
```

*Commit message example*

```text
Add Kiro IDE support (.kiro/) (#548)
```

*Commit message example*

```text
feat: add block-no-verify hook for Claude Code and Cursor (#649)
```

## Architecture

### Project Structure: Single Package

This project uses **hybrid** module organization.

### Configuration Files

- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- release notes or changelog entries when the release process expects them
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`

### 5. Be explicit about continuous-learning changes

### Guidelines

If the task touches `skills/continuous-learning-v2/` or imported instincts:
- prefer accurate, low-noise instincts over auto-generated bulk output
- keep instinct files importable by `instinct-cli.py`
- remove duplicated or contradictory instincts instead of layering more guidance on top
- This project uses a hybrid organization
- Follow existing patterns when adding new code

## Examples

## Code Style

### Naming examples

### Language: JavaScript

```text
skills/continuous-learning-v2/SKILL.md
commands/update-docs.md
scripts/hooks/session-start.js
tests/hooks/hooks.test.js
```

### Naming Conventions

| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |

### Import Style: Relative Imports

### Export Style: Mixed Style

*Preferred import style*

```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```

### Commit examples

## Testing

```text
fix: harden session summary extraction on Stop hook
docs: align Codex config examples with current schema
test: cover Windows formatter fallback behavior
```

### Test Framework

No specific test framework detected — use the repository's existing test patterns.

### File Pattern: `*.test.js`

### Test Types

- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services

### Coverage

This project has coverage reporting configured. Aim for 80%+ coverage.

## Error Handling

### Error Handling Style: Try-Catch Blocks

*Standard error handling pattern*

```typescript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```

### Skill update checklist

## Common Workflows

```text
1. Update the root skill or command.
2. Mirror it only where that surface is shipped.
3. Run targeted tests first, then the broader suite if behavior changed.
4. Review docs and release notes for user-visible changes.
```

These workflows were detected from analyzing commit patterns.

### Database Migration

Database schema changes with migration files

**Frequency**: ~2 times per month

**Steps**:
1. Create migration file
2. Update schema definitions
3. Generate/update types

**Files typically involved**:
- `**/schema.*`
- `migrations/*`

**Example commit sequence**:
```
feat: implement --with/--without selective install flags (#679)
fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693)
feat(rules): add Rust language rules (rebased #660) (#686)
```

### Release checklist
### Feature Development

```text
1. Bump package and plugin versions.
2. Run npm test.
3. Verify platform-specific manifests.
4. Publish the release notes with a human-readable summary.
```
Standard feature implementation workflow

**Frequency**: ~22 times per month

**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation

**Files typically involved**:
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`

**Example commit sequence**:
```
feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
docs(skills): align documentation-lookup with CONTRIBUTING template; add cross-harness (Codex/Cursor) skill copies
fix: address PR review — skill template (When to use, How it works, Examples), bun.lock, next build note, rust-reviewer CI note, doc-lookup privacy/uncertainty
```

### Add Language Rules

Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.

**Frequency**: ~2 times per month

**Steps**:
1. Create a new directory under rules/{language}/
2. Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
3. Optionally reference or link to related skills

**Files typically involved**:
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`

**Example commit sequence**:
```
Create a new directory under rules/{language}/
Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
Optionally reference or link to related skills
```

### Add New Skill

Adds a new skill to the system, documenting its workflow, triggers, and usage, often with supporting scripts.

**Frequency**: ~4 times per month

**Steps**:
1. Create a new directory under skills/{skill-name}/
2. Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
3. Optionally add scripts or supporting files under skills/{skill-name}/scripts/
4. Address review feedback and iterate on documentation

**Files typically involved**:
- `skills/*/SKILL.md`
- `skills/*/scripts/*.sh`
- `skills/*/scripts/*.js`

**Example commit sequence**:
```
Create a new directory under skills/{skill-name}/
Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
Optionally add scripts or supporting files under skills/{skill-name}/scripts/
Address review feedback and iterate on documentation
```

### Add New Agent

Adds a new agent to the system for code review, build resolution, or other automated tasks.

**Frequency**: ~2 times per month

**Steps**:
1. Create a new agent markdown file under agents/{agent-name}.md
2. Register the agent in AGENTS.md
3. Optionally update README.md and docs/COMMAND-AGENT-MAP.md

**Files typically involved**:
- `agents/*.md`
- `AGENTS.md`
- `README.md`
- `docs/COMMAND-AGENT-MAP.md`

**Example commit sequence**:
```
Create a new agent markdown file under agents/{agent-name}.md
Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```

### Add New Command

Adds a new command to the system, often paired with a backing skill.

**Frequency**: ~1 time per month

**Steps**:
1. Create a new markdown file under commands/{command-name}.md
2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md

**Files typically involved**:
- `commands/*.md`
- `skills/*/SKILL.md`

**Example commit sequence**:
```
Create a new markdown file under commands/{command-name}.md
Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
```

### Sync Catalog Counts

Synchronizes the documented counts of agents, skills, and commands in AGENTS.md and README.md with the actual repository state.

**Frequency**: ~3 times per month

**Steps**:
1. Update agent, skill, and command counts in AGENTS.md
2. Update the same counts in README.md (quick-start, comparison table, etc.)
3. Optionally update other documentation files

**Files typically involved**:
- `AGENTS.md`
- `README.md`

**Example commit sequence**:
```
Update agent, skill, and command counts in AGENTS.md
Update the same counts in README.md (quick-start, comparison table, etc.)
Optionally update other documentation files
```

### Add Cross Harness Skill Copies

Adds skill copies for different agent harnesses (e.g., Codex, Cursor, Antigravity) to ensure compatibility across platforms.

**Frequency**: ~2 times per month

**Steps**:
1. Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
2. Optionally add harness-specific openai.yaml or config files
3. Address review feedback to align with CONTRIBUTING template

**Files typically involved**:
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
- `.agents/skills/*/agents/openai.yaml`

**Example commit sequence**:
```
Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
Optionally add harness-specific openai.yaml or config files
Address review feedback to align with CONTRIBUTING template
```

### Add Or Update Hook

Adds or updates git or bash hooks to enforce workflow, quality, or security policies.

**Frequency**: ~1 time per month

**Steps**:
1. Add or update hook scripts in hooks/ or scripts/hooks/
2. Register the hook in hooks/hooks.json or similar config
3. Optionally add or update tests in tests/hooks/

**Files typically involved**:
- `hooks/*.hook`
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `tests/hooks/*.test.js`
- `.cursor/hooks.json`

**Example commit sequence**:
```
Add or update hook scripts in hooks/ or scripts/hooks/
Register the hook in hooks/hooks.json or similar config
Optionally add or update tests in tests/hooks/
```

### Address Review Feedback

Addresses code review feedback by updating documentation, scripts, or configuration for clarity, correctness, or convention alignment.

**Frequency**: ~4 times per month

**Steps**:
1. Edit SKILL.md, agent, or command files to address reviewer comments
2. Update examples, headings, or configuration as requested
3. Iterate until all review feedback is resolved

**Files typically involved**:
- `skills/*/SKILL.md`
- `agents/*.md`
- `commands/*.md`
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`

**Example commit sequence**:
```
Edit SKILL.md, agent, or command files to address reviewer comments
Update examples, headings, or configuration as requested
Iterate until all review feedback is resolved
```

## Best Practices

Based on analysis of the codebase, follow these practices:

### Do

- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports

### Don't

- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion

---

*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*

15  .claude/team/everything-claude-code-team-config.json  Normal file
@@ -0,0 +1,15 @@
{
  "version": "1.0",
  "generatedBy": "ecc-tools",
  "profile": "full",
  "sharedSkills": [
    ".claude/skills/everything-claude-code/SKILL.md",
    ".agents/skills/everything-claude-code/SKILL.md"
  ],
  "commandFiles": [
    ".claude/commands/database-migration.md",
    ".claude/commands/feature-development.md",
    ".claude/commands/add-language-rules.md"
  ],
  "updatedAt": "2026-03-20T12:07:36.496Z"
}
@@ -6,4 +6,4 @@ developer_instructions = """
Verify APIs, framework behavior, and release-note claims against primary documentation before changes land.
Cite the exact docs or file paths that support each claim.
Do not invent undocumented behavior.
"""

@@ -6,4 +6,4 @@ developer_instructions = """
Stay in exploration mode.
Trace the real execution path, cite files and symbols, and avoid proposing fixes unless the parent agent asks for them.
Prefer targeted search and file reads over broad scans.
"""

@@ -6,4 +6,4 @@ developer_instructions = """
Review like an owner.
Prioritize correctness, security, behavioral regressions, and missing tests.
Lead with concrete findings and avoid style-only feedback unless it hides a real bug.
"""
@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** providing 27 specialized agents, 114 skills, 59 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 28 specialized agents, 116 skills, 59 commands, and automated hook workflows for software development.

**Version:** 1.9.0

@@ -141,8 +141,8 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
## Project Structure

```
agents/ — 27 specialized subagents
skills/ — 114 workflow skills and domain knowledge
agents/ — 28 specialized subagents
skills/ — 115 workflow skills and domain knowledge
commands/ — 59 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)

@@ -203,7 +203,7 @@ For manual install instructions see the README in the `rules/` folder.
/plugin list everything-claude-code@everything-claude-code
```

✨ **That's it!** You now have access to 27 agents, 114 skills, and 59 commands.
✨ **That's it!** You now have access to 28 agents, 116 skills, and 59 commands.

---

@@ -264,7 +264,7 @@ everything-claude-code/
| |-- plugin.json # Plugin metadata and component paths
| |-- marketplace.json # Marketplace catalog for /plugin marketplace add
|
|-- agents/ # 27 specialized subagents for delegation
|-- agents/ # 28 specialized subagents for delegation
| |-- planner.md # Feature implementation planning
| |-- architect.md # System design decisions
| |-- tdd-guide.md # Test-driven development

@@ -1069,9 +1069,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.

| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | ✅ 27 agents | ✅ 12 agents | **Claude Code leads** |
| Agents | ✅ 28 agents | ✅ 12 agents | **Claude Code leads** |
| Commands | ✅ 59 commands | ✅ 31 commands | **Claude Code leads** |
| Skills | ✅ 114 skills | ✅ 37 skills | **Claude Code leads** |
| Skills | ✅ 116 skills | ✅ 37 skills | **Claude Code leads** |
| Hooks | ✅ 8 event types | ✅ 11 events | **OpenCode has more!** |
| Rules | ✅ 29 rules | ✅ 13 instructions | **Claude Code leads** |
| MCP Servers | ✅ 14 servers | ✅ Full | **Full parity** |

243  agents/flutter-reviewer.md  Normal file
@@ -0,0 +1,243 @@
---
name: flutter-reviewer
description: Flutter and Dart code reviewer. Reviews Flutter code for widget best practices, state management patterns, Dart idioms, performance pitfalls, accessibility, and clean architecture violations. Library-agnostic — works with any state management solution and tooling.
tools: ["Read", "Grep", "Glob", "Bash"]
model: sonnet
---

You are a senior Flutter and Dart code reviewer ensuring idiomatic, performant, and maintainable code.

## Your Role

- Review Flutter/Dart code for idiomatic patterns and framework best practices
- Detect state management anti-patterns and widget rebuild issues regardless of which solution is used
- Enforce the project's chosen architecture boundaries
- Identify performance, accessibility, and security issues
- You DO NOT refactor or rewrite code — you report findings only

## Workflow

### Step 1: Gather Context

Run `git diff --staged` and `git diff` to see changes. If no diff, check `git log --oneline -5`. Identify changed Dart files.

### Step 2: Understand Project Structure

Check for:
- `pubspec.yaml` — dependencies and project type
- `analysis_options.yaml` — lint rules
- `CLAUDE.md` — project-specific conventions
- Whether this is a monorepo (melos) or single-package project
- **Identify the state management approach** (BLoC, Riverpod, Provider, GetX, MobX, Signals, or built-in). Adapt review to the chosen solution's conventions.
- **Identify the routing and DI approach** to avoid flagging idiomatic usage as violations

### Step 2b: Security Review

Check before continuing — if any CRITICAL security issue is found, stop and hand off to `security-reviewer`:
- Hardcoded API keys, tokens, or secrets in Dart source
- Sensitive data in plaintext storage instead of platform-secure storage
- Missing input validation on user input and deep link URLs
- Cleartext HTTP traffic; sensitive data logged via `print()`/`debugPrint()`
- Exported Android components and iOS URL schemes without proper guards

### Step 3: Read and Review

Read changed files fully. Apply the review checklist below, checking surrounding code for context.

### Step 4: Report Findings

Use the output format below. Only report issues with >80% confidence.

**Noise control:**
- Consolidate similar issues (e.g. "5 widgets missing `const` constructors" not 5 separate findings)
- Skip stylistic preferences unless they violate project conventions or cause functional issues
- Only flag unchanged code for CRITICAL security issues
- Prioritize bugs, security, data loss, and correctness over style

## Review Checklist
|
||||
|
||||
### Architecture (CRITICAL)
|
||||
|
||||
Adapt to the project's chosen architecture (Clean Architecture, MVVM, feature-first, etc.):
|
||||
|
||||
- **Business logic in widgets** — Complex logic belongs in a state management component, not in `build()` or callbacks
|
||||
- **Data models leaking across layers** — If the project separates DTOs and domain entities, they must be mapped at boundaries; if models are shared, review for consistency
|
||||
- **Cross-layer imports** — Imports must respect the project's layer boundaries; inner layers must not depend on outer layers
|
||||
- **Framework leaking into pure-Dart layers** — If the project has a domain/model layer intended to be framework-free, it must not import Flutter or platform code
|
||||
- **Circular dependencies** — Package A depends on B and B depends on A
|
||||
- **Private `src/` imports across packages** — Importing `package:other/src/internal.dart` breaks Dart package encapsulation
|
||||
- **Direct instantiation in business logic** — State managers should receive dependencies via injection, not construct them internally
|
||||
- **Missing abstractions at layer boundaries** — Concrete classes imported across layers instead of depending on interfaces
|
||||
|
||||
### State Management (CRITICAL)
|
||||
|
||||
**Universal (all solutions):**
|
||||
- **Boolean flag soup** — `isLoading`/`isError`/`hasData` as separate fields allows impossible states; use sealed types, union variants, or the solution's built-in async state type
|
||||
- **Non-exhaustive state handling** — All state variants must be handled exhaustively; unhandled variants silently break
|
||||
- **Single responsibility violated** — Avoid "god" managers handling unrelated concerns
|
||||
- **Direct API/DB calls from widgets** — Data access should go through a service/repository layer
|
||||
- **Subscribing in `build()`** — Never call `.listen()` inside build methods; use declarative builders
|
||||
- **Stream/subscription leaks** — All manual subscriptions must be cancelled in `dispose()`/`close()`
|
||||
- **Missing error/loading states** — Every async operation must model loading, success, and error distinctly
|
||||
|
||||
**Immutable-state solutions (BLoC, Riverpod, Redux):**
|
||||
- **Mutable state** — State must be immutable; create new instances via `copyWith`, never mutate in-place
|
||||
- **Missing value equality** — State classes must implement `==`/`hashCode` so the framework detects changes
|
||||
|
||||
**Reactive-mutation solutions (MobX, GetX, Signals):**
|
||||
- **Mutations outside reactivity API** — State must only change through `@action`, `.value`, `.obs`, etc.; direct mutation bypasses tracking
|
||||
- **Missing computed state** — Derivable values should use the solution's computed mechanism, not be stored redundantly
|
||||
|
||||
**Cross-component dependencies:**
|
||||
- In **Riverpod**, `ref.watch` between providers is expected — flag only circular or tangled chains
|
||||
- In **BLoC**, blocs should not directly depend on other blocs — prefer shared repositories
|
||||
- In other solutions, follow documented conventions for inter-component communication
|
||||
|
||||
### Widget Composition (HIGH)
|
||||
|
||||
- **Oversized `build()`** — Exceeding ~80 lines; extract subtrees to separate widget classes
|
||||
- **`_build*()` helper methods** — Private methods returning widgets prevent framework optimizations; extract to classes
|
||||
- **Missing `const` constructors** — Widgets with all-final fields must declare `const` to prevent unnecessary rebuilds
|
||||
- **Object allocation in parameters** — Inline `TextStyle(...)` without `const` causes rebuilds
|
||||
- **`StatefulWidget` overuse** — Prefer `StatelessWidget` when no mutable local state is needed
|
||||
- **Missing `key` in list items** — `ListView.builder` items without stable `ValueKey` cause state bugs
|
||||
- **Hardcoded colors/text styles** — Use `Theme.of(context).colorScheme`/`textTheme`; hardcoded styles break dark mode
|
||||
- **Hardcoded spacing** — Prefer design tokens or named constants over magic numbers
|
||||
|
||||
### Performance (HIGH)

- **Unnecessary rebuilds** — State consumers wrapping too much tree; scope narrow and use selectors
- **Expensive work in `build()`** — Sorting, filtering, regex, or I/O in build; compute in the state layer
- **`MediaQuery.of(context)` overuse** — Use specific accessors (`MediaQuery.sizeOf(context)`)
- **Concrete list constructors for large data** — Use `ListView.builder`/`GridView.builder` for lazy construction
- **Missing image optimization** — No caching, no `cacheWidth`/`cacheHeight`, full-res thumbnails
- **`Opacity` in animations** — Use `AnimatedOpacity` or `FadeTransition`
- **Missing `const` propagation** — `const` widgets stop rebuild propagation; use wherever possible
- **`IntrinsicHeight`/`IntrinsicWidth` overuse** — Cause extra layout passes; avoid in scrollable lists
- **`RepaintBoundary` missing** — Complex independently-repainting subtrees should be wrapped

### Dart Idioms (MEDIUM)

- **Missing type annotations / implicit `dynamic`** — Enable `strict-casts`, `strict-inference`, `strict-raw-types` to catch these
- **`!` bang overuse** — Prefer `?.`, `??`, `case var v?`, or `requireNotNull`
- **Broad exception catching** — `catch (e)` without `on` clause; specify exception types
- **Catching `Error` subtypes** — `Error` indicates bugs, not recoverable conditions
- **`var` where `final` works** — Prefer `final` for locals, `const` for compile-time constants
- **Relative imports** — Use `package:` imports for consistency
- **Missing Dart 3 patterns** — Prefer switch expressions and `if-case` over verbose `is` checks
- **`print()` in production** — Use `dart:developer` `log()` or the project's logging package
- **`late` overuse** — Prefer nullable types or constructor initialization
- **Ignoring `Future` return values** — Use `await` or mark with `unawaited()`
- **Unused `async`** — Functions marked `async` that never `await` add unnecessary overhead
- **Mutable collections exposed** — Public APIs should return unmodifiable views
- **String concatenation in loops** — Use `StringBuffer` for iterative building
- **Mutable fields in `const` classes** — Fields in `const` constructor classes must be final

### Resource Lifecycle (HIGH)

- **Missing `dispose()`** — Every resource from `initState()` (controllers, subscriptions, timers) must be disposed
- **`BuildContext` used after `await`** — Check `context.mounted` (Flutter 3.7+) before navigation/dialogs after async gaps
- **`setState` after `dispose`** — Async callbacks must check `mounted` before calling `setState`
- **`BuildContext` stored in long-lived objects** — Never store context in singletons or static fields
- **Unclosed `StreamController`** / **`Timer` not cancelled** — Must be cleaned up in `dispose()`
- **Duplicated lifecycle logic** — Identical init/dispose blocks should be extracted to reusable patterns

### Error Handling (HIGH)

- **Missing global error capture** — Both `FlutterError.onError` and `PlatformDispatcher.instance.onError` must be set
- **No error reporting service** — Crashlytics/Sentry or equivalent should be integrated with non-fatal reporting
- **Missing state management error observer** — Wire errors to reporting (BlocObserver, ProviderObserver, etc.)
- **Red screen in production** — `ErrorWidget.builder` not customized for release mode
- **Raw exceptions reaching UI** — Map to user-friendly, localized messages before presentation layer

### Testing (HIGH)

- **Missing unit tests** — State manager changes must have corresponding tests
- **Missing widget tests** — New/changed widgets should have widget tests
- **Missing golden tests** — Design-critical components should have pixel-perfect regression tests
- **Untested state transitions** — All paths (loading→success, loading→error, retry, empty) must be tested
- **Test isolation violated** — External dependencies must be mocked; no shared mutable state between tests
- **Flaky async tests** — Use `pumpAndSettle` or explicit `pump(Duration)`, not timing assumptions

### Accessibility (MEDIUM)

- **Missing semantic labels** — Images without `semanticLabel`, icons without `tooltip`
- **Small tap targets** — Interactive elements below 48x48 pixels
- **Color-only indicators** — Color alone conveying meaning without icon/text alternative
- **Missing `ExcludeSemantics`/`MergeSemantics`** — Decorative elements and related widget groups need proper semantics
- **Text scaling ignored** — Hardcoded sizes that don't respect system accessibility settings

### Platform, Responsive & Navigation (MEDIUM)

- **Missing `SafeArea`** — Content obscured by notches/status bars
- **Broken back navigation** — Android back button or iOS swipe-to-go-back not working as expected
- **Missing platform permissions** — Required permissions not declared in `AndroidManifest.xml` or `Info.plist`
- **No responsive layout** — Fixed layouts that break on tablets/desktops/landscape
- **Text overflow** — Unbounded text without `Flexible`/`Expanded`/`FittedBox`
- **Mixed navigation patterns** — `Navigator.push` mixed with declarative router; pick one
- **Hardcoded route paths** — Use constants, enums, or generated routes
- **Missing deep link validation** — URLs not sanitized before navigation
- **Missing auth guards** — Protected routes accessible without redirect

### Internationalization (MEDIUM)

- **Hardcoded user-facing strings** — All visible text must use a localization system
- **String concatenation for localized text** — Use parameterized messages
- **Locale-unaware formatting** — Dates, numbers, currencies must use locale-aware formatters

### Dependencies & Build (LOW)

- **No strict static analysis** — Project should have strict `analysis_options.yaml`
- **Stale/unused dependencies** — Run `flutter pub outdated`; remove unused packages
- **Dependency overrides in production** — Only with comment linking to tracking issue
- **Unjustified lint suppressions** — `// ignore:` without explanatory comment
- **Hardcoded path deps in monorepo** — Use workspace resolution, not `path: ../../`

### Security (CRITICAL)

- **Hardcoded secrets** — API keys, tokens, or credentials in Dart source
- **Insecure storage** — Sensitive data in plaintext instead of Keychain/EncryptedSharedPreferences
- **Cleartext traffic** — HTTP without HTTPS; missing network security config
- **Sensitive logging** — Tokens, PII, or credentials in `print()`/`debugPrint()`
- **Missing input validation** — User input passed to APIs/navigation without sanitization
- **Unsafe deep links** — Handlers that act without validation

If any CRITICAL security issue is present, stop and escalate to `security-reviewer`.

## Output Format

```
[CRITICAL] Domain layer imports Flutter framework
File: packages/domain/lib/src/usecases/user_usecase.dart:3
Issue: `import 'package:flutter/material.dart'` — domain must be pure Dart.
Fix: Move widget-dependent logic to presentation layer.

[HIGH] State consumer wraps entire screen
File: lib/features/cart/presentation/cart_page.dart:42
Issue: Consumer rebuilds entire page on every state change.
Fix: Narrow scope to the subtree that depends on changed state, or use a selector.
```

## Summary Format

End every review with:

```
## Review Summary

| Severity | Count | Status |
|----------|-------|--------|
| CRITICAL | 0 | pass |
| HIGH | 1 | block |
| MEDIUM | 2 | info |
| LOW | 0 | note |

Verdict: BLOCK — HIGH issues must be fixed before merge.
```

## Approval Criteria

- **Approve**: No CRITICAL or HIGH issues
- **Block**: Any CRITICAL or HIGH issues — must fix before merge

Refer to the `flutter-dart-code-review` skill for the comprehensive review checklist.

@@ -71,7 +71,7 @@
 
 ## 归属
 
-本行为准则改编自 \[贡献者公约]\[homepage] 2.0 版本,可访问
+本行为准则改编自 [贡献者公约][homepage] 2.0 版本,可访问
 <https://www.contributor-covenant.org/version/2/0/code_of_conduct.html> 获取。
 
 社区影响指南的灵感来源于 [Mozilla 的行为准则执行阶梯](https://github.com/mozilla/diversity)。

@@ -315,6 +315,6 @@ result = "".join(str(item) for item in items)
 | 海象运算符 (`:=`) | 3.8+ |
 | 仅限位置参数 | 3.8+ |
 | Match 语句 | 3.10+ |
-| 类型联合 (\`x | None\`) | 3.10+ |
+| 类型联合 (`x \| None`) | 3.10+ |
 
 确保你的项目 `pyproject.toml` 或 `setup.py` 指定了正确的最低 Python 版本。

@@ -95,6 +95,16 @@
          }
        ],
        "description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1"
      },
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
          }
        ],
        "description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls"
      }
    ],
    "PreCompact": [
@@ -210,6 +220,18 @@
        "description": "Capture tool use results for continuous learning"
      }
    ],
    "PostToolUseFailure": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
          }
        ],
        "description": "Track failed MCP tool calls, mark unhealthy servers, and attempt reconnect"
      }
    ],
    "Stop": [
      {
        "matcher": "*",

72
rules/csharp/coding-style.md
Normal file
@@ -0,0 +1,72 @@
---
paths:
  - "**/*.cs"
  - "**/*.csx"
---
# C# Coding Style

> This file extends [common/coding-style.md](../common/coding-style.md) with C#-specific content.

## Standards

- Follow current .NET conventions and enable nullable reference types
- Prefer explicit access modifiers on public and internal APIs
- Keep files aligned with the primary type they define

## Types and Models

- Prefer `record` or `record struct` for immutable value-like models
- Use `class` for entities or types with identity and lifecycle
- Use `interface` for service boundaries and abstractions
- Avoid `dynamic` in application code; prefer generics or explicit models

```csharp
public sealed record UserDto(Guid Id, string Email);

public interface IUserRepository
{
    Task<UserDto?> FindByIdAsync(Guid id, CancellationToken cancellationToken);
}
```

## Immutability

- Prefer `init` setters, constructor parameters, and immutable collections for shared state
- Do not mutate input models in-place when producing updated state

```csharp
public sealed record UserProfile(string Name, string Email);

public static UserProfile Rename(UserProfile profile, string name) =>
    profile with { Name = name };
```

## Async and Error Handling

- Prefer `async`/`await` over blocking calls like `.Result` or `.Wait()`
- Pass `CancellationToken` through public async APIs
- Throw specific exceptions and log with structured properties

```csharp
public async Task<Order> LoadOrderAsync(
    Guid orderId,
    CancellationToken cancellationToken)
{
    try
    {
        return await repository.FindAsync(orderId, cancellationToken)
            ?? throw new InvalidOperationException($"Order {orderId} was not found.");
    }
    catch (Exception ex)
    {
        logger.LogError(ex, "Failed to load order {OrderId}", orderId);
        throw;
    }
}
```

## Formatting

- Use `dotnet format` for formatting and analyzer fixes
- Keep `using` directives organized and remove unused imports
- Prefer expression-bodied members only when they stay readable

25
rules/csharp/hooks.md
Normal file
@@ -0,0 +1,25 @@
---
paths:
  - "**/*.cs"
  - "**/*.csx"
  - "**/*.csproj"
  - "**/*.sln"
  - "**/Directory.Build.props"
  - "**/Directory.Build.targets"
---
# C# Hooks

> This file extends [common/hooks.md](../common/hooks.md) with C#-specific content.

## PostToolUse Hooks

Configure in `~/.claude/settings.json`:

- **dotnet format**: Auto-format edited C# files and apply analyzer fixes
- **dotnet build**: Verify the solution or project still compiles after edits
- **dotnet test --no-build**: Re-run the nearest relevant test project after behavior changes
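
As a sketch, a `dotnet format` PostToolUse entry can reuse the matcher/command shape this plugin's own hook config uses; the `Edit|Write` matcher and command string below are illustrative assumptions, not taken from this repo:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "dotnet format" }
        ],
        "description": "Auto-format edited C# files"
      }
    ]
  }
}
```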

## Stop Hooks

- Run a final `dotnet build` before ending a session with broad C# changes
- Warn on modified `appsettings*.json` files so secrets do not get committed

50
rules/csharp/patterns.md
Normal file
@@ -0,0 +1,50 @@
---
paths:
  - "**/*.cs"
  - "**/*.csx"
---
# C# Patterns

> This file extends [common/patterns.md](../common/patterns.md) with C#-specific content.

## API Response Pattern

```csharp
public sealed record ApiResponse<T>(
    bool Success,
    T? Data = default,
    string? Error = null,
    object? Meta = null);
```

## Repository Pattern

```csharp
public interface IRepository<T>
{
    Task<IReadOnlyList<T>> FindAllAsync(CancellationToken cancellationToken);
    Task<T?> FindByIdAsync(Guid id, CancellationToken cancellationToken);
    Task<T> CreateAsync(T entity, CancellationToken cancellationToken);
    Task<T> UpdateAsync(T entity, CancellationToken cancellationToken);
    Task DeleteAsync(Guid id, CancellationToken cancellationToken);
}
```

## Options Pattern

Use strongly typed options for config instead of reading raw strings throughout the codebase.

```csharp
public sealed class PaymentsOptions
{
    public const string SectionName = "Payments";
    public required string BaseUrl { get; init; }
    public required string ApiKeySecretName { get; init; }
}
```

## Dependency Injection

- Depend on interfaces at service boundaries
- Keep constructors focused; if a service needs too many dependencies, split responsibilities
- Register lifetimes intentionally: singleton for stateless/shared services, scoped for request data, transient for lightweight pure workers
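
A minimal registration sketch of these lifetime choices (the service and type names are illustrative, not from this repo):

```csharp
// Singleton: stateless and shared across the whole app.
builder.Services.AddSingleton<IClock, SystemClock>();

// Scoped: holds per-request data.
builder.Services.AddScoped<IOrderContext, OrderContext>();

// Transient: lightweight, pure worker created on demand.
builder.Services.AddTransient<IReportRenderer, ReportRenderer>();
```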
58
rules/csharp/security.md
Normal file
@@ -0,0 +1,58 @@
---
paths:
  - "**/*.cs"
  - "**/*.csx"
  - "**/*.csproj"
  - "**/appsettings*.json"
---
# C# Security

> This file extends [common/security.md](../common/security.md) with C#-specific content.

## Secret Management

- Never hardcode API keys, tokens, or connection strings in source code
- Use environment variables, user secrets for local development, and a secret manager in production
- Keep `appsettings.*.json` free of real credentials

```csharp
// BAD
const string ApiKey = "sk-live-123";

// GOOD
var apiKey = builder.Configuration["OpenAI:ApiKey"]
    ?? throw new InvalidOperationException("OpenAI:ApiKey is not configured.");
```

## SQL Injection Prevention

- Always use parameterized queries with ADO.NET, Dapper, or EF Core
- Never concatenate user input into SQL strings
- Validate sort fields and filter operators before using dynamic query composition

```csharp
const string sql = "SELECT * FROM Orders WHERE CustomerId = @customerId";
await connection.QueryAsync<Order>(sql, new { customerId });
```

## Input Validation

- Validate DTOs at the application boundary
- Use data annotations, FluentValidation, or explicit guard clauses
- Reject invalid model state before running business logic

## Authentication and Authorization

- Prefer framework auth handlers instead of custom token parsing
- Enforce authorization policies at endpoint or handler boundaries
- Never log raw tokens, passwords, or PII

## Error Handling

- Return safe client-facing messages
- Log detailed exceptions with structured context server-side
- Do not expose stack traces, SQL text, or filesystem paths in API responses

## References

See skill: `security-review` for broader application security review checklists.

46
rules/csharp/testing.md
Normal file
@@ -0,0 +1,46 @@
---
paths:
  - "**/*.cs"
  - "**/*.csx"
  - "**/*.csproj"
---
# C# Testing

> This file extends [common/testing.md](../common/testing.md) with C#-specific content.

## Test Framework

- Prefer **xUnit** for unit and integration tests
- Use **FluentAssertions** for readable assertions
- Use **Moq** or **NSubstitute** for mocking dependencies
- Use **Testcontainers** when integration tests need real infrastructure

## Test Organization

- Mirror `src/` structure under `tests/`
- Separate unit, integration, and end-to-end coverage clearly
- Name tests by behavior, not implementation details

```csharp
public sealed class OrderServiceTests
{
    [Fact]
    public async Task FindByIdAsync_ReturnsOrder_WhenOrderExists()
    {
        // Arrange
        // Act
        // Assert
    }
}
```

## ASP.NET Core Integration Tests

- Use `WebApplicationFactory<TEntryPoint>` for API integration coverage
- Test auth, validation, and serialization through HTTP, not by bypassing middleware

## Coverage

- Target 80%+ line coverage
- Focus coverage on domain logic, validation, auth, and failure paths
- Run `dotnet test` in CI with coverage collection enabled where available

588
scripts/hooks/mcp-health-check.js
Normal file
@@ -0,0 +1,588 @@
#!/usr/bin/env node
'use strict';

/**
 * MCP health-check hook.
 *
 * Compatible with Claude Code's existing hook events:
 * - PreToolUse: probe MCP server health before MCP tool execution
 * - PostToolUseFailure: mark unhealthy servers, attempt reconnect, and re-probe
 *
 * The hook persists health state outside the conversation context so it
 * survives compaction and later turns.
 */

const fs = require('fs');
const os = require('os');
const path = require('path');
const http = require('http');
const https = require('https');
const { spawn, spawnSync } = require('child_process');

const MAX_STDIN = 1024 * 1024;
const DEFAULT_TTL_MS = 2 * 60 * 1000;
const DEFAULT_TIMEOUT_MS = 5000;
const DEFAULT_BACKOFF_MS = 30 * 1000;
const MAX_BACKOFF_MS = 10 * 60 * 1000;
const HEALTHY_HTTP_CODES = new Set([200, 201, 202, 204, 301, 302, 303, 304, 307, 308, 405]);
const RECONNECT_STATUS_CODES = new Set([401, 403, 429, 503]);
const FAILURE_PATTERNS = [
  { code: 401, pattern: /\b401\b|unauthori[sz]ed|auth(?:entication)?\s+(?:failed|expired|invalid)/i },
  { code: 403, pattern: /\b403\b|forbidden|permission denied/i },
  { code: 429, pattern: /\b429\b|rate limit|too many requests/i },
  { code: 503, pattern: /\b503\b|service unavailable|overloaded|temporarily unavailable/i },
  { code: 'transport', pattern: /ECONNREFUSED|ENOTFOUND|EAI_AGAIN|timed? out|socket hang up|connection (?:failed|lost|reset|closed)/i }
];

function envNumber(name, fallback) {
  const value = Number(process.env[name]);
  return Number.isFinite(value) && value >= 0 ? value : fallback;
}

function stateFilePath() {
  if (process.env.ECC_MCP_HEALTH_STATE_PATH) {
    return path.resolve(process.env.ECC_MCP_HEALTH_STATE_PATH);
  }
  return path.join(os.homedir(), '.claude', 'mcp-health-cache.json');
}

function configPaths() {
  if (process.env.ECC_MCP_CONFIG_PATH) {
    return process.env.ECC_MCP_CONFIG_PATH
      .split(path.delimiter)
      .map(entry => entry.trim())
      .filter(Boolean)
      .map(entry => path.resolve(entry));
  }

  const cwd = process.cwd();
  const home = os.homedir();

  return [
    path.join(cwd, '.claude.json'),
    path.join(cwd, '.claude', 'settings.json'),
    path.join(home, '.claude.json'),
    path.join(home, '.claude', 'settings.json')
  ];
}

function readJsonFile(filePath) {
  try {
    return JSON.parse(fs.readFileSync(filePath, 'utf8'));
  } catch {
    return null;
  }
}

function loadState(filePath) {
  const state = readJsonFile(filePath);
  if (!state || typeof state !== 'object' || Array.isArray(state)) {
    return { version: 1, servers: {} };
  }

  if (!state.servers || typeof state.servers !== 'object' || Array.isArray(state.servers)) {
    state.servers = {};
  }

  return state;
}

function saveState(filePath, state) {
  try {
    fs.mkdirSync(path.dirname(filePath), { recursive: true });
    fs.writeFileSync(filePath, JSON.stringify(state, null, 2));
  } catch {
    // Never block the hook on state persistence errors.
  }
}

function readRawStdin() {
  return new Promise(resolve => {
    let raw = '';
    process.stdin.setEncoding('utf8');
    process.stdin.on('data', chunk => {
      if (raw.length < MAX_STDIN) {
        const remaining = MAX_STDIN - raw.length;
        raw += chunk.substring(0, remaining);
      }
    });
    process.stdin.on('end', () => resolve(raw));
    process.stdin.on('error', () => resolve(raw));
  });
}

function safeParse(raw) {
  try {
    return raw.trim() ? JSON.parse(raw) : {};
  } catch {
    return {};
  }
}

function extractMcpTarget(input) {
  const toolName = String(input.tool_name || input.name || '');
  const explicitServer = input.server
    || input.mcp_server
    || input.tool_input?.server
    || input.tool_input?.mcp_server
    || input.tool_input?.connector
    || null;
  const explicitTool = input.tool
    || input.mcp_tool
    || input.tool_input?.tool
    || input.tool_input?.mcp_tool
    || null;

  if (explicitServer) {
    return {
      server: String(explicitServer),
      tool: explicitTool ? String(explicitTool) : toolName
    };
  }

  if (!toolName.startsWith('mcp__')) {
    return null;
  }

  const segments = toolName.slice(5).split('__');
  if (segments.length < 2 || !segments[0]) {
    return null;
  }

  return {
    server: segments[0],
    tool: segments.slice(1).join('__')
  };
}

function resolveServerConfig(serverName) {
  for (const filePath of configPaths()) {
    const data = readJsonFile(filePath);
    const server = data?.mcpServers?.[serverName]
      || data?.mcp_servers?.[serverName]
      || null;

    if (server && typeof server === 'object' && !Array.isArray(server)) {
      return {
        config: server,
        source: filePath
      };
    }
  }

  return null;
}

function markHealthy(state, serverName, now, details = {}) {
  state.servers[serverName] = {
    status: 'healthy',
    checkedAt: now,
    expiresAt: now + envNumber('ECC_MCP_HEALTH_TTL_MS', DEFAULT_TTL_MS),
    failureCount: 0,
    lastError: null,
    lastFailureCode: null,
    nextRetryAt: now,
    lastRestoredAt: now,
    ...details
  };
}

function markUnhealthy(state, serverName, now, failureCode, errorMessage) {
  const previous = state.servers[serverName] || {};
  const failureCount = Number(previous.failureCount || 0) + 1;
  const backoffBase = envNumber('ECC_MCP_HEALTH_BACKOFF_MS', DEFAULT_BACKOFF_MS);
  const nextRetryDelay = Math.min(backoffBase * (2 ** Math.max(failureCount - 1, 0)), MAX_BACKOFF_MS);

  state.servers[serverName] = {
    status: 'unhealthy',
    checkedAt: now,
    expiresAt: now,
    failureCount,
    lastError: errorMessage || null,
    lastFailureCode: failureCode || null,
    nextRetryAt: now + nextRetryDelay,
    lastRestoredAt: previous.lastRestoredAt || null
  };
}

function failureSummary(input) {
  const output = input.tool_output;
  const pieces = [
    typeof input.error === 'string' ? input.error : '',
    typeof input.message === 'string' ? input.message : '',
    typeof input.tool_response === 'string' ? input.tool_response : '',
    typeof output === 'string' ? output : '',
    typeof output?.output === 'string' ? output.output : '',
    typeof output?.stderr === 'string' ? output.stderr : '',
    typeof input.tool_input?.error === 'string' ? input.tool_input.error : ''
  ].filter(Boolean);

  return pieces.join('\n');
}

function detectFailureCode(text) {
  const summary = String(text || '');
  for (const entry of FAILURE_PATTERNS) {
    if (entry.pattern.test(summary)) {
      return entry.code;
    }
  }
  return null;
}

function requestHttp(urlString, headers, timeoutMs) {
  return new Promise(resolve => {
    let settled = false;
    let timedOut = false;

    const url = new URL(urlString);
    const client = url.protocol === 'https:' ? https : http;

    const req = client.request(
      url,
      {
        method: 'GET',
        headers,
      },
      res => {
        if (settled) return;
        settled = true;
        res.resume();
        resolve({
          ok: HEALTHY_HTTP_CODES.has(res.statusCode),
          statusCode: res.statusCode,
          reason: `HTTP ${res.statusCode}`
        });
      }
    );

    req.setTimeout(timeoutMs, () => {
      timedOut = true;
      req.destroy(new Error('timeout'));
    });

    req.on('error', error => {
      if (settled) return;
      settled = true;
      resolve({
        ok: false,
        statusCode: null,
        reason: timedOut ? 'request timed out' : error.message
      });
    });

    req.end();
  });
}

function probeCommandServer(serverName, config) {
  return new Promise(resolve => {
    const command = config.command;
    const args = Array.isArray(config.args) ? config.args.map(arg => String(arg)) : [];
    const timeoutMs = envNumber('ECC_MCP_HEALTH_TIMEOUT_MS', DEFAULT_TIMEOUT_MS);
    const mergedEnv = {
      ...process.env,
      ...(config.env && typeof config.env === 'object' && !Array.isArray(config.env) ? config.env : {})
    };

    let stderr = '';
    let done = false;

    function finish(result) {
      if (done) return;
      done = true;
      resolve(result);
    }

    let child;
    try {
      child = spawn(command, args, {
        env: mergedEnv,
        cwd: process.cwd(),
        stdio: ['pipe', 'ignore', 'pipe']
      });
    } catch (error) {
      finish({
        ok: false,
        statusCode: null,
        reason: error.message
      });
      return;
    }

    child.stderr.on('data', chunk => {
      if (stderr.length < 4000) {
        const remaining = 4000 - stderr.length;
        stderr += String(chunk).slice(0, remaining);
      }
    });

    child.on('error', error => {
      finish({
        ok: false,
        statusCode: null,
        reason: error.message
      });
    });

    child.on('exit', (code, signal) => {
      finish({
        ok: false,
        statusCode: code,
        reason: stderr.trim() || `process exited before handshake (${signal || code || 'unknown'})`
      });
    });

    const timer = setTimeout(() => {
      try {
        child.kill('SIGTERM');
      } catch {
        // ignore
      }

      setTimeout(() => {
        try {
          child.kill('SIGKILL');
        } catch {
          // ignore
        }
      }, 200).unref?.();

      finish({
        ok: true,
        statusCode: null,
        reason: `${serverName} accepted a new stdio process`
      });
    }, timeoutMs);

    if (typeof timer.unref === 'function') {
      timer.unref();
    }
  });
}

async function probeServer(serverName, resolvedConfig) {
  const config = resolvedConfig.config;

  if (config.type === 'http' || config.url) {
    const result = await requestHttp(config.url, config.headers || {}, envNumber('ECC_MCP_HEALTH_TIMEOUT_MS', DEFAULT_TIMEOUT_MS));

    return {
      ok: result.ok,
      failureCode: RECONNECT_STATUS_CODES.has(result.statusCode) ? result.statusCode : null,
      reason: result.reason,
      source: resolvedConfig.source
    };
  }

  if (config.command) {
    const result = await probeCommandServer(serverName, config);

    return {
      ok: result.ok,
      failureCode: RECONNECT_STATUS_CODES.has(result.statusCode) ? result.statusCode : null,
      reason: result.reason,
      source: resolvedConfig.source
    };
  }

  return {
    ok: false,
    failureCode: null,
    reason: 'unsupported MCP server config',
    source: resolvedConfig.source
  };
}

function reconnectCommand(serverName) {
  const key = `ECC_MCP_RECONNECT_${String(serverName).toUpperCase().replace(/[^A-Z0-9]/g, '_')}`;
  const command = process.env[key] || process.env.ECC_MCP_RECONNECT_COMMAND || '';
  if (!command.trim()) {
    return null;
  }

  return command.includes('{server}')
    ? command.replace(/\{server\}/g, serverName)
    : command;
}

function attemptReconnect(serverName) {
  const command = reconnectCommand(serverName);
  if (!command) {
    return { attempted: false, success: false, reason: 'no reconnect command configured' };
  }

  const result = spawnSync(command, {
    shell: true,
    env: process.env,
    cwd: process.cwd(),
    encoding: 'utf8',
    timeout: envNumber('ECC_MCP_RECONNECT_TIMEOUT_MS', DEFAULT_TIMEOUT_MS)
  });

  if (result.error) {
    return { attempted: true, success: false, reason: result.error.message };
  }

  if (result.status !== 0) {
    return {
      attempted: true,
      success: false,
      reason: (result.stderr || result.stdout || `reconnect exited ${result.status}`).trim()
    };
  }

  return { attempted: true, success: true, reason: 'reconnect command completed' };
}

function shouldFailOpen() {
|
||||
return /^(1|true|yes)$/i.test(String(process.env.ECC_MCP_HEALTH_FAIL_OPEN || ''));
|
||||
}
|
||||
|
||||
function emitLogs(logs) {
|
||||
for (const line of logs) {
|
||||
process.stderr.write(`${line}\n`);
|
||||
}
|
||||
}
|
||||
|
||||
async function handlePreToolUse(rawInput, input, target, statePathValue, now) {
|
||||
const logs = [];
|
||||
const state = loadState(statePathValue);
|
||||
const previous = state.servers[target.server] || {};
|
||||
|
||||
if (previous.status === 'healthy' && Number(previous.expiresAt || 0) > now) {
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
if (previous.status === 'unhealthy' && Number(previous.nextRetryAt || 0) > now) {
|
||||
logs.push(
|
||||
`[MCPHealthCheck] ${target.server} is marked unhealthy until ${new Date(previous.nextRetryAt).toISOString()}; skipping ${target.tool || 'tool'}`
|
||||
);
|
||||
return { rawInput, exitCode: shouldFailOpen() ? 0 : 2, logs };
|
||||
}
|
||||
|
||||
const resolvedConfig = resolveServerConfig(target.server);
|
||||
if (!resolvedConfig) {
|
||||
logs.push(`[MCPHealthCheck] No MCP config found for ${target.server}; skipping preflight probe`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
const probe = await probeServer(target.server, resolvedConfig);
|
||||
if (probe.ok) {
|
||||
markHealthy(state, target.server, now, { source: resolvedConfig.source });
|
||||
saveState(statePathValue, state);
|
||||
|
||||
if (previous.status === 'unhealthy') {
|
||||
logs.push(`[MCPHealthCheck] ${target.server} connection restored`);
|
||||
}
|
||||
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
let reconnect = { attempted: false, success: false, reason: 'probe failed' };
|
||||
if (probe.failureCode || previous.status === 'unhealthy') {
|
||||
reconnect = attemptReconnect(target.server);
|
||||
if (reconnect.success) {
|
||||
const reprobe = await probeServer(target.server, resolvedConfig);
|
||||
if (reprobe.ok) {
|
||||
markHealthy(state, target.server, now, {
|
||||
source: resolvedConfig.source,
|
||||
restoredBy: 'reconnect-command'
|
||||
});
|
||||
saveState(statePathValue, state);
|
||||
logs.push(`[MCPHealthCheck] ${target.server} connection restored after reconnect`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
probe.reason = `${probe.reason}; reconnect reprobe failed: ${reprobe.reason}`;
|
||||
}
|
||||
}
|
||||
|
||||
markUnhealthy(state, target.server, now, probe.failureCode, probe.reason);
|
||||
saveState(statePathValue, state);
|
||||
|
||||
const reconnectSuffix = reconnect.attempted
|
||||
? ` Reconnect attempt: ${reconnect.success ? 'ok' : reconnect.reason}.`
|
||||
: '';
|
||||
logs.push(
|
||||
`[MCPHealthCheck] ${target.server} is unavailable (${probe.reason}). Blocking ${target.tool || 'tool'} so Claude can fall back to non-MCP tools.${reconnectSuffix}`
|
||||
);
|
||||
|
||||
return { rawInput, exitCode: shouldFailOpen() ? 0 : 2, logs };
|
||||
}
|
||||
|
||||
async function handlePostToolUseFailure(rawInput, input, target, statePathValue, now) {
|
||||
const logs = [];
|
||||
const summary = failureSummary(input);
|
||||
const failureCode = detectFailureCode(summary);
|
||||
|
||||
if (!failureCode) {
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
const state = loadState(statePathValue);
|
||||
markUnhealthy(state, target.server, now, failureCode, summary.slice(0, 500));
|
||||
saveState(statePathValue, state);
|
||||
|
||||
logs.push(`[MCPHealthCheck] ${target.server} reported ${failureCode}; marking server unhealthy and attempting reconnect`);
|
||||
|
||||
const reconnect = attemptReconnect(target.server);
|
||||
if (!reconnect.attempted) {
|
||||
logs.push(`[MCPHealthCheck] ${target.server} reconnect skipped: ${reconnect.reason}`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
if (!reconnect.success) {
|
||||
logs.push(`[MCPHealthCheck] ${target.server} reconnect failed: ${reconnect.reason}`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
const resolvedConfig = resolveServerConfig(target.server);
|
||||
if (!resolvedConfig) {
|
||||
logs.push(`[MCPHealthCheck] ${target.server} reconnect completed but no config was available for a follow-up probe`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
const reprobe = await probeServer(target.server, resolvedConfig);
|
||||
if (!reprobe.ok) {
|
||||
logs.push(`[MCPHealthCheck] ${target.server} reconnect command ran, but health probe still failed: ${reprobe.reason}`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
const refreshed = loadState(statePathValue);
|
||||
markHealthy(refreshed, target.server, now, {
|
||||
source: resolvedConfig.source,
|
||||
restoredBy: 'post-failure-reconnect'
|
||||
});
|
||||
saveState(statePathValue, refreshed);
|
||||
logs.push(`[MCPHealthCheck] ${target.server} connection restored`);
|
||||
return { rawInput, exitCode: 0, logs };
|
||||
}
|
||||
|
||||
async function main() {
|
||||
const rawInput = await readRawStdin();
|
||||
const input = safeParse(rawInput);
|
||||
const target = extractMcpTarget(input);
|
||||
|
||||
if (!target) {
|
||||
process.stdout.write(rawInput);
|
||||
process.exit(0);
|
||||
return;
|
||||
}
|
||||
|
||||
const eventName = process.env.CLAUDE_HOOK_EVENT_NAME || 'PreToolUse';
|
||||
const now = Date.now();
|
||||
const statePathValue = stateFilePath();
|
||||
|
||||
const result = eventName === 'PostToolUseFailure'
|
||||
? await handlePostToolUseFailure(rawInput, input, target, statePathValue, now)
|
||||
: await handlePreToolUse(rawInput, input, target, statePathValue, now);
|
||||
|
||||
emitLogs(result.logs);
|
||||
process.stdout.write(result.rawInput);
|
||||
process.exit(result.exitCode);
|
||||
}
|
||||
|
||||
main().catch(error => {
|
||||
process.stderr.write(`[MCPHealthCheck] Unexpected error: ${error.message}\n`);
|
||||
process.exit(0);
|
||||
});
|
||||
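The reconnect-command resolution shown above can be exercised standalone. A minimal sketch follows; the `env` parameter is an addition for testability (the script reads `process.env` directly), and the env var names come from the script itself:

```javascript
// Sketch of the reconnect-command lookup: a per-server override variable is
// derived from the server name, falling back to the global command, with
// "{server}" placeholders substituted.
function resolveReconnectCommand(serverName, env) {
  // e.g. "my-server" -> ECC_MCP_RECONNECT_MY_SERVER
  const key = `ECC_MCP_RECONNECT_${String(serverName).toUpperCase().replace(/[^A-Z0-9]/g, '_')}`;
  const command = env[key] || env.ECC_MCP_RECONNECT_COMMAND || '';
  if (!command.trim()) {
    return null; // nothing configured -> caller reports "no reconnect command configured"
  }
  // "{server}" placeholders are replaced with the literal server name
  return command.includes('{server}')
    ? command.replace(/\{server\}/g, serverName)
    : command;
}

console.log(resolveReconnectCommand('my-server', {
  ECC_MCP_RECONNECT_COMMAND: 'restart {server}'
}));
// -> restart my-server
```

This is why a single global `ECC_MCP_RECONNECT_COMMAND` with a `{server}` placeholder can serve every configured MCP server, while a per-server variable wins when both are set.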
@@ -40,11 +40,10 @@ async function main() {
   log(`[SessionStart] Latest: ${latest.path}`);

   // Read and inject the latest session content into Claude's context
-  const content = readFile(latest.path);
+  const content = stripAnsi(readFile(latest.path));
   if (content && !content.includes('[Session context goes here]')) {
     // Only inject if the session has actual content (not the blank template)
-    // Strip ANSI escape codes that may have leaked from terminal output (#642)
-    output(`Previous session summary:\n${stripAnsi(content)}`);
+    output(`Previous session summary:\n${content}`);
   }
 }

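The hunk above moves ANSI stripping from output time to read time, so the blank-template check also runs on cleaned text. A minimal sketch of an ANSI-stripping helper like the `stripAnsi` used there (an assumption: the project's implementation may differ) looks like this:

```javascript
// Remove CSI escape sequences (color codes like \x1b[31m, cursor moves, etc.)
// that can leak from terminal output into saved session files.
function stripAnsi(text) {
  // \x1b[ ... final byte in @-~ covers the common CSI escape sequences
  return String(text).replace(/\x1b\[[0-9;?]*[ -/]*[@-~]/g, '');
}

console.log(stripAnsi('\x1b[31mError:\x1b[0m something failed'));
// -> Error: something failed
```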
@@ -8,7 +8,7 @@ const path = require('path');
  * Returns { frontmatter: {}, body: string }.
  */
 function parseFrontmatter(content) {
-  const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---\r?\n([\s\S]*)$/);
+  const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---(?:\r?\n([\s\S]*))?$/);
   if (!match) {
     return { frontmatter: {}, body: content };
   }
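The regex change above makes the closing `---` optional-bodied: a file that ends right after the closing delimiter now parses. A quick comparison of the two patterns:

```javascript
// Old pattern: requires a newline plus body after the closing "---".
const OLD = /^---\r?\n([\s\S]*?)\r?\n---\r?\n([\s\S]*)$/;
// New pattern: the trailing newline+body group is optional.
const NEW = /^---\r?\n([\s\S]*?)\r?\n---(?:\r?\n([\s\S]*))?$/;

const bodyless = '---\nname: demo\n---';

console.log(OLD.test(bodyless)); // -> false (old regex rejects body-less files)
console.log(NEW.test(bodyless)); // -> true

// match[2] is undefined for a body-less file, hence the `match[2] || ''` guard
const match = bodyless.match(NEW);
console.log(match[1]); // -> name: demo
console.log(match[2]); // -> undefined
```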
@@ -38,21 +38,29 @@ function parseFrontmatter(content) {
     frontmatter[key] = value;
   }

-  return { frontmatter, body: match[2] };
+  return { frontmatter, body: match[2] || '' };
 }

 /**
  * Extract the first meaningful paragraph from agent body as a summary.
- * Skips headings and blank lines, returns up to maxSentences sentences.
+ * Skips headings, list items, code blocks, and table rows.
  */
 function extractSummary(body, maxSentences = 1) {
   const lines = body.split('\n');
   const paragraphs = [];
   let current = [];
+  let inCodeBlock = false;

   for (const line of lines) {
     const trimmed = line.trim();

+    // Track fenced code blocks
+    if (trimmed.startsWith('```')) {
+      inCodeBlock = !inCodeBlock;
+      continue;
+    }
+    if (inCodeBlock) continue;
+
     if (trimmed === '') {
       if (current.length > 0) {
         paragraphs.push(current.join(' '));
@@ -61,8 +69,14 @@ function extractSummary(body, maxSentences = 1) {
       continue;
     }

-    // Skip headings
-    if (trimmed.startsWith('#')) {
+    // Skip headings, list items (bold, plain, asterisk), numbered lists, table rows
+    if (
+      trimmed.startsWith('#') ||
+      trimmed.startsWith('- ') ||
+      trimmed.startsWith('* ') ||
+      /^\d+\.\s/.test(trimmed) ||
+      trimmed.startsWith('|')
+    ) {
       if (current.length > 0) {
         paragraphs.push(current.join(' '));
         current = [];
@@ -70,31 +84,21 @@ function extractSummary(body, maxSentences = 1) {
       continue;
     }

-    // Skip list items, code blocks, etc.
-    if (trimmed.startsWith('```') || trimmed.startsWith('- **') || trimmed.startsWith('|')) {
-      continue;
-    }
-
     current.push(trimmed);
   }
   if (current.length > 0) {
     paragraphs.push(current.join(' '));
   }

   // Find first non-empty paragraph
   const firstParagraph = paragraphs.find(p => p.length > 0);
-  if (!firstParagraph) {
-    return '';
-  }
+  if (!firstParagraph) return '';

   // Extract up to maxSentences sentences
   const sentences = firstParagraph.match(/[^.!?]+[.!?]+/g) || [firstParagraph];
-  return sentences.slice(0, maxSentences).join(' ').trim();
+  return sentences.slice(0, maxSentences).map(s => s.trim()).join(' ').trim();
 }

 /**
  * Load and parse a single agent file.
  * Returns the full agent object with frontmatter and body.
  */
 function loadAgent(filePath) {
   const content = fs.readFileSync(filePath, 'utf8');
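The `extractSummary` hunks combine into roughly the following behavior. This is a self-contained sketch under the assumption that the surrounding unchanged lines match what the hunks show: fenced code blocks are skipped, headings / list items / numbered lists / table rows break paragraphs, and each extracted sentence is trimmed before joining:

```javascript
// Extract the first real paragraph of a markdown body as a short summary.
function extractSummary(body, maxSentences = 1) {
  const paragraphs = [];
  let current = [];
  let inCodeBlock = false;

  for (const line of body.split('\n')) {
    const trimmed = line.trim();

    // Toggle on fence markers; everything inside a fence is ignored
    if (trimmed.startsWith('```')) { inCodeBlock = !inCodeBlock; continue; }
    if (inCodeBlock) continue;

    // Structural lines end the current paragraph
    const isBreak =
      trimmed === '' ||
      trimmed.startsWith('#') ||
      trimmed.startsWith('- ') ||
      trimmed.startsWith('* ') ||
      /^\d+\.\s/.test(trimmed) ||
      trimmed.startsWith('|');

    if (isBreak) {
      if (current.length > 0) { paragraphs.push(current.join(' ')); current = []; }
      continue;
    }
    current.push(trimmed);
  }
  if (current.length > 0) paragraphs.push(current.join(' '));

  const firstParagraph = paragraphs.find(p => p.length > 0);
  if (!firstParagraph) return '';

  // Split into sentences, trim each, and keep up to maxSentences
  const sentences = firstParagraph.match(/[^.!?]+[.!?]+/g) || [firstParagraph];
  return sentences.slice(0, maxSentences).map(s => s.trim()).join(' ').trim();
}

console.log(extractSummary('# Title\n\nDoes X. Also Y.\n\n- item'));
// -> Does X.
```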
@@ -116,9 +120,7 @@ function loadAgent(filePath) {
  * Load all agents from a directory.
  */
 function loadAgents(agentsDir) {
-  if (!fs.existsSync(agentsDir)) {
-    return [];
-  }
+  if (!fs.existsSync(agentsDir)) return [];

   return fs.readdirSync(agentsDir)
     .filter(f => f.endsWith('.md'))
@@ -127,8 +129,7 @@ function loadAgents(agentsDir) {
 }

 /**
- * Compress an agent to its catalog entry (metadata only).
- * This is the minimal representation needed for agent selection.
+ * Compress an agent to catalog entry (metadata only).
  */
 function compressToCatalog(agent) {
   return {
@@ -140,31 +141,34 @@ function compressToCatalog(agent) {
 }

 /**
- * Compress an agent to a summary entry (metadata + first paragraph).
- * More context than catalog, less than full body.
+ * Compress an agent to summary entry (metadata + first paragraph).
  */
 function compressToSummary(agent) {
   return {
-    name: agent.name,
-    description: agent.description,
-    tools: agent.tools,
-    model: agent.model,
+    ...compressToCatalog(agent),
     summary: extractSummary(agent.body),
   };
 }

+const allowedModes = ['catalog', 'summary', 'full'];
+
 /**
- * Build a full compressed catalog from a directory of agents.
+ * Build a compressed catalog from a directory of agents.
  *
  * Modes:
  * - 'catalog': name, description, tools, model only (~2-3k tokens for 27 agents)
  * - 'summary': catalog + first paragraph summary (~4-5k tokens)
  * - 'full': no compression, full body included
  *
- * Returns { agents: [], stats: { totalAgents, originalBytes, compressedTokenEstimate } }
+ * Returns { agents: [], stats: { totalAgents, originalBytes, compressedBytes, compressedTokenEstimate, mode } }
  */
 function buildAgentCatalog(agentsDir, options = {}) {
   const mode = options.mode || 'catalog';

+  if (!allowedModes.includes(mode)) {
+    throw new Error(`Invalid mode "${mode}". Allowed modes: ${allowedModes.join(', ')}`);
+  }
+
   const filter = options.filter || null;

   let agents = loadAgents(agentsDir);

@@ -207,14 +211,24 @@ function buildAgentCatalog(agentsDir, options = {}) {
 }

 /**
- * Lazy-load a single agent's full content by name from a directory.
+ * Lazy-load a single agent's full content by name.
  * Returns null if not found.
  */
 function lazyLoadAgent(agentsDir, agentName) {
-  const filePath = path.join(agentsDir, `${agentName}.md`);
-  if (!fs.existsSync(filePath)) {
+  // Validate agentName: only allow alphanumeric, hyphen, underscore
+  if (!/^[\w-]+$/.test(agentName)) {
     return null;
   }
+
+  const filePath = path.resolve(agentsDir, `${agentName}.md`);
+
+  // Verify the resolved path is still within agentsDir
+  const resolvedAgentsDir = path.resolve(agentsDir);
+  if (!filePath.startsWith(resolvedAgentsDir + path.sep)) {
+    return null;
+  }
+
+  if (!fs.existsSync(filePath)) return null;
   return loadAgent(filePath);
 }
@@ -3,7 +3,7 @@ set -euo pipefail

 # Sync Everything Claude Code (ECC) assets into a local Codex CLI setup.
 # - Backs up ~/.codex config and AGENTS.md
-# - Replaces AGENTS.md with ECC AGENTS.md
+# - Merges ECC AGENTS.md into existing AGENTS.md (marker-based, preserves user content)
 # - Syncs Codex-ready skills from .agents/skills
 # - Generates prompt files from commands/*.md
 # - Generates Codex QA wrappers and optional language rule-pack prompts

@@ -143,16 +143,68 @@ if [[ -f "$AGENTS_FILE" ]]; then
   run_or_echo "cp \"$AGENTS_FILE\" \"$BACKUP_DIR/AGENTS.md\""
 fi

-log "Replacing global AGENTS.md with ECC AGENTS + Codex supplement"
+ECC_BEGIN_MARKER="<!-- BEGIN ECC -->"
+ECC_END_MARKER="<!-- END ECC -->"
+
+compose_ecc_block() {
+  printf '%s\n' "$ECC_BEGIN_MARKER"
+  cat "$AGENTS_ROOT_SRC"
+  printf '\n\n---\n\n'
+  printf '# Codex Supplement (From ECC .codex/AGENTS.md)\n\n'
+  cat "$AGENTS_CODEX_SUPP_SRC"
+  printf '\n%s\n' "$ECC_END_MARKER"
+}
+
+log "Merging ECC AGENTS into $AGENTS_FILE (preserving user content)"
 if [[ "$MODE" == "dry-run" ]]; then
-  printf '[dry-run] compose %s from %s + %s\n' "$AGENTS_FILE" "$AGENTS_ROOT_SRC" "$AGENTS_CODEX_SUPP_SRC"
+  printf '[dry-run] merge ECC block into %s from %s + %s\n' "$AGENTS_FILE" "$AGENTS_ROOT_SRC" "$AGENTS_CODEX_SUPP_SRC"
 else
-  {
-    cat "$AGENTS_ROOT_SRC"
-    printf '\n\n---\n\n'
-    printf '# Codex Supplement (From ECC .codex/AGENTS.md)\n\n'
-    cat "$AGENTS_CODEX_SUPP_SRC"
-  } > "$AGENTS_FILE"
+  replace_ecc_section() {
+    # Replace the ECC block between markers in $AGENTS_FILE with fresh content.
+    # Uses awk to correctly handle all positions including line 1.
+    local tmp
+    tmp="$(mktemp)"
+    local ecc_tmp
+    ecc_tmp="$(mktemp)"
+    compose_ecc_block > "$ecc_tmp"
+    awk -v begin="$ECC_BEGIN_MARKER" -v end="$ECC_END_MARKER" -v ecc="$ecc_tmp" '
+      { gsub(/\r$/, "") }
+      $0 == begin { skip = 1; while ((getline line < ecc) > 0) print line; close(ecc); next }
+      $0 == end { skip = 0; next }
+      !skip { print }
+    ' "$AGENTS_FILE" > "$tmp"
+    # Write through the path (preserves symlinks) instead of mv
+    cat "$tmp" > "$AGENTS_FILE"
+    rm -f "$tmp" "$ecc_tmp"
+  }
+
+  if [[ ! -f "$AGENTS_FILE" ]]; then
+    # No existing file — create fresh with markers
+    compose_ecc_block > "$AGENTS_FILE"
+  elif awk -v b="$ECC_BEGIN_MARKER" -v e="$ECC_END_MARKER" '
+    { gsub(/\r$/, "") }
+    $0 == b { found_b = NR } $0 == e { found_e = NR }
+    END { exit !(found_b && found_e && found_b < found_e) }
+  ' "$AGENTS_FILE"; then
+    # Existing file with matched, correctly ordered ECC markers — replace only the ECC section
+    replace_ecc_section
+  elif grep -qF "$ECC_BEGIN_MARKER" "$AGENTS_FILE"; then
+    # BEGIN marker exists but END marker is missing (corrupted). Warn and
+    # replace the file entirely to restore a valid state. Backup was saved.
+    log "WARNING: found BEGIN marker but no END marker — replacing file (backup saved)"
+    compose_ecc_block > "$AGENTS_FILE"
+  else
+    # Existing file without markers — append ECC block, preserve user content.
+    # Note: legacy ECC-only files (from old '>' overwrite) will get a second copy
+    # on this first run. This is intentional — the alternative (heading-match
+    # heuristic) risks false-positive overwrites of user-authored files. The next
+    # run deduplicates via markers, and a timestamped backup was saved above.
+    log "No ECC markers found — appending managed block (backup saved)"
+    {
+      printf '\n\n'
+      compose_ecc_block
+    } >> "$AGENTS_FILE"
+  fi
 fi

 log "Syncing ECC Codex skills"
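The awk section-replacement in the shell hunk above implements a simple algorithm: lines between the BEGIN and END markers (inclusive) are swapped for a fresh block, and everything else is preserved. Re-expressed in JavaScript (the repo's primary language; the function name is illustrative) to make the logic testable:

```javascript
// Marker strings match the sync script.
const BEGIN = '<!-- BEGIN ECC -->';
const END = '<!-- END ECC -->';

// Replace the managed section between BEGIN and END with a fresh block
// (the block is expected to carry its own markers, like compose_ecc_block).
function replaceEccSection(fileText, eccBlock) {
  const out = [];
  let skip = false;
  for (const raw of fileText.split('\n')) {
    const line = raw.replace(/\r$/, ''); // mirror awk's gsub(/\r$/, "")
    if (line === BEGIN) { skip = true; out.push(eccBlock); continue; }
    if (line === END) { skip = false; continue; }
    if (!skip) out.push(line);
  }
  return out.join('\n');
}

const file = ['user intro', BEGIN, 'old ecc', END, 'user outro'].join('\n');
console.log(replaceEccSection(file, `${BEGIN}\nnew ecc\n${END}`));
// user content is preserved; only the ECC block is refreshed in place
```

Because the marker match is an exact full-line comparison (after CR stripping), user content that merely mentions the marker text inline is never swallowed.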
435
skills/flutter-dart-code-review/SKILL.md
Normal file
@@ -0,0 +1,435 @@
---
name: flutter-dart-code-review
description: Library-agnostic Flutter/Dart code review checklist covering widget best practices, state management patterns (BLoC, Riverpod, Provider, GetX, MobX, Signals), Dart idioms, performance, accessibility, security, and clean architecture.
origin: ECC
---

# Flutter/Dart Code Review Best Practices

Comprehensive, library-agnostic checklist for reviewing Flutter/Dart applications. These principles apply regardless of which state management solution, routing library, or DI framework is used.

---

## 1. General Project Health

- [ ] Project follows consistent folder structure (feature-first or layer-first)
- [ ] Proper separation of concerns: UI, business logic, data layers
- [ ] No business logic in widgets; widgets are purely presentational
- [ ] `pubspec.yaml` is clean — no unused dependencies, versions pinned appropriately
- [ ] `analysis_options.yaml` includes a strict lint set with strict analyzer settings enabled
- [ ] No `print()` statements in production code — use `dart:developer` `log()` or a logging package
- [ ] Generated files (`.g.dart`, `.freezed.dart`, `.gr.dart`) are up-to-date or in `.gitignore`
- [ ] Platform-specific code isolated behind abstractions

---

## 2. Dart Language Pitfalls

- [ ] **Implicit dynamic**: Missing type annotations leading to `dynamic` — enable `strict-casts`, `strict-inference`, `strict-raw-types`
- [ ] **Null safety misuse**: Excessive `!` (bang operator) instead of proper null checks or Dart 3 pattern matching (`if (value case var v?)`)
- [ ] **Type promotion failures**: Using `this.field` where local variable promotion would work
- [ ] **Catching too broadly**: `catch (e)` without `on` clause; always specify exception types
- [ ] **Catching `Error`**: `Error` subtypes indicate bugs and should not be caught
- [ ] **Unused `async`**: Functions marked `async` that never `await` — unnecessary overhead
- [ ] **`late` overuse**: `late` used where nullable or constructor initialization would be safer; defers errors to runtime
- [ ] **String concatenation in loops**: Use `StringBuffer` instead of `+` for iterative string building
- [ ] **Mutable state in `const` contexts**: Fields in `const` constructor classes should not be mutable
- [ ] **Ignoring `Future` return values**: Use `await` or explicitly call `unawaited()` to signal intent
- [ ] **`var` where `final` works**: Prefer `final` for locals and `const` for compile-time constants
- [ ] **Relative imports**: Use `package:` imports for consistency
- [ ] **Mutable collections exposed**: Public APIs should return unmodifiable views, not raw `List`/`Map`
- [ ] **Missing Dart 3 pattern matching**: Prefer switch expressions and `if-case` over verbose `is` checks and manual casting
- [ ] **Throwaway classes for multiple returns**: Use Dart 3 records `(String, int)` instead of single-use DTOs
- [ ] **`print()` in production code**: Use `dart:developer` `log()` or the project's logging package; `print()` has no log levels and cannot be filtered

---

## 3. Widget Best Practices

### Widget decomposition:
- [ ] No single widget with a `build()` method exceeding ~80-100 lines
- [ ] Widgets split by encapsulation AND by how they change (rebuild boundaries)
- [ ] Private `_build*()` helper methods that return widgets are extracted to separate widget classes (enables element reuse, const propagation, and framework optimizations)
- [ ] Stateless widgets preferred over Stateful where no mutable local state is needed
- [ ] Extracted widgets are in separate files when reusable

### Const usage:
- [ ] `const` constructors used wherever possible — prevents unnecessary rebuilds
- [ ] `const` literals for collections that don't change (`const []`, `const {}`)
- [ ] Constructor is declared `const` when all fields are final

### Key usage:
- [ ] `ValueKey` used in lists/grids to preserve state across reorders
- [ ] `GlobalKey` used sparingly — only when accessing state across the tree is truly needed
- [ ] `UniqueKey` avoided in `build()` — it forces rebuild every frame
- [ ] `ObjectKey` used when identity is based on a data object rather than a single value

### Theming & design system:
- [ ] Colors come from `Theme.of(context).colorScheme` — no hardcoded `Colors.red` or hex values
- [ ] Text styles come from `Theme.of(context).textTheme` — no inline `TextStyle` with raw font sizes
- [ ] Dark mode compatibility verified — no assumptions about light background
- [ ] Spacing and sizing use consistent design tokens or constants, not magic numbers

### Build method complexity:
- [ ] No network calls, file I/O, or heavy computation in `build()`
- [ ] No `Future.then()` or `async` work in `build()`
- [ ] No subscription creation (`.listen()`) in `build()`
- [ ] `setState()` localized to smallest possible subtree

---

## 4. State Management (Library-Agnostic)

These principles apply to all Flutter state management solutions (BLoC, Riverpod, Provider, GetX, MobX, Signals, ValueNotifier, etc.).

### Architecture:
- [ ] Business logic lives outside the widget layer — in a state management component (BLoC, Notifier, Controller, Store, ViewModel, etc.)
- [ ] State managers receive dependencies via injection, not by constructing them internally
- [ ] A service or repository layer abstracts data sources — widgets and state managers should not call APIs or databases directly
- [ ] State managers have a single responsibility — no "god" managers handling unrelated concerns
- [ ] Cross-component dependencies follow the solution's conventions:
  - In **Riverpod**: providers depending on providers via `ref.watch` is expected — flag only circular or overly tangled chains
  - In **BLoC**: blocs should not directly depend on other blocs — prefer shared repositories or presentation-layer coordination
  - In other solutions: follow the documented conventions for inter-component communication

### Immutability & value equality (for immutable-state solutions: BLoC, Riverpod, Redux):
- [ ] State objects are immutable — new instances created via `copyWith()` or constructors, never mutated in-place
- [ ] State classes implement `==` and `hashCode` properly (all fields included in comparison)
- [ ] Mechanism is consistent across the project — manual override, `Equatable`, `freezed`, Dart records, or other
- [ ] Collections inside state objects are not exposed as raw mutable `List`/`Map`

### Reactivity discipline (for reactive-mutation solutions: MobX, GetX, Signals):
- [ ] State is only mutated through the solution's reactive API (`@action` in MobX, `.value` on signals, `.obs` in GetX) — direct field mutation bypasses change tracking
- [ ] Derived values use the solution's computed mechanism rather than being stored redundantly
- [ ] Reactions and disposers are properly cleaned up (`ReactionDisposer` in MobX, effect cleanup in Signals)

### State shape design:
- [ ] Mutually exclusive states use sealed types, union variants, or the solution's built-in async state type (e.g. Riverpod's `AsyncValue`) — not boolean flags (`isLoading`, `isError`, `hasData`)
- [ ] Every async operation models loading, success, and error as distinct states
- [ ] All state variants are handled exhaustively in UI — no silently ignored cases
- [ ] Error states carry error information for display; loading states don't carry stale data
- [ ] Nullable data is not used as a loading indicator — states are explicit

```dart
// BAD — boolean flag soup allows impossible states
class UserState {
  bool isLoading = false;
  bool hasError = false; // isLoading && hasError is representable!
  User? user;
}

// GOOD (immutable approach) — sealed types make impossible states unrepresentable
sealed class UserState {}
class UserInitial extends UserState {}
class UserLoading extends UserState {}
class UserLoaded extends UserState {
  final User user;
  const UserLoaded(this.user);
}
class UserError extends UserState {
  final String message;
  const UserError(this.message);
}

// GOOD (reactive approach) — observable enum + data, mutations via reactivity API
// enum UserStatus { initial, loading, loaded, error }
// Use your solution's observable/signal to wrap status and data separately
```

### Rebuild optimization:
- [ ] State consumer widgets (Builder, Consumer, Observer, Obx, Watch, etc.) scoped as narrow as possible
- [ ] Selectors used to rebuild only when specific fields change — not on every state emission
- [ ] `const` widgets used to stop rebuild propagation through the tree
- [ ] Computed/derived state is calculated reactively, not stored redundantly

### Subscriptions & disposal:
- [ ] All manual subscriptions (`.listen()`) are cancelled in `dispose()` / `close()`
- [ ] Stream controllers are closed when no longer needed
- [ ] Timers are cancelled in disposal lifecycle
- [ ] Framework-managed lifecycle is preferred over manual subscription (declarative builders over `.listen()`)
- [ ] `mounted` check before `setState` in async callbacks
- [ ] `BuildContext` not used after `await` without checking `context.mounted` (Flutter 3.7+) — stale context causes crashes
- [ ] No navigation, dialogs, or scaffold messages after async gaps without verifying the widget is still mounted
- [ ] `BuildContext` never stored in singletons, state managers, or static fields

### Local vs global state:
- [ ] Ephemeral UI state (checkbox, slider, animation) uses local state (`setState`, `ValueNotifier`)
- [ ] Shared state is lifted only as high as needed — not over-globalized
- [ ] Feature-scoped state is properly disposed when the feature is no longer active

---

## 5. Performance

### Unnecessary rebuilds:
- [ ] `setState()` not called at root widget level — localize state changes
- [ ] `const` widgets used to stop rebuild propagation
- [ ] `RepaintBoundary` used around complex subtrees that repaint independently
- [ ] `AnimatedBuilder` child parameter used for subtrees independent of animation

### Expensive operations in build():
- [ ] No sorting, filtering, or mapping large collections in `build()` — compute in state management layer
- [ ] No regex compilation in `build()`
- [ ] `MediaQuery.of(context)` usage is specific (e.g., `MediaQuery.sizeOf(context)`)

### Image optimization:
- [ ] Network images use caching (any caching solution appropriate for the project)
- [ ] Appropriate image resolution for target device (no loading 4K images for thumbnails)
- [ ] `Image.asset` with `cacheWidth`/`cacheHeight` to decode at display size
- [ ] Placeholder and error widgets provided for network images

### Lazy loading:
- [ ] `ListView.builder` / `GridView.builder` used instead of `ListView(children: [...])` for large or dynamic lists (concrete constructors are fine for small, static lists)
- [ ] Pagination implemented for large data sets
- [ ] Deferred loading (`deferred as`) used for heavy libraries in web builds

### Other:
- [ ] `Opacity` widget avoided in animations — use `AnimatedOpacity` or `FadeTransition`
- [ ] Clipping avoided in animations — pre-clip images
- [ ] `operator ==` not overridden on widgets — use `const` constructors instead
- [ ] Intrinsic dimension widgets (`IntrinsicHeight`, `IntrinsicWidth`) used sparingly (extra layout pass)

---

## 6. Testing

### Test types and expectations:
- [ ] **Unit tests**: Cover all business logic (state managers, repositories, utility functions)
- [ ] **Widget tests**: Cover individual widget behavior, interactions, and visual output
- [ ] **Integration tests**: Cover critical user flows end-to-end
- [ ] **Golden tests**: Pixel-perfect comparisons for design-critical UI components

### Coverage targets:
- [ ] Aim for 80%+ line coverage on business logic
- [ ] All state transitions have corresponding tests (loading → success, loading → error, retry, etc.)
- [ ] Edge cases tested: empty states, error states, loading states, boundary values

### Test isolation:
- [ ] External dependencies (API clients, databases, services) are mocked or faked
- [ ] Each test file tests exactly one class/unit
- [ ] Tests verify behavior, not implementation details
- [ ] Stubs define only the behavior needed for each test (minimal stubbing)
- [ ] No shared mutable state between test cases

### Widget test quality:
- [ ] `pumpWidget` and `pump` used correctly for async operations
- [ ] `find.byType`, `find.text`, `find.byKey` used appropriately
- [ ] No flaky tests depending on timing — use `pumpAndSettle` or explicit `pump(Duration)`
- [ ] Tests run in CI and failures block merges

---

## 7. Accessibility

### Semantic widgets:
- [ ] `Semantics` widget used to provide screen reader labels where automatic labels are insufficient
- [ ] `ExcludeSemantics` used for purely decorative elements
- [ ] `MergeSemantics` used to combine related widgets into a single accessible element
- [ ] Images have `semanticLabel` property set

### Screen reader support:
- [ ] All interactive elements are focusable and have meaningful descriptions
- [ ] Focus order is logical (follows visual reading order)

### Visual accessibility:
- [ ] Contrast ratio >= 4.5:1 for text against background
- [ ] Tappable targets are at least 48x48 pixels
- [ ] Color is not the sole indicator of state (use icons/text alongside)
- [ ] Text scales with system font size settings

### Interaction accessibility:
- [ ] No no-op `onPressed` callbacks — every button does something or is disabled
- [ ] Error fields suggest corrections
- [ ] Context does not change unexpectedly while user is inputting data

---

## 8. Platform-Specific Concerns

### iOS/Android differences:
- [ ] Platform-adaptive widgets used where appropriate
- [ ] Back navigation handled correctly (Android back button, iOS swipe-to-go-back)
- [ ] Status bar and safe area handled via `SafeArea` widget
- [ ] Platform-specific permissions declared in `AndroidManifest.xml` and `Info.plist`

### Responsive design:
- [ ] `LayoutBuilder` or `MediaQuery` used for responsive layouts
- [ ] Breakpoints defined consistently (phone, tablet, desktop)
- [ ] Text doesn't overflow on small screens — use `Flexible`, `Expanded`, `FittedBox`
- [ ] Landscape orientation tested or explicitly locked
- [ ] Web-specific: mouse/keyboard interactions supported, hover states present

---

## 9. Security

### Secure storage:
- [ ] Sensitive data (tokens, credentials) stored using platform-secure storage (Keychain on iOS, EncryptedSharedPreferences on Android)
- [ ] Never store secrets in plaintext storage
- [ ] Biometric authentication gating considered for sensitive operations

### API key handling:
- [ ] API keys NOT hardcoded in Dart source — use `--dart-define`, `.env` files excluded from VCS, or compile-time configuration
- [ ] Secrets not committed to git — check `.gitignore`
- [ ] Backend proxy used for truly secret keys (client should never hold server secrets)
||||
### Input validation:
|
||||
- [ ] All user input validated before sending to API
|
||||
- [ ] Form validation uses proper validation patterns
|
||||
- [ ] No raw SQL or string interpolation of user input
|
||||
- [ ] Deep link URLs validated and sanitized before navigation
|
||||
|
||||
### Network security:
|
||||
- [ ] HTTPS enforced for all API calls
|
||||
- [ ] Certificate pinning considered for high-security apps
|
||||
- [ ] Authentication tokens refreshed and expired properly
|
||||
- [ ] No sensitive data logged or printed
|
||||
|
||||
---
|
||||
|
||||
## 10. Package/Dependency Review
|
||||
|
||||
### Evaluating pub.dev packages:
|
||||
- [ ] Check **pub points score** (aim for 130+/160)
|
||||
- [ ] Check **likes** and **popularity** as community signals
|
||||
- [ ] Verify the publisher is **verified** on pub.dev
|
||||
- [ ] Check last publish date — stale packages (>1 year) are a risk
|
||||
- [ ] Review open issues and response time from maintainers
|
||||
- [ ] Check license compatibility with your project
|
||||
- [ ] Verify platform support covers your targets
|
||||
|
||||
### Version constraints:
|
||||
- [ ] Use caret syntax (`^1.2.3`) for dependencies — allows compatible updates
|
||||
- [ ] Pin exact versions only when absolutely necessary
|
||||
- [ ] Run `flutter pub outdated` regularly to track stale dependencies
|
||||
- [ ] No dependency overrides in production `pubspec.yaml` — only for temporary fixes with a comment/issue link
|
||||
- [ ] Minimize transitive dependency count — each dependency is an attack surface
|
||||
|
||||
### Monorepo-specific (melos/workspace):
|
||||
- [ ] Internal packages import only from public API — no `package:other/src/internal.dart` (breaks Dart package encapsulation)
|
||||
- [ ] Internal package dependencies use workspace resolution, not hardcoded `path: ../../` relative strings
|
||||
- [ ] All sub-packages share or inherit root `analysis_options.yaml`
|
||||
|
||||
---
|
||||
|
||||
## 11. Navigation and Routing
|
||||
|
||||
### General principles (apply to any routing solution):
|
||||
- [ ] One routing approach used consistently — no mixing imperative `Navigator.push` with a declarative router
|
||||
- [ ] Route arguments are typed — no `Map<String, dynamic>` or `Object?` casting
|
||||
- [ ] Route paths defined as constants, enums, or generated — no magic strings scattered in code
|
||||
- [ ] Auth guards/redirects centralized — not duplicated across individual screens
|
||||
- [ ] Deep links configured for both Android and iOS
|
||||
- [ ] Deep link URLs validated and sanitized before navigation
|
||||
- [ ] Navigation state is testable — route changes can be verified in tests
|
||||
- [ ] Back behavior is correct on all platforms
|
||||
|
||||
---
|
||||
|
||||
## 12. Error Handling
|
||||
|
||||
### Framework error handling:
|
||||
- [ ] `FlutterError.onError` overridden to capture framework errors (build, layout, paint)
|
||||
- [ ] `PlatformDispatcher.instance.onError` set for async errors not caught by Flutter
|
||||
- [ ] `ErrorWidget.builder` customized for release mode (user-friendly instead of red screen)
|
||||
- [ ] Global error capture wrapper around `runApp` (e.g., `runZonedGuarded`, Sentry/Crashlytics wrapper)
|
||||
|
||||
### Error reporting:
|
||||
- [ ] Error reporting service integrated (Firebase Crashlytics, Sentry, or equivalent)
|
||||
- [ ] Non-fatal errors reported with stack traces
|
||||
- [ ] State management error observer wired to error reporting (e.g., BlocObserver, ProviderObserver, or equivalent for your solution)
|
||||
- [ ] User-identifiable info (user ID) attached to error reports for debugging
|
||||
|
||||
### Graceful degradation:
|
||||
- [ ] API errors result in user-friendly error UI, not crashes
|
||||
- [ ] Retry mechanisms for transient network failures
|
||||
- [ ] Offline state handled gracefully
|
||||
- [ ] Error states in state management carry error info for display
|
||||
- [ ] Raw exceptions (network, parsing) are mapped to user-friendly, localized messages before reaching the UI — never show raw exception strings to users
|
||||
|
||||
---
|
||||
|
||||
## 13. Internationalization (l10n)
|
||||
|
||||
### Setup:
|
||||
- [ ] Localization solution configured (Flutter's built-in ARB/l10n, easy_localization, or equivalent)
|
||||
- [ ] Supported locales declared in app configuration
|
||||
|
||||
### Content:
|
||||
- [ ] All user-visible strings use the localization system — no hardcoded strings in widgets
|
||||
- [ ] Template file includes descriptions/context for translators
|
||||
- [ ] ICU message syntax used for plurals, genders, selects
|
||||
- [ ] Placeholders defined with types
|
||||
- [ ] No missing keys across locales
|
||||
|
||||
### Code review:
|
||||
- [ ] Localization accessor used consistently throughout the project
|
||||
- [ ] Date, time, number, and currency formatting is locale-aware
|
||||
- [ ] Text directionality (RTL) supported if targeting Arabic, Hebrew, etc.
|
||||
- [ ] No string concatenation for localized text — use parameterized messages
|
||||
|
||||
---
|
||||
|
||||
## 14. Dependency Injection
|
||||
|
||||
### Principles (apply to any DI approach):
|
||||
- [ ] Classes depend on abstractions (interfaces), not concrete implementations at layer boundaries
|
||||
- [ ] Dependencies provided externally via constructor, DI framework, or provider graph — not created internally
|
||||
- [ ] Registration distinguishes lifetime: singleton vs factory vs lazy singleton
|
||||
- [ ] Environment-specific bindings (dev/staging/prod) use configuration, not runtime `if` checks
|
||||
- [ ] No circular dependencies in the DI graph
|
||||
- [ ] Service locator calls (if used) are not scattered throughout business logic
|
||||
|
||||
---
|
||||
|
||||
## 15. Static Analysis
|
||||
|
||||
### Configuration:
|
||||
- [ ] `analysis_options.yaml` present with strict settings enabled
|
||||
- [ ] Strict analyzer settings: `strict-casts: true`, `strict-inference: true`, `strict-raw-types: true`
|
||||
- [ ] A comprehensive lint rule set is included (very_good_analysis, flutter_lints, or custom strict rules)
|
||||
- [ ] All sub-packages in monorepos inherit or share the root analysis options
|
||||
|
||||
### Enforcement:
|
||||
- [ ] No unresolved analyzer warnings in committed code
|
||||
- [ ] Lint suppressions (`// ignore:`) are justified with comments explaining why
|
||||
- [ ] `flutter analyze` runs in CI and failures block merges
|
||||
|
||||
### Key rules to verify regardless of lint package:
|
||||
- [ ] `prefer_const_constructors` — performance in widget trees
|
||||
- [ ] `avoid_print` — use proper logging
|
||||
- [ ] `unawaited_futures` — prevent fire-and-forget async bugs
|
||||
- [ ] `prefer_final_locals` — immutability at variable level
|
||||
- [ ] `always_declare_return_types` — explicit contracts
|
||||
- [ ] `avoid_catches_without_on_clauses` — specific error handling
|
||||
- [ ] `always_use_package_imports` — consistent import style
|
||||
|
||||
---
|
||||
|
||||
## State Management Quick Reference
|
||||
|
||||
The table below maps universal principles to their implementation in popular solutions. Use this to adapt review rules to whichever solution the project uses.
|
||||
|
||||
| Principle | BLoC/Cubit | Riverpod | Provider | GetX | MobX | Signals | Built-in |
|
||||
|-----------|-----------|----------|----------|------|------|---------|----------|
|
||||
| State container | `Bloc`/`Cubit` | `Notifier`/`AsyncNotifier` | `ChangeNotifier` | `GetxController` | `Store` | `signal()` | `StatefulWidget` |
|
||||
| UI consumer | `BlocBuilder` | `ConsumerWidget` | `Consumer` | `Obx`/`GetBuilder` | `Observer` | `Watch` | `setState` |
|
||||
| Selector | `BlocSelector`/`buildWhen` | `ref.watch(p.select(...))` | `Selector` | N/A | computed | `computed()` | N/A |
|
||||
| Side effects | `BlocListener` | `ref.listen` | `Consumer` callback | `ever()`/`once()` | `reaction` | `effect()` | callbacks |
|
||||
| Disposal | auto via `BlocProvider` | `.autoDispose` | auto via `Provider` | `onClose()` | `ReactionDisposer` | manual | `dispose()` |
|
||||
| Testing | `blocTest()` | `ProviderContainer` | `ChangeNotifier` directly | `Get.put` in test | store directly | signal directly | widget test |
|
||||
|
||||
---
|
||||
|
||||
## Sources
|
||||
|
||||
- [Effective Dart: Style](https://dart.dev/effective-dart/style)
|
||||
- [Effective Dart: Usage](https://dart.dev/effective-dart/usage)
|
||||
- [Effective Dart: Design](https://dart.dev/effective-dart/design)
|
||||
- [Flutter Performance Best Practices](https://docs.flutter.dev/perf/best-practices)
|
||||
- [Flutter Testing Overview](https://docs.flutter.dev/testing/overview)
|
||||
- [Flutter Accessibility](https://docs.flutter.dev/ui/accessibility-and-internationalization/accessibility)
|
||||
- [Flutter Internationalization](https://docs.flutter.dev/ui/accessibility-and-internationalization/internationalization)
|
||||
- [Flutter Navigation and Routing](https://docs.flutter.dev/ui/navigation)
|
||||
- [Flutter Error Handling](https://docs.flutter.dev/testing/errors)
|
||||
- [Flutter State Management Options](https://docs.flutter.dev/data-and-backend/state-mgmt/options)
|
||||
100
skills/nuxt4-patterns/SKILL.md
Normal file
@@ -0,0 +1,100 @@
---
name: nuxt4-patterns
description: Nuxt 4 app patterns for hydration safety, performance, route rules, lazy loading, and SSR-safe data fetching with useFetch and useAsyncData.
origin: ECC
---

# Nuxt 4 Patterns

Use when building or debugging Nuxt 4 apps with SSR, hybrid rendering, route rules, or page-level data fetching.

## When to Activate

- Hydration mismatches between server HTML and client state
- Route-level rendering decisions such as prerender, SWR, ISR, or client-only sections
- Performance work around lazy loading, lazy hydration, or payload size
- Page or component data fetching with `useFetch`, `useAsyncData`, or `$fetch`
- Nuxt routing issues tied to route params, middleware, or SSR/client differences

## Hydration Safety

- Keep the first render deterministic. Do not put `Date.now()`, `Math.random()`, browser-only APIs, or storage reads directly into SSR-rendered template state.
- Move browser-only logic behind `onMounted()`, `import.meta.client`, `ClientOnly`, or a `.client.vue` component when the server cannot produce the same markup.
- Use Nuxt's `useRoute()` composable, not the one from `vue-router`.
- Do not use `route.fullPath` to drive SSR-rendered markup. URL fragments are client-only, which can create hydration mismatches.
- Treat `ssr: false` as an escape hatch for truly browser-only areas, not a default fix for mismatches.

## Data Fetching

- Prefer `await useFetch()` for SSR-safe API reads in pages and components. It forwards server-fetched data into the Nuxt payload and avoids a second fetch on hydration.
- Use `useAsyncData()` when the fetcher is not a simple `$fetch()` call, when you need a custom key, or when you are composing multiple async sources.
- Give `useAsyncData()` a stable key for cache reuse and predictable refresh behavior.
- Keep `useAsyncData()` handlers side-effect free. They can run during SSR and hydration.
- Use `$fetch()` for user-triggered writes or client-only actions, not top-level page data that should be hydrated from SSR.
- Use `lazy: true`, `useLazyFetch()`, or `useLazyAsyncData()` for non-critical data that should not block navigation. Handle `status === 'pending'` in the UI.
- Use `server: false` only for data that is not needed for SEO or the first paint.
- Trim payload size with `pick` and prefer shallower payloads when deep reactivity is unnecessary.

```ts
const route = useRoute()

const { data: article, status, error, refresh } = await useAsyncData(
  () => `article:${route.params.slug}`,
  () => $fetch(`/api/articles/${route.params.slug}`),
)

const { data: comments } = await useFetch(`/api/articles/${route.params.slug}/comments`, {
  lazy: true,
  server: false,
})
```

## Route Rules

Prefer `routeRules` in `nuxt.config.ts` for rendering and caching strategy:

```ts
export default defineNuxtConfig({
  routeRules: {
    '/': { prerender: true },
    '/products/**': { swr: 3600 },
    '/blog/**': { isr: true },
    '/admin/**': { ssr: false },
    '/api/**': { cache: { maxAge: 60 * 60 } },
  },
})
```

- `prerender`: static HTML at build time
- `swr`: serve cached content and revalidate in the background
- `isr`: incremental static regeneration on supported platforms
- `ssr: false`: client-rendered route
- `cache` or `redirect`: Nitro-level response behavior

Pick route rules per route group, not globally. Marketing pages, catalogs, dashboards, and APIs usually need different strategies.

## Lazy Loading and Performance

- Nuxt already code-splits pages by route. Keep route boundaries meaningful before micro-optimizing component splits.
- Use the `Lazy` prefix to dynamically import non-critical components.
- Conditionally render lazy components with `v-if` so the chunk is not loaded until the UI actually needs it.
- Use lazy hydration for below-the-fold or non-critical interactive UI.

```vue
<template>
  <LazyRecommendations v-if="showRecommendations" />
  <LazyProductGallery hydrate-on-visible />
</template>
```

- For custom strategies, use `defineLazyHydrationComponent()` with a visibility or idle strategy.
- Nuxt lazy hydration works on single-file components. Passing new props to a lazily hydrated component will trigger hydration immediately.
- Use `NuxtLink` for internal navigation so Nuxt can prefetch route components and generated payloads.

## Review Checklist

- First SSR render and hydrated client render produce the same markup
- Page data uses `useFetch` or `useAsyncData`, not top-level `$fetch`
- Non-critical data is lazy and has explicit loading UI
- Route rules match the page's SEO and freshness requirements
- Heavy interactive islands are lazy-loaded or lazily hydrated
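
A minimal sketch of the "defer browser-only values to after mount" pattern from the first two rules; the component and field names are illustrative, not from the repo:

```vue
<script setup>
// SSR renders the stable fallback; the browser-only value is filled in
// after mount, so server HTML and the client's first render stay identical.
const renderedAt = ref('')

onMounted(() => {
  renderedAt.value = new Date().toLocaleTimeString()
})
</script>

<template>
  <p>Rendered at: {{ renderedAt || '…' }}</p>
</template>
```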
@@ -52,6 +52,16 @@ function writeInstallComponentsManifest(testDir, components) {
   });
 }

+function stripShebang(source) {
+  let s = source;
+  if (s.charCodeAt(0) === 0xFEFF) s = s.slice(1);
+  if (s.startsWith('#!')) {
+    const nl = s.indexOf('\n');
+    s = nl === -1 ? '' : s.slice(nl + 1);
+  }
+  return s;
+}
+
 /**
  * Run modified source via a temp file (avoids Windows node -e shebang issues).
  * The temp file is written inside the repo so require() can resolve node_modules.

@@ -95,8 +105,8 @@ function runValidatorWithDir(validatorName, dirConstant, overridePath) {
   // Read the validator source, replace the directory constant, and run as a wrapper
   let source = fs.readFileSync(validatorPath, 'utf8');

-  // Remove the shebang line (Windows node cannot parse shebangs in eval/inline mode)
-  source = source.replace(/^#!.*\n/, '');
+  // Remove the shebang line so wrappers also work against CRLF-checked-out files on Windows.
+  source = stripShebang(source);

   // Replace the directory constant with our override path
   const dirRegex = new RegExp(`const ${dirConstant} = .*?;`);

@@ -113,7 +123,7 @@ function runValidatorWithDir(validatorName, dirConstant, overridePath) {
 function runValidatorWithDirs(validatorName, overrides) {
   const validatorPath = path.join(validatorsDir, `${validatorName}.js`);
   let source = fs.readFileSync(validatorPath, 'utf8');
-  source = source.replace(/^#!.*\n/, '');
+  source = stripShebang(source);
   for (const [constant, overridePath] of Object.entries(overrides)) {
     const dirRegex = new RegExp(`const ${constant} = .*?;`);
     source = source.replace(dirRegex, `const ${constant} = ${JSON.stringify(overridePath)};`);

@@ -145,7 +155,7 @@ function runValidator(validatorName) {
 function runCatalogValidator(overrides = {}) {
   const validatorPath = path.join(validatorsDir, 'catalog.js');
   let source = fs.readFileSync(validatorPath, 'utf8');
-  source = source.replace(/^#!.*\n/, '');
+  source = stripShebang(source);
   source = `process.argv.push('--text');\n${source}`;

   const resolvedOverrides = {

@@ -202,6 +212,11 @@ function runTests() {
   // ==========================================
   console.log('validate-agents.js:');

+  if (test('strips CRLF shebangs before writing temp wrappers', () => {
+    const source = '#!/usr/bin/env node\r\nconsole.log("ok");';
+    assert.strictEqual(stripShebang(source), 'console.log("ok");');
+  })) passed++; else failed++;
+
   if (test('passes on real project agents', () => {
     const result = runValidator('validate-agents');
     assert.strictEqual(result.code, 0, `Should pass, got stderr: ${result.stderr}`);
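
For reference, the new helper above is self-contained and easy to exercise directly; a standalone sketch (the sample inputs are illustrative):

```javascript
// Copy of the stripShebang helper introduced in the hunk above, runnable on its own.
function stripShebang(source) {
  let s = source;
  if (s.charCodeAt(0) === 0xFEFF) s = s.slice(1); // drop a UTF-8 BOM before the shebang check
  if (s.startsWith('#!')) {
    const nl = s.indexOf('\n');
    s = nl === -1 ? '' : s.slice(nl + 1);
  }
  return s;
}

console.log(stripShebang('#!/usr/bin/env node\nconsole.log("ok");')); // LF shebang stripped
console.log(stripShebang('#!/usr/bin/env node\r\nmain();'));          // CRLF: the \r leaves with the shebang line
console.log(stripShebang('\uFEFF#!/usr/bin/env node\nrun();'));       // BOM and shebang both removed
```

Unlike the old `/^#!.*\n/` regex, this also handles a BOM-prefixed shebang and a shebang with no trailing newline.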
@@ -28,6 +28,13 @@ function makeTempDir() {
   return fs.mkdtempSync(path.join(os.tmpdir(), 'cost-tracker-test-'));
 }

+function withTempHome(homeDir) {
+  return {
+    HOME: homeDir,
+    USERPROFILE: homeDir,
+  };
+}
+
 function runScript(input, envOverrides = {}) {
   const inputStr = typeof input === 'string' ? input : JSON.stringify(input);
   const result = spawnSync('node', [script], {

@@ -64,7 +71,7 @@ function runTests() {
       model: 'claude-sonnet-4-20250514',
       usage: { input_tokens: 1000, output_tokens: 500 },
     };
-    const result = runScript(input, { HOME: tmpHome });
+    const result = runScript(input, withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);

     const metricsFile = path.join(tmpHome, '.claude', 'metrics', 'costs.jsonl');

@@ -84,7 +91,7 @@ function runTests() {
   // 3. Handles empty input gracefully
   (test('handles empty input gracefully', () => {
     const tmpHome = makeTempDir();
-    const result = runScript('', { HOME: tmpHome });
+    const result = runScript('', withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     // stdout should be empty since input was empty
     assert.strictEqual(result.stdout, '', 'Expected empty stdout for empty input');

@@ -96,7 +103,7 @@ function runTests() {
   (test('handles invalid JSON gracefully', () => {
     const tmpHome = makeTempDir();
     const invalidInput = 'not valid json {{{';
-    const result = runScript(invalidInput, { HOME: tmpHome });
+    const result = runScript(invalidInput, withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     // Should still pass through the raw input on stdout
     assert.strictEqual(result.stdout, invalidInput, 'Expected stdout to contain original invalid input');

@@ -109,7 +116,7 @@ function runTests() {
     const tmpHome = makeTempDir();
     const input = { model: 'claude-sonnet-4-20250514' };
     const inputStr = JSON.stringify(input);
-    const result = runScript(input, { HOME: tmpHome });
+    const result = runScript(input, withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     assert.strictEqual(result.stdout, inputStr, 'Expected stdout to match original input');
@@ -8,11 +8,17 @@
  * Run with: node tests/hooks/detect-project-worktree.test.js
  */

+
+// Skip on Windows — these tests invoke bash scripts directly
+if (process.platform === 'win32') {
+  console.log('Skipping bash-dependent worktree tests on Windows\n');
+  process.exit(0);
+}
 const assert = require('assert');
 const path = require('path');
 const fs = require('fs');
 const os = require('os');
-const { execSync } = require('child_process');
+const { execFileSync, execSync } = require('child_process');

 let passed = 0;
 let failed = 0;

@@ -41,6 +47,20 @@ function cleanupDir(dir) {
   }
 }

+function toBashPath(filePath) {
+  if (process.platform !== 'win32') {
+    return filePath;
+  }
+
+  return String(filePath)
+    .replace(/^([A-Za-z]):/, (_, driveLetter) => `/${driveLetter.toLowerCase()}`)
+    .replace(/\\/g, '/');
+}
+
+function runBash(command, options = {}) {
+  return execFileSync('bash', ['-lc', command], options).toString().trim();
+}
+
 const repoRoot = path.resolve(__dirname, '..', '..');
 const detectProjectPath = path.join(
   repoRoot,

@@ -98,7 +118,7 @@ test('[ -d ] returns true for .git directory', () => {
   const dir = path.join(behaviorDir, 'test-d-dir');
   fs.mkdirSync(dir, { recursive: true });
   fs.mkdirSync(path.join(dir, '.git'));
-  const result = execSync(`bash -c '[ -d "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -d "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'yes');
 });

@@ -106,7 +126,7 @@ test('[ -d ] returns false for .git file', () => {
   const dir = path.join(behaviorDir, 'test-d-file');
   fs.mkdirSync(dir, { recursive: true });
   fs.writeFileSync(path.join(dir, '.git'), 'gitdir: /some/path\n');
-  const result = execSync(`bash -c '[ -d "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -d "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'no');
 });

@@ -114,7 +134,7 @@ test('[ -e ] returns true for .git directory', () => {
   const dir = path.join(behaviorDir, 'test-e-dir');
   fs.mkdirSync(dir, { recursive: true });
   fs.mkdirSync(path.join(dir, '.git'));
-  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -e "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'yes');
 });

@@ -122,14 +142,14 @@ test('[ -e ] returns true for .git file', () => {
   const dir = path.join(behaviorDir, 'test-e-file');
   fs.mkdirSync(dir, { recursive: true });
   fs.writeFileSync(path.join(dir, '.git'), 'gitdir: /some/path\n');
-  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -e "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'yes');
 });

 test('[ -e ] returns false when .git does not exist', () => {
   const dir = path.join(behaviorDir, 'test-e-none');
   fs.mkdirSync(dir, { recursive: true });
-  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -e "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'no');
 });

@@ -188,20 +208,21 @@ test('detect-project.sh sets PROJECT_NAME and non-global PROJECT_ID for worktree

   // Source detect-project.sh from the worktree directory and capture results
   const script = `
-    export CLAUDE_PROJECT_DIR="${worktreeDir}"
-    export HOME="${testDir}"
-    source "${detectProjectPath}"
+    export CLAUDE_PROJECT_DIR="${toBashPath(worktreeDir)}"
+    export HOME="${toBashPath(testDir)}"
+    source "${toBashPath(detectProjectPath)}"
     echo "PROJECT_NAME=\${PROJECT_NAME}"
     echo "PROJECT_ID=\${PROJECT_ID}"
   `;

-  const result = execSync(`bash -c '${script.replace(/'/g, "'\\''")}'`, {
+  const result = execFileSync('bash', ['-lc', script], {
     cwd: worktreeDir,
     timeout: 10000,
     env: {
       ...process.env,
-      HOME: testDir,
-      CLAUDE_PROJECT_DIR: worktreeDir
+      HOME: toBashPath(testDir),
+      USERPROFILE: testDir,
+      CLAUDE_PROJECT_DIR: toBashPath(worktreeDir)
     }
   }).toString();
@@ -8,7 +8,9 @@ const assert = require('assert');
|
||||
const path = require('path');
|
||||
const fs = require('fs');
|
||||
const os = require('os');
|
||||
const { spawn, spawnSync } = require('child_process');
|
||||
const { execFileSync, spawn, spawnSync } = require('child_process');
|
||||
|
||||
const SKIP_BASH = process.platform === 'win32';
|
||||
|
||||
function toBashPath(filePath) {
|
||||
if (process.platform !== 'win32') {
|
||||
@@ -16,10 +18,66 @@ function toBashPath(filePath) {
|
||||
}
|
||||
|
||||
return String(filePath)
|
||||
.replace(/^([A-Za-z]):/, (_, driveLetter) => `/mnt/${driveLetter.toLowerCase()}`)
|
||||
.replace(/^([A-Za-z]):/, (_, driveLetter) => `/${driveLetter.toLowerCase()}`)
|
||||
.replace(/\\/g, '/');
|
||||
}
|
||||
|
||||
function fromBashPath(filePath) {
|
||||
if (process.platform !== 'win32') {
|
||||
return filePath;
|
||||
}
|
||||
|
||||
const rawPath = String(filePath || '');
|
||||
if (!rawPath) {
|
||||
return rawPath;
|
||||
}
|
||||
|
||||
try {
|
||||
return execFileSync(
|
||||
'bash',
|
||||
['-lc', 'cygpath -w -- "$1"', 'bash', rawPath],
|
||||
{ stdio: ['ignore', 'pipe', 'ignore'] }
|
||||
)
|
||||
.toString()
|
||||
.trim();
|
||||
} catch {
|
||||
// Fall back to common Git Bash path shapes when cygpath is unavailable.
|
||||
}
|
||||
|
||||
const match = rawPath.match(/^\/(?:cygdrive\/)?([A-Za-z])\/(.*)$/)
|
||||
|| rawPath.match(/^\/\/([A-Za-z])\/(.*)$/);
|
||||
if (match) {
|
||||
return `${match[1].toUpperCase()}:\\${match[2].replace(/\//g, '\\')}`;
|
||||
}
|
||||
|
||||
if (/^[A-Za-z]:\//.test(rawPath)) {
|
||||
return rawPath.replace(/\//g, '\\');
|
||||
}
|
||||
|
||||
return rawPath;
|
||||
}
|
||||
|
||||
function normalizeComparablePath(filePath) {
|
||||
const nativePath = fromBashPath(filePath);
|
||||
if (!nativePath) {
|
||||
return nativePath;
|
||||
}
|
||||
|
||||
let comparablePath = nativePath;
|
||||
try {
|
||||
comparablePath = fs.realpathSync.native ? fs.realpathSync.native(nativePath) : fs.realpathSync(nativePath);
|
||||
} catch {
|
||||
comparablePath = path.resolve(nativePath);
|
||||
}
|
||||
|
||||
comparablePath = comparablePath.replace(/[\\/]+/g, '/');
|
||||
if (comparablePath.length > 1 && !/^[A-Za-z]:\/$/.test(comparablePath)) {
|
||||
comparablePath = comparablePath.replace(/\/+$/, '');
|
||||
}
|
||||
|
||||
return process.platform === 'win32' ? comparablePath.toLowerCase() : comparablePath;
|
||||
}
|
||||
|
||||
function sleepMs(ms) {
|
||||
Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, ms);
|
||||
}
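
The direction of the drive-letter change in this hunk is the point: the mapping now targets Git Bash paths (`/c/...`) rather than WSL paths (`/mnt/c/...`). A standalone sketch of just that mapping, with the platform check removed for illustration:

```javascript
// Git Bash style mapping, as in the patched toBashPath: C:\ -> /c/ (WSL would use /mnt/c/).
function toGitBashPath(filePath) {
  return String(filePath)
    .replace(/^([A-Za-z]):/, (_, drive) => `/${drive.toLowerCase()}`) // drive letter first
    .replace(/\\/g, '/');                                             // then separators
}

console.log(toGitBashPath('C:\\Users\\dev\\repo')); // -> /c/Users/dev/repo
console.log(toGitBashPath('D:\\tmp\\a b\\x.txt'));  // -> /d/tmp/a b/x.txt
```

The companion `fromBashPath` above reverses this, preferring `cygpath -w` when available and falling back to pattern matching.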
|
||||
@@ -93,8 +151,8 @@ function runShellScript(scriptPath, args = [], input = '', env = {}, cwd = proce
|
||||
}
|
||||
proc.stdin.end();
|
||||
|
||||
proc.stdout.on('data', data => stdout += data);
|
||||
proc.stderr.on('data', data => stderr += data);
|
||||
proc.stdout.on('data', data => (stdout += data));
|
||||
proc.stderr.on('data', data => (stderr += data));
|
||||
proc.on('close', code => resolve({ code, stdout, stderr }));
|
||||
proc.on('error', reject);
|
||||
});
|
||||
@@ -180,9 +238,7 @@ function assertNoProjectDetectionSideEffects(homeDir, testName) {
|
||||
|
||||
assert.ok(!fs.existsSync(registryPath), `${testName} should not create projects.json`);
|
||||
|
||||
const projectEntries = fs.existsSync(projectsDir)
|
||||
? fs.readdirSync(projectsDir).filter(entry => fs.statSync(path.join(projectsDir, entry)).isDirectory())
|
||||
: [];
|
||||
const projectEntries = fs.existsSync(projectsDir) ? fs.readdirSync(projectsDir).filter(entry => fs.statSync(path.join(projectsDir, entry)).isDirectory()) : [];
|
||||
assert.strictEqual(projectEntries.length, 0, `${testName} should not create project directories`);
|
||||
}
|
||||
|
||||
@@ -204,11 +260,17 @@ async function assertObserveSkipBeforeProjectDetection(testCase) {
|
||||
...(testCase.payload || {})
|
||||
});
|
||||
|
||||
const result = await runShellScript(observePath, ['post'], payload, {
|
||||
HOME: homeDir,
|
||||
USERPROFILE: homeDir,
|
||||
...testCase.env
|
||||
}, projectDir);
|
||||
const result = await runShellScript(
|
||||
observePath,
|
||||
['post'],
|
||||
payload,
|
||||
{
|
||||
HOME: homeDir,
|
||||
USERPROFILE: homeDir,
|
||||
...testCase.env
|
||||
},
|
||||
projectDir
|
||||
);
|
||||
|
||||
assert.strictEqual(result.code, 0, `${testCase.name} should exit successfully, stderr: ${result.stderr}`);
|
||||
assertNoProjectDetectionSideEffects(homeDir, testCase.name);
|
||||
@@ -228,13 +290,13 @@ function runPatchedRunAll(tempRoot) {
|
||||
const result = spawnSync('node', [wrapperPath], {
|
||||
encoding: 'utf8',
|
||||
stdio: ['pipe', 'pipe', 'pipe'],
|
||||
timeout: 15000,
|
||||
timeout: 15000
|
||||
});
|
||||
|
||||
return {
|
||||
code: result.status ?? 1,
|
||||
stdout: result.stdout || '',
|
||||
stderr: result.stderr || '',
|
||||
stderr: result.stderr || ''
|
||||
};
|
||||
}
|
||||
|
||||
@@ -353,6 +415,36 @@ async function runTests() {
    passed++;
  else failed++;

  if (
    await asyncTest('strips ANSI escape codes from injected session content', async () => {
      const isoHome = path.join(os.tmpdir(), `ecc-ansi-start-${Date.now()}`);
      const sessionsDir = path.join(isoHome, '.claude', 'sessions');
      fs.mkdirSync(sessionsDir, { recursive: true });
      fs.mkdirSync(path.join(isoHome, '.claude', 'skills', 'learned'), { recursive: true });

      const sessionFile = path.join(sessionsDir, '2026-02-11-winansi00-session.tmp');
      fs.writeFileSync(
        sessionFile,
        '\x1b[H\x1b[2J\x1b[3J# Real Session\n\nI worked on \x1b[1;36mWindows terminal handling\x1b[0m.\x1b[K\n'
      );

      try {
        const result = await runScript(path.join(scriptsDir, 'session-start.js'), '', {
          HOME: isoHome,
          USERPROFILE: isoHome
        });
        assert.strictEqual(result.code, 0);
        assert.ok(result.stdout.includes('Previous session summary'), 'Should inject real session content');
        assert.ok(result.stdout.includes('Windows terminal handling'), 'Should preserve sanitized session text');
        assert.ok(!result.stdout.includes('\x1b['), 'Should not emit ANSI escape codes');
      } finally {
        fs.rmSync(isoHome, { recursive: true, force: true });
      }
    })
  )
    passed++;
  else failed++;

  if (
    await asyncTest('reports learned skills count', async () => {
      const isoHome = path.join(os.tmpdir(), `ecc-skills-start-${Date.now()}`);
@@ -388,11 +480,7 @@ async function runTests() {
        tool_name: 'Write',
        tool_input: { file_path: 'src/index.ts', content: 'console.log("ok");' }
      });
      const result = await runScript(
        path.join(scriptsDir, 'insaits-security-wrapper.js'),
        stdinData,
        { ECC_ENABLE_INSAITS: '' }
      );
      const result = await runScript(path.join(scriptsDir, 'insaits-security-wrapper.js'), stdinData, { ECC_ENABLE_INSAITS: '' });
      assert.strictEqual(result.code, 0, `Exit code should be 0, got ${result.code}`);
      assert.strictEqual(result.stdout, stdinData, 'Should pass stdin through unchanged');
      assert.strictEqual(result.stderr, '', 'Should stay silent when integration is disabled');
@@ -1782,10 +1870,14 @@ async function runTests() {
      for (const hook of entry.hooks) {
        if (hook.type === 'command') {
          const isNode = hook.command.startsWith('node');
          const isNpx = hook.command.startsWith('npx ');
          const isSkillScript = hook.command.includes('/skills/') && (/^(bash|sh)\s/.test(hook.command) || hook.command.startsWith('${CLAUDE_PLUGIN_ROOT}/skills/'));
          const isHookShellWrapper = /^(bash|sh)\s+["']?\$\{CLAUDE_PLUGIN_ROOT\}\/scripts\/hooks\/run-with-flags-shell\.sh/.test(hook.command);
          const isSessionStartFallback = hook.command.startsWith('bash -lc') && hook.command.includes('run-with-flags.js');
          assert.ok(isNode || isSkillScript || isHookShellWrapper || isSessionStartFallback, `Hook command should use node or approved shell wrapper: ${hook.command.substring(0, 100)}...`);
          assert.ok(
            isNode || isNpx || isSkillScript || isHookShellWrapper || isSessionStartFallback,
            `Hook command should use node or approved shell wrapper: ${hook.command.substring(0, 100)}...`
          );
        }
      }
    }
@@ -1834,10 +1926,7 @@ async function runTests() {
      assert.ok(insaitsHook, 'Should define an InsAIts PreToolUse hook');
      assert.strictEqual(insaitsHook.matcher, 'Bash|Write|Edit|MultiEdit', 'InsAIts hook should avoid matching every tool');
      assert.ok(insaitsHook.description.includes('ECC_ENABLE_INSAITS=1'), 'InsAIts hook should document explicit opt-in');
      assert.ok(
        insaitsHook.hooks[0].command.includes('insaits-security-wrapper.js'),
        'InsAIts hook should execute through the JS wrapper'
      );
      assert.ok(insaitsHook.hooks[0].command.includes('insaits-security-wrapper.js'), 'InsAIts hook should execute through the JS wrapper');
    })
  )
    passed++;
@@ -2261,10 +2350,7 @@ async function runTests() {

  if (
    test('observer-loop uses a configurable max-turn budget with safe default', () => {
      const observerLoopSource = fs.readFileSync(
        path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'agents', 'observer-loop.sh'),
        'utf8'
      );
      const observerLoopSource = fs.readFileSync(path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'agents', 'observer-loop.sh'), 'utf8');

      assert.ok(observerLoopSource.includes('ECC_OBSERVER_MAX_TURNS'), 'observer-loop should allow max-turn overrides');
      assert.ok(observerLoopSource.includes('max_turns="${ECC_OBSERVER_MAX_TURNS:-10}"'), 'observer-loop should default to 10 turns');
@@ -2276,7 +2362,10 @@ async function runTests() {
    passed++;
  else failed++;

  if (
  if (SKIP_BASH) {
    console.log(' ⊘ detect-project exports the resolved Python command (skipped on Windows)');
    passed++;
  } else if (
    await asyncTest('detect-project exports the resolved Python command for downstream scripts', async () => {
      const detectProjectPath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'scripts', 'detect-project.sh');
      const shellCommand = [`source "${toBashPath(detectProjectPath)}" >/dev/null 2>&1`, 'printf "%s\\n" "${CLV2_PYTHON_CMD:-}"'].join('; ');
@@ -2304,7 +2393,10 @@ async function runTests() {
    passed++;
  else failed++;

  if (
  if (SKIP_BASH) {
    console.log(' ⊘ detect-project writes project metadata (skipped on Windows)');
    passed++;
  } else if (
    await asyncTest('detect-project writes project metadata to the registry and project directory', async () => {
      const testRoot = createTestDir();
      const homeDir = path.join(testRoot, 'home');
@@ -2317,15 +2409,15 @@ async function runTests() {
      spawnSync('git', ['init'], { cwd: repoDir, stdio: 'ignore' });
      spawnSync('git', ['remote', 'add', 'origin', 'https://github.com/example/ecc-test.git'], { cwd: repoDir, stdio: 'ignore' });

      const shellCommand = [
        `cd "${toBashPath(repoDir)}"`,
        `source "${toBashPath(detectProjectPath)}" >/dev/null 2>&1`,
        'printf "%s\\n" "$PROJECT_ID"',
        'printf "%s\\n" "$PROJECT_DIR"'
      ].join('; ');
      const shellCommand = [`cd "${toBashPath(repoDir)}"`, `source "${toBashPath(detectProjectPath)}" >/dev/null 2>&1`, 'printf "%s\\n" "$PROJECT_ID"', 'printf "%s\\n" "$PROJECT_DIR"'].join('; ');

      const proc = spawn('bash', ['-lc', shellCommand], {
        env: { ...process.env, HOME: homeDir, USERPROFILE: homeDir },
        env: {
          ...process.env,
          HOME: homeDir,
          USERPROFILE: homeDir,
          CLAUDE_PROJECT_DIR: ''
        },
        stdio: ['ignore', 'pipe', 'pipe']
      });

@@ -2343,22 +2435,43 @@ async function runTests() {

      const [projectId, projectDir] = stdout.trim().split(/\r?\n/);
      const registryPath = path.join(homeDir, '.claude', 'homunculus', 'projects.json');
      const projectMetadataPath = path.join(projectDir, 'project.json');
      const expectedProjectDir = path.join(
        homeDir,
        '.claude',
        'homunculus',
        'projects',
        projectId
      );
      const projectMetadataPath = path.join(expectedProjectDir, 'project.json');

      assert.ok(projectId, 'detect-project should emit a project id');
      assert.ok(projectDir, 'detect-project should emit a project directory');
      assert.ok(fs.existsSync(registryPath), 'projects.json should be created');
      assert.ok(fs.existsSync(projectMetadataPath), 'project.json should be written in the project directory');

      const registry = JSON.parse(fs.readFileSync(registryPath, 'utf8'));
      const metadata = JSON.parse(fs.readFileSync(projectMetadataPath, 'utf8'));
      const comparableMetadataRoot = normalizeComparablePath(metadata.root);
      const comparableRepoDir = normalizeComparablePath(repoDir);
      const comparableProjectDir = normalizeComparablePath(projectDir);
      const comparableExpectedProjectDir = normalizeComparablePath(expectedProjectDir);

      assert.ok(registry[projectId], 'registry should contain the detected project');
      assert.strictEqual(metadata.id, projectId, 'project.json should include the detected id');
      assert.strictEqual(metadata.name, path.basename(repoDir), 'project.json should include the repo name');
      assert.strictEqual(fs.realpathSync(metadata.root), fs.realpathSync(repoDir), 'project.json should include the repo root');
      assert.strictEqual(
        comparableMetadataRoot,
        comparableRepoDir,
        `project.json should include the repo root (expected ${comparableRepoDir}, got ${comparableMetadataRoot})`
      );
      assert.strictEqual(metadata.remote, 'https://github.com/example/ecc-test.git', 'project.json should include the sanitized remote');
      assert.ok(metadata.created_at, 'project.json should include created_at');
      assert.ok(metadata.last_seen, 'project.json should include last_seen');
      assert.strictEqual(
        comparableProjectDir,
        comparableExpectedProjectDir,
        `PROJECT_DIR should point at the project storage directory (expected ${comparableExpectedProjectDir}, got ${comparableProjectDir})`
      );
    } finally {
      cleanupTestDir(testRoot);
    }
@@ -2367,88 +2480,125 @@ async function runTests() {
    passed++;
  else failed++;

  if (await asyncTest('observe.sh falls back to legacy output fields when tool_response is null', async () => {
    const homeDir = createTestDir();
    const projectDir = createTestDir();
    const observePath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'hooks', 'observe.sh');
    const payload = JSON.stringify({
      tool_name: 'Bash',
      tool_input: { command: 'echo hello' },
      tool_response: null,
      tool_output: 'legacy output',
      session_id: 'session-123',
      cwd: projectDir
    });
  if (SKIP_BASH) {
    console.log(' ⊘ observe.sh falls back to legacy output fields (skipped on Windows)');
    passed++;
  } else if (
    await asyncTest('observe.sh falls back to legacy output fields when tool_response is null', async () => {
      const homeDir = createTestDir();
      const projectDir = createTestDir();
      const observePath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'hooks', 'observe.sh');
      const payload = JSON.stringify({
        tool_name: 'Bash',
        tool_input: { command: 'echo hello' },
        tool_response: null,
        tool_output: 'legacy output',
        session_id: 'session-123',
        cwd: projectDir
      });

    try {
      const result = await runShellScript(observePath, ['post'], payload, {
        HOME: homeDir,
        USERPROFILE: homeDir,
        CLAUDE_PROJECT_DIR: projectDir
      }, projectDir);
      try {
        const result = await runShellScript(
          observePath,
          ['post'],
          payload,
          {
            HOME: homeDir,
            USERPROFILE: homeDir,
            CLAUDE_PROJECT_DIR: projectDir
          },
          projectDir
        );

      assert.strictEqual(result.code, 0, `observe.sh should exit successfully, stderr: ${result.stderr}`);
        assert.strictEqual(result.code, 0, `observe.sh should exit successfully, stderr: ${result.stderr}`);

      const projectsDir = path.join(homeDir, '.claude', 'homunculus', 'projects');
      const projectIds = fs.readdirSync(projectsDir);
      assert.strictEqual(projectIds.length, 1, 'observe.sh should create one project-scoped observation directory');
        const projectsDir = path.join(homeDir, '.claude', 'homunculus', 'projects');
        const projectIds = fs.readdirSync(projectsDir);
        assert.strictEqual(projectIds.length, 1, 'observe.sh should create one project-scoped observation directory');

      const observationsPath = path.join(projectsDir, projectIds[0], 'observations.jsonl');
      const observations = fs.readFileSync(observationsPath, 'utf8').trim().split('\n').filter(Boolean);
      assert.ok(observations.length > 0, 'observe.sh should append at least one observation');
        const observationsPath = path.join(projectsDir, projectIds[0], 'observations.jsonl');
        const observations = fs.readFileSync(observationsPath, 'utf8').trim().split('\n').filter(Boolean);
        assert.ok(observations.length > 0, 'observe.sh should append at least one observation');

      const observation = JSON.parse(observations[0]);
      assert.strictEqual(observation.output, 'legacy output', 'observe.sh should fall back to legacy tool_output when tool_response is null');
    } finally {
      cleanupTestDir(homeDir);
      cleanupTestDir(projectDir);
    }
  })) passed++; else failed++;
        const observation = JSON.parse(observations[0]);
        assert.strictEqual(observation.output, 'legacy output', 'observe.sh should fall back to legacy tool_output when tool_response is null');
      } finally {
        cleanupTestDir(homeDir);
        cleanupTestDir(projectDir);
      }
    })
  )
    passed++;
  else failed++;

  if (await asyncTest('observe.sh skips non-cli entrypoints before project detection side effects', async () => {
    await assertObserveSkipBeforeProjectDetection({
      name: 'non-cli entrypoint',
      env: { CLAUDE_CODE_ENTRYPOINT: 'mcp' }
    });
  })) passed++; else failed++;
  if (SKIP_BASH) {
    console.log(' \u2298 observe.sh skips non-cli entrypoints (skipped on Windows)');
    passed++;
  } else if (
    await asyncTest('observe.sh skips non-cli entrypoints before project detection side effects', async () => {
      await assertObserveSkipBeforeProjectDetection({
        name: 'non-cli entrypoint',
        env: { CLAUDE_CODE_ENTRYPOINT: 'mcp' }
      });
    })
  )
    passed++;
  else failed++;

  if (await asyncTest('observe.sh skips minimal hook profile before project detection side effects', async () => {
    await assertObserveSkipBeforeProjectDetection({
      name: 'minimal hook profile',
      env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_HOOK_PROFILE: 'minimal' }
    });
  })) passed++; else failed++;
  if (SKIP_BASH) { console.log(" ⊘ observe.sh skips minimal hook profile (skipped on Windows)"); passed++; } else if (
    await asyncTest('observe.sh skips minimal hook profile before project detection side effects', async () => {
      await assertObserveSkipBeforeProjectDetection({
        name: 'minimal hook profile',
        env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_HOOK_PROFILE: 'minimal' }
      });
    })
  )
    passed++;
  else failed++;

  if (await asyncTest('observe.sh skips cooperative skip env before project detection side effects', async () => {
    await assertObserveSkipBeforeProjectDetection({
      name: 'cooperative skip env',
      env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_SKIP_OBSERVE: '1' }
    });
  })) passed++; else failed++;
  if (SKIP_BASH) { console.log(" ⊘ observe.sh skips cooperative skip env (skipped on Windows)"); passed++; } else if (
    await asyncTest('observe.sh skips cooperative skip env before project detection side effects', async () => {
      await assertObserveSkipBeforeProjectDetection({
        name: 'cooperative skip env',
        env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_SKIP_OBSERVE: '1' }
      });
    })
  )
    passed++;
  else failed++;

  if (await asyncTest('observe.sh skips subagent payloads before project detection side effects', async () => {
    await assertObserveSkipBeforeProjectDetection({
      name: 'subagent payload',
      env: { CLAUDE_CODE_ENTRYPOINT: 'cli' },
      payload: { agent_id: 'agent-123' }
    });
  })) passed++; else failed++;
  if (SKIP_BASH) { console.log(" ⊘ observe.sh skips subagent payloads (skipped on Windows)"); passed++; } else if (
    await asyncTest('observe.sh skips subagent payloads before project detection side effects', async () => {
      await assertObserveSkipBeforeProjectDetection({
        name: 'subagent payload',
        env: { CLAUDE_CODE_ENTRYPOINT: 'cli' },
        payload: { agent_id: 'agent-123' }
      });
    })
  )
    passed++;
  else failed++;

  if (await asyncTest('observe.sh skips configured observer-session paths before project detection side effects', async () => {
    await assertObserveSkipBeforeProjectDetection({
      name: 'cwd skip path',
      env: {
        CLAUDE_CODE_ENTRYPOINT: 'cli',
        ECC_OBSERVE_SKIP_PATHS: ' observer-sessions , .claude-mem '
      },
      cwdSuffix: path.join('observer-sessions', 'worker')
    });
  })) passed++; else failed++;
  if (SKIP_BASH) { console.log(" ⊘ observe.sh skips configured observer-session paths (skipped on Windows)"); passed++; } else if (
    await asyncTest('observe.sh skips configured observer-session paths before project detection side effects', async () => {
      await assertObserveSkipBeforeProjectDetection({
        name: 'cwd skip path',
        env: {
          CLAUDE_CODE_ENTRYPOINT: 'cli',
          ECC_OBSERVE_SKIP_PATHS: ' observer-sessions , .claude-mem '
        },
        cwdSuffix: path.join('observer-sessions', 'worker')
      });
    })
  )
    passed++;
  else failed++;

  if (await asyncTest('matches .tsx extension for type checking', async () => {
    const testDir = createTestDir();
    const testFile = path.join(testDir, 'component.tsx');
    fs.writeFileSync(testFile, 'const x: number = 1;');
  if (
    await asyncTest('matches .tsx extension for type checking', async () => {
      const testDir = createTestDir();
      const testFile = path.join(testDir, 'component.tsx');
      fs.writeFileSync(testFile, 'const x: number = 1;');

      const stdinJson = JSON.stringify({ tool_input: { file_path: testFile } });
      const result = await runScript(path.join(scriptsDir, 'post-edit-typecheck.js'), stdinJson);
@@ -2658,10 +2808,7 @@ async function runTests() {
      const branch = spawnSync('git', ['rev-parse', '--abbrev-ref', 'HEAD'], { encoding: 'utf8' }).stdout.trim();
      const project = path.basename(spawnSync('git', ['rev-parse', '--show-toplevel'], { encoding: 'utf8' }).stdout.trim());

      fs.writeFileSync(
        sessionFile,
        `# Session: ${today}\n**Date:** ${today}\n**Started:** 09:00\n**Last Updated:** 09:00\n\n---\n\n## Current State\n\n[Session context goes here]\n`
      );
      fs.writeFileSync(sessionFile, `# Session: ${today}\n**Date:** ${today}\n**Started:** 09:00\n**Last Updated:** 09:00\n\n---\n\n## Current State\n\n[Session context goes here]\n`);

      const result = await runScript(path.join(scriptsDir, 'session-end.js'), '', {
        HOME: testDir,

266
tests/hooks/mcp-health-check.test.js
Normal file
@@ -0,0 +1,266 @@
/**
 * Tests for scripts/hooks/mcp-health-check.js
 *
 * Run with: node tests/hooks/mcp-health-check.test.js
 */

const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'mcp-health-check.js');

function test(name, fn) {
  try {
    fn();
    console.log(` ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(` ✗ ${name}`);
    console.log(` Error: ${err.message}`);
    return false;
  }
}

async function asyncTest(name, fn) {
  try {
    await fn();
    console.log(` ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(` ✗ ${name}`);
    console.log(` Error: ${err.message}`);
    return false;
  }
}

function createTempDir() {
  return fs.mkdtempSync(path.join(os.tmpdir(), 'ecc-mcp-health-'));
}

function cleanupTempDir(dirPath) {
  fs.rmSync(dirPath, { recursive: true, force: true });
}

function writeConfig(configPath, body) {
  fs.writeFileSync(configPath, JSON.stringify(body, null, 2));
}

function readState(statePath) {
  return JSON.parse(fs.readFileSync(statePath, 'utf8'));
}

function createCommandConfig(scriptPath) {
  return {
    command: process.execPath,
    args: [scriptPath]
  };
}

function runHook(input, env = {}) {
  const result = spawnSync('node', [script], {
    input: JSON.stringify(input),
    encoding: 'utf8',
    env: {
      ...process.env,
      ECC_HOOK_PROFILE: 'standard',
      ...env
    },
    timeout: 15000,
    stdio: ['pipe', 'pipe', 'pipe']
  });

  return {
    code: result.status || 0,
    stdout: result.stdout || '',
    stderr: result.stderr || ''
  };
}

async function runTests() {
  console.log('\n=== Testing mcp-health-check.js ===\n');

  let passed = 0;
  let failed = 0;

  if (test('passes through non-MCP tools untouched', () => {
    const result = runHook(
      { tool_name: 'Read', tool_input: { file_path: 'README.md' } },
      { CLAUDE_HOOK_EVENT_NAME: 'PreToolUse' }
    );

    assert.strictEqual(result.code, 0, 'Expected non-MCP tool to pass through');
    assert.strictEqual(result.stderr, '', 'Expected no stderr for non-MCP tool');
  })) passed++; else failed++;

  if (await asyncTest('marks healthy command MCP servers and allows the tool call', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const serverScript = path.join(tempDir, 'healthy-server.js');

    try {
      fs.writeFileSync(serverScript, "setInterval(() => {}, 1000);\n");
      writeConfig(configPath, {
        mcpServers: {
          mock: createCommandConfig(serverScript)
        }
      });

      const input = { tool_name: 'mcp__mock__list_items', tool_input: {} };
      const result = runHook(input, {
        CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
        ECC_MCP_CONFIG_PATH: configPath,
        ECC_MCP_HEALTH_STATE_PATH: statePath,
        ECC_MCP_HEALTH_TIMEOUT_MS: '100'
      });

      assert.strictEqual(result.code, 0, `Expected healthy server to pass, got ${result.code}`);
      assert.strictEqual(result.stdout.trim(), JSON.stringify(input), 'Expected original JSON on stdout');

      const state = readState(statePath);
      assert.strictEqual(state.servers.mock.status, 'healthy', 'Expected mock server to be marked healthy');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;

  if (await asyncTest('blocks unhealthy command MCP servers and records backoff state', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const serverScript = path.join(tempDir, 'unhealthy-server.js');

    try {
      fs.writeFileSync(serverScript, "process.exit(1);\n");
      writeConfig(configPath, {
        mcpServers: {
          flaky: createCommandConfig(serverScript)
        }
      });

      const result = runHook(
        { tool_name: 'mcp__flaky__search', tool_input: {} },
        {
          CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
          ECC_MCP_CONFIG_PATH: configPath,
          ECC_MCP_HEALTH_STATE_PATH: statePath,
          ECC_MCP_HEALTH_TIMEOUT_MS: '100'
        }
      );

      assert.strictEqual(result.code, 2, 'Expected unhealthy server to block the MCP tool');
      assert.ok(result.stderr.includes('Blocking search'), `Expected blocking message, got: ${result.stderr}`);

      const state = readState(statePath);
      assert.strictEqual(state.servers.flaky.status, 'unhealthy', 'Expected flaky server to be marked unhealthy');
      assert.ok(state.servers.flaky.nextRetryAt > state.servers.flaky.checkedAt, 'Expected retry backoff to be recorded');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;

  if (await asyncTest('fail-open mode warns but does not block unhealthy MCP servers', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const serverScript = path.join(tempDir, 'relaxed-server.js');

    try {
      fs.writeFileSync(serverScript, "process.exit(1);\n");
      writeConfig(configPath, {
        mcpServers: {
          relaxed: createCommandConfig(serverScript)
        }
      });

      const result = runHook(
        { tool_name: 'mcp__relaxed__list', tool_input: {} },
        {
          CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
          ECC_MCP_CONFIG_PATH: configPath,
          ECC_MCP_HEALTH_STATE_PATH: statePath,
          ECC_MCP_HEALTH_FAIL_OPEN: '1',
          ECC_MCP_HEALTH_TIMEOUT_MS: '100'
        }
      );

      assert.strictEqual(result.code, 0, 'Expected fail-open mode to allow execution');
      assert.ok(result.stderr.includes('Blocking list') || result.stderr.includes('fall back'), 'Expected warning output in fail-open mode');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;

  if (await asyncTest('post-failure reconnect command restores server health when a reprobe succeeds', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const switchFile = path.join(tempDir, 'server-mode.txt');
    const reconnectFile = path.join(tempDir, 'reconnected.txt');
    const probeScript = path.join(tempDir, 'probe-server.js');

    fs.writeFileSync(switchFile, 'down');
    fs.writeFileSync(
      probeScript,
      [
        "const fs = require('fs');",
        `const mode = fs.readFileSync(${JSON.stringify(switchFile)}, 'utf8').trim();`,
        "if (mode === 'up') { setInterval(() => {}, 1000); } else { console.error('401 Unauthorized'); process.exit(1); }"
      ].join('\n')
    );

    const reconnectScript = path.join(tempDir, 'reconnect.js');
    fs.writeFileSync(
      reconnectScript,
      [
        "const fs = require('fs');",
        `fs.writeFileSync(${JSON.stringify(switchFile)}, 'up');`,
        `fs.writeFileSync(${JSON.stringify(reconnectFile)}, 'done');`
      ].join('\n')
    );

    try {
      writeConfig(configPath, {
        mcpServers: {
          authy: createCommandConfig(probeScript)
        }
      });

      const result = runHook(
        {
          tool_name: 'mcp__authy__messages',
          tool_input: {},
          error: '401 Unauthorized'
        },
        {
          CLAUDE_HOOK_EVENT_NAME: 'PostToolUseFailure',
          ECC_MCP_CONFIG_PATH: configPath,
          ECC_MCP_HEALTH_STATE_PATH: statePath,
          ECC_MCP_RECONNECT_COMMAND: `node ${JSON.stringify(reconnectScript)}`,
          ECC_MCP_HEALTH_TIMEOUT_MS: '100'
        }
      );

      assert.strictEqual(result.code, 0, 'Expected failure hook to remain non-blocking');
      assert.ok(result.stderr.includes('reported 401'), `Expected reconnect log, got: ${result.stderr}`);
      assert.ok(result.stderr.includes('connection restored'), `Expected restored log, got: ${result.stderr}`);
      assert.ok(fs.existsSync(reconnectFile), 'Expected reconnect command to run');

      const state = readState(statePath);
      assert.strictEqual(state.servers.authy.status, 'healthy', 'Expected authy server to be restored after reconnect');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;

  console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
  process.exit(failed > 0 ? 1 : 0);
}

runTests().catch(error => {
  console.error(error);
  process.exit(1);
});
@@ -216,6 +216,10 @@ test('counter file handles missing/corrupt file gracefully', () => {
console.log('\n--- observe.sh end-to-end throttle (shell execution) ---');

test('observe.sh creates counter file and increments on each call', () => {
  if (process.platform === 'win32') {
    return;
  }

  // This test runs observe.sh with minimal input to verify counter behavior.
  // We need python3, bash, and a valid project dir to test the full flow.
  // We use ECC_SKIP_OBSERVE=0 and minimal JSON so observe.sh processes but

@@ -171,6 +171,16 @@ function cleanupTestDir(testDir) {
  fs.rmSync(testDir, { recursive: true, force: true });
}

function getHookCommandByDescription(hooks, lifecycle, descriptionText) {
  const hookGroup = hooks.hooks[lifecycle]?.find(
    entry => entry.description && entry.description.includes(descriptionText)
  );

  assert.ok(hookGroup, `Expected ${lifecycle} hook matching "${descriptionText}"`);
  assert.ok(hookGroup.hooks?.[0]?.command, `Expected ${lifecycle} hook command for "${descriptionText}"`);
  return hookGroup.hooks[0].command;
}

// Test suite
async function runTests() {
  console.log('\n=== Hook Integration Tests ===\n');
@@ -253,7 +263,11 @@ async function runTests() {

  if (await asyncTest('dev server hook transforms command to tmux session', async () => {
    // Test the auto-tmux dev hook — transforms dev commands to run in tmux
    const hookCommand = hooks.hooks.PreToolUse[0].hooks[0].command;
    const hookCommand = getHookCommandByDescription(
      hooks,
      'PreToolUse',
      'Auto-start dev servers in tmux'
    );
    const result = await runHookCommand(hookCommand, {
      tool_input: { command: 'npm run dev' }
    });
@@ -280,7 +294,11 @@ async function runTests() {

  if (await asyncTest('dev server hook transforms yarn dev to tmux session', async () => {
    // The auto-tmux dev hook transforms dev commands (yarn dev, npm run dev, etc.)
    const hookCommand = hooks.hooks.PreToolUse[0].hooks[0].command;
    const hookCommand = getHookCommandByDescription(
      hooks,
      'PreToolUse',
      'Auto-start dev servers in tmux'
    );
    const result = await runHookCommand(hookCommand, {
      tool_input: { command: 'yarn dev' }
    });
@@ -295,6 +313,50 @@ async function runTests() {
    }
  })) passed++; else failed++;

  if (await asyncTest('MCP health hook blocks unhealthy MCP tool calls through hooks.json', async () => {
    const hookCommand = getHookCommandByDescription(
      hooks,
      'PreToolUse',
      'Check MCP server health before MCP tool execution'
    );

    const testDir = createTestDir();
    const configPath = path.join(testDir, 'claude.json');
    const statePath = path.join(testDir, 'mcp-health.json');
    const serverScript = path.join(testDir, 'broken-mcp.js');

    try {
      fs.writeFileSync(serverScript, 'process.exit(1);\n');
      fs.writeFileSync(
        configPath,
        JSON.stringify({
          mcpServers: {
            broken: {
              command: process.execPath,
              args: [serverScript]
            }
          }
        })
      );

      const result = await runHookCommand(
        hookCommand,
        { tool_name: 'mcp__broken__search', tool_input: {} },
        {
          CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
          ECC_MCP_CONFIG_PATH: configPath,
          ECC_MCP_HEALTH_STATE_PATH: statePath,
          ECC_MCP_HEALTH_TIMEOUT_MS: '100'
        }
      );

      assert.strictEqual(result.code, 2, 'Expected unhealthy MCP preflight to block');
      assert.ok(result.stderr.includes('broken is unavailable'), `Expected health warning, got: ${result.stderr}`);
    } finally {
      cleanupTestDir(testDir);
    }
  })) passed++; else failed++;

  if (await asyncTest('hooks handle missing files gracefully', async () => {
    const testDir = createTestDir();
    const transcriptPath = path.join(testDir, 'nonexistent.jsonl');
@@ -673,6 +735,7 @@ async function runTests() {

        const isInline = hook.command.startsWith('node -e');
        const isFilePath = hook.command.startsWith('node "');
        const isNpx = hook.command.startsWith('npx ');
        const isShellWrapper =
          hook.command.startsWith('bash "') ||
          hook.command.startsWith('sh "') ||
@@ -681,8 +744,8 @@ async function runTests() {
        const isShellScriptPath = hook.command.endsWith('.sh');

        assert.ok(
          isInline || isFilePath || isShellWrapper || isShellScriptPath,
          `Hook command in ${hookType} should be node -e, node script, or shell wrapper/script, got: ${hook.command.substring(0, 80)}`
          isInline || isFilePath || isNpx || isShellWrapper || isShellScriptPath,
          `Hook command in ${hookType} should be node -e, node script, npx, or shell wrapper/script, got: ${hook.command.substring(0, 80)}`
        );
      }
    }

@@ -1,11 +1,13 @@
 /**
- * Tests for agent description compression and lazy loading.
+ * Tests for scripts/lib/agent-compress.js
  *
+ * Run with: node tests/lib/agent-compress.test.js
  */
 
 const assert = require('assert');
-const path = require('path');
 const fs = require('fs');
+const os = require('os');
+const path = require('path');
 
 const {
   parseFrontmatter,
@@ -18,273 +20,249 @@ const {
   lazyLoadAgent,
 } = require('../../scripts/lib/agent-compress');
 
-function createTempDir(prefix) {
-  return fs.mkdtempSync(path.join(os.tmpdir(), prefix));
-}
-
-function cleanupTempDir(dirPath) {
-  fs.rmSync(dirPath, { recursive: true, force: true });
-}
-
-function writeAgent(dir, name, content) {
-  fs.writeFileSync(path.join(dir, `${name}.md`), content, 'utf8');
-}
-
-const SAMPLE_AGENT = `---
-name: test-agent
-description: A test agent for unit testing purposes.
-tools: ["Read", "Grep", "Glob"]
-model: sonnet
----
-
-You are a test agent that validates compression logic.
-
-## Your Role
-
-- Run unit tests
-- Validate compression output
-- Ensure correctness
-
-## Process
-
-### 1. Setup
-- Prepare test fixtures
-- Load agent files
-
-### 2. Validate
-Check the output format and content.
-`;
-
-const MINIMAL_AGENT = `---
-name: minimal
-description: Minimal agent.
-tools: ["Read"]
-model: haiku
----
-
-Short body.
-`;
-
-async function test(name, fn) {
+function test(name, fn) {
   try {
-    await fn();
+    fn();
     console.log(`  \u2713 ${name}`);
     return true;
-  } catch (error) {
+  } catch (err) {
     console.log(`  \u2717 ${name}`);
-    console.log(`    Error: ${error.message}`);
+    console.log(`    Error: ${err.message}`);
     return false;
   }
 }
 
-async function runTests() {
+function runTests() {
   console.log('\n=== Testing agent-compress ===\n');
 
   let passed = 0;
   let failed = 0;
 
-  if (await test('parseFrontmatter extracts YAML frontmatter and body', async () => {
-    const { frontmatter, body } = parseFrontmatter(SAMPLE_AGENT);
+  // --- parseFrontmatter ---
+
+  if (test('parseFrontmatter extracts YAML frontmatter and body', () => {
+    const content = '---\nname: test-agent\ndescription: A test\ntools: ["Read", "Grep"]\nmodel: sonnet\n---\n\nBody text here.';
+    const { frontmatter, body } = parseFrontmatter(content);
     assert.strictEqual(frontmatter.name, 'test-agent');
-    assert.strictEqual(frontmatter.description, 'A test agent for unit testing purposes.');
-    assert.deepStrictEqual(frontmatter.tools, ['Read', 'Grep', 'Glob']);
+    assert.strictEqual(frontmatter.description, 'A test');
+    assert.deepStrictEqual(frontmatter.tools, ['Read', 'Grep']);
     assert.strictEqual(frontmatter.model, 'sonnet');
-    assert.ok(body.includes('You are a test agent'));
-  })) passed += 1; else failed += 1;
+    assert.ok(body.includes('Body text here.'));
+  })) passed++; else failed++;
 
-  if (await test('parseFrontmatter handles content without frontmatter', async () => {
-    const { frontmatter, body } = parseFrontmatter('Just a plain document.');
+  if (test('parseFrontmatter handles content without frontmatter', () => {
+    const content = 'Just a regular markdown file.';
+    const { frontmatter, body } = parseFrontmatter(content);
     assert.deepStrictEqual(frontmatter, {});
-    assert.strictEqual(body, 'Just a plain document.');
-  })) passed += 1; else failed += 1;
+    assert.strictEqual(body, content);
+  })) passed++; else failed++;
 
-  if (await test('extractSummary returns the first paragraph of the body', async () => {
-    const { body } = parseFrontmatter(SAMPLE_AGENT);
+  if (test('parseFrontmatter handles colons in values', () => {
+    const content = '---\nname: test\ndescription: Use this: it works\n---\n\nBody.';
+    const { frontmatter } = parseFrontmatter(content);
+    assert.strictEqual(frontmatter.description, 'Use this: it works');
+  })) passed++; else failed++;
+
+  if (test('parseFrontmatter strips surrounding quotes', () => {
+    const content = '---\nname: "quoted-name"\n---\n\nBody.';
+    const { frontmatter } = parseFrontmatter(content);
+    assert.strictEqual(frontmatter.name, 'quoted-name');
+  })) passed++; else failed++;
+
+  if (test('parseFrontmatter handles content ending right after closing ---', () => {
+    const content = '---\nname: test\ndescription: No body\n---';
+    const { frontmatter, body } = parseFrontmatter(content);
+    assert.strictEqual(frontmatter.name, 'test');
+    assert.strictEqual(frontmatter.description, 'No body');
+    assert.strictEqual(body, '');
+  })) passed++; else failed++;
+
+  // --- extractSummary ---
+
+  if (test('extractSummary returns the first paragraph of the body', () => {
+    const body = '# Heading\n\nThis is the first paragraph. It has two sentences.\n\nSecond paragraph.';
     const summary = extractSummary(body);
-    assert.ok(summary.includes('test agent'));
-    assert.ok(summary.includes('compression logic'));
-  })) passed += 1; else failed += 1;
+    assert.strictEqual(summary, 'This is the first paragraph.');
+  })) passed++; else failed++;
 
-  if (await test('extractSummary returns empty string for empty body', async () => {
+  if (test('extractSummary returns empty string for empty body', () => {
     assert.strictEqual(extractSummary(''), '');
-    assert.strictEqual(extractSummary('# Just a heading'), '');
-  })) passed += 1; else failed += 1;
+    assert.strictEqual(extractSummary('# Only Headings\n\n## Another'), '');
+  })) passed++; else failed++;
 
-  if (await test('loadAgent reads and parses a single agent file', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
-      const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
-      assert.strictEqual(agent.name, 'test-agent');
-      assert.strictEqual(agent.fileName, 'test-agent');
-      assert.deepStrictEqual(agent.tools, ['Read', 'Grep', 'Glob']);
-      assert.strictEqual(agent.model, 'sonnet');
-      assert.ok(agent.byteSize > 0);
-      assert.ok(agent.body.includes('You are a test agent'));
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
+  if (test('extractSummary skips code blocks', () => {
+    const body = '```\ncode here\n```\n\nActual summary sentence.';
+    const summary = extractSummary(body);
+    assert.strictEqual(summary, 'Actual summary sentence.');
+  })) passed++; else failed++;
 
-  if (await test('loadAgents reads all .md files from a directory', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
-      writeAgent(tmpDir, 'agent-b', MINIMAL_AGENT);
-      const agents = loadAgents(tmpDir);
-      assert.strictEqual(agents.length, 2);
-      assert.strictEqual(agents[0].fileName, 'agent-a');
-      assert.strictEqual(agents[1].fileName, 'agent-b');
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
+  if (test('extractSummary respects maxSentences', () => {
+    const body = 'First sentence. Second sentence. Third sentence.';
+    const one = extractSummary(body, 1);
+    const two = extractSummary(body, 2);
+    assert.strictEqual(one, 'First sentence.');
+    assert.strictEqual(two, 'First sentence. Second sentence.');
+  })) passed++; else failed++;
 
-  if (await test('loadAgents returns empty array for non-existent directory', async () => {
-    const agents = loadAgents('/tmp/nonexistent-ecc-dir-12345');
+  if (test('extractSummary skips plain bullet items', () => {
+    const body = '- plain bullet\n- another bullet\n\nActual paragraph here.';
+    const summary = extractSummary(body);
+    assert.strictEqual(summary, 'Actual paragraph here.');
+  })) passed++; else failed++;
+
+  if (test('extractSummary skips asterisk bullets and numbered lists', () => {
+    const body = '* star bullet\n1. numbered item\n2. second item\n\nReal paragraph.';
+    const summary = extractSummary(body);
+    assert.strictEqual(summary, 'Real paragraph.');
+  })) passed++; else failed++;
+
+  // --- loadAgent / loadAgents ---
+
+  // Create a temp directory with test agent files
+  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'agent-compress-test-'));
+  const agentContent = '---\nname: test-agent\ndescription: A test agent\ntools: ["Read"]\nmodel: haiku\n---\n\nTest agent body paragraph.\n\n## Details\nMore info.';
+  fs.writeFileSync(path.join(tmpDir, 'test-agent.md'), agentContent);
+  fs.writeFileSync(path.join(tmpDir, 'not-an-agent.txt'), 'ignored');
+
+  if (test('loadAgent reads and parses a single agent file', () => {
+    const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
+    assert.strictEqual(agent.name, 'test-agent');
+    assert.strictEqual(agent.description, 'A test agent');
+    assert.deepStrictEqual(agent.tools, ['Read']);
+    assert.strictEqual(agent.model, 'haiku');
+    assert.ok(agent.body.includes('Test agent body paragraph'));
+    assert.strictEqual(agent.fileName, 'test-agent');
+    assert.ok(agent.byteSize > 0);
+  })) passed++; else failed++;
+
+  if (test('loadAgents reads all .md files from a directory', () => {
+    const agents = loadAgents(tmpDir);
+    assert.strictEqual(agents.length, 1);
+    assert.strictEqual(agents[0].name, 'test-agent');
+  })) passed++; else failed++;
+
+  if (test('loadAgents returns empty array for non-existent directory', () => {
+    const agents = loadAgents(path.join(os.tmpdir(), 'does-not-exist-agent-compress-test'));
     assert.deepStrictEqual(agents, []);
-  })) passed += 1; else failed += 1;
+  })) passed++; else failed++;
 
-  if (await test('compressToCatalog strips body and keeps only metadata', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
-      const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
-      const catalog = compressToCatalog(agent);
-      assert.strictEqual(catalog.name, 'test-agent');
-      assert.strictEqual(catalog.description, 'A test agent for unit testing purposes.');
-      assert.deepStrictEqual(catalog.tools, ['Read', 'Grep', 'Glob']);
-      assert.strictEqual(catalog.model, 'sonnet');
-      assert.strictEqual(catalog.body, undefined);
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
+  // --- compressToCatalog / compressToSummary ---
+
+  const sampleAgent = loadAgent(path.join(tmpDir, 'test-agent.md'));
 
-  if (await test('compressToSummary includes first paragraph summary', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
-      const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
-      const summary = compressToSummary(agent);
-      assert.strictEqual(summary.name, 'test-agent');
-      assert.ok(summary.summary.length > 0);
-      assert.strictEqual(summary.body, undefined);
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
+  if (test('compressToCatalog strips body and keeps only metadata', () => {
+    const catalog = compressToCatalog(sampleAgent);
+    assert.strictEqual(catalog.name, 'test-agent');
+    assert.strictEqual(catalog.description, 'A test agent');
+    assert.deepStrictEqual(catalog.tools, ['Read']);
+    assert.strictEqual(catalog.model, 'haiku');
+    assert.strictEqual(catalog.body, undefined);
+    assert.strictEqual(catalog.byteSize, undefined);
+  })) passed++; else failed++;
+
+  if (test('compressToSummary includes first paragraph summary', () => {
+    const summary = compressToSummary(sampleAgent);
+    assert.strictEqual(summary.name, 'test-agent');
+    assert.ok(summary.summary.includes('Test agent body paragraph'));
+    assert.strictEqual(summary.body, undefined);
+  })) passed++; else failed++;
 
-  if (await test('buildAgentCatalog in catalog mode produces minimal output with stats', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
-      writeAgent(tmpDir, 'agent-b', MINIMAL_AGENT);
-
-      const result = buildAgentCatalog(tmpDir, { mode: 'catalog' });
-      assert.strictEqual(result.agents.length, 2);
-      assert.strictEqual(result.stats.totalAgents, 2);
-      assert.strictEqual(result.stats.mode, 'catalog');
-      assert.ok(result.stats.originalBytes > 0);
-      assert.ok(result.stats.compressedBytes > 0);
-      assert.ok(result.stats.compressedBytes < result.stats.originalBytes);
-      assert.ok(result.stats.compressedTokenEstimate > 0);
-
-      // Catalog entries should not have body
-      for (const agent of result.agents) {
-        assert.strictEqual(agent.body, undefined);
-        assert.ok(agent.name);
-        assert.ok(agent.description);
-      }
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
-
-  if (await test('buildAgentCatalog in summary mode includes summaries', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
-
-      const result = buildAgentCatalog(tmpDir, { mode: 'summary' });
-      assert.strictEqual(result.agents.length, 1);
-      assert.ok(result.agents[0].summary);
-      assert.strictEqual(result.agents[0].body, undefined);
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
-
-  if (await test('buildAgentCatalog in full mode preserves body', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
-
-      const result = buildAgentCatalog(tmpDir, { mode: 'full' });
-      assert.strictEqual(result.agents.length, 1);
-      assert.ok(result.agents[0].body.includes('You are a test agent'));
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
-
-  if (await test('buildAgentCatalog supports filter function', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'agent-a', SAMPLE_AGENT);
-      writeAgent(tmpDir, 'agent-b', MINIMAL_AGENT);
-
-      const result = buildAgentCatalog(tmpDir, {
-        mode: 'catalog',
-        filter: agent => agent.model === 'haiku',
-      });
-      assert.strictEqual(result.agents.length, 1);
-      assert.strictEqual(result.agents[0].name, 'minimal');
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
-
-  if (await test('lazyLoadAgent loads a single agent by name', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      writeAgent(tmpDir, 'test-agent', SAMPLE_AGENT);
-      writeAgent(tmpDir, 'other', MINIMAL_AGENT);
-
-      const agent = lazyLoadAgent(tmpDir, 'test-agent');
-      assert.ok(agent);
-      assert.strictEqual(agent.name, 'test-agent');
-      assert.ok(agent.body.includes('You are a test agent'));
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
-
-  if (await test('lazyLoadAgent returns null for non-existent agent', async () => {
-    const tmpDir = createTempDir('ecc-agent-compress-');
-    try {
-      const agent = lazyLoadAgent(tmpDir, 'nonexistent');
-      assert.strictEqual(agent, null);
-    } finally {
-      cleanupTempDir(tmpDir);
-    }
-  })) passed += 1; else failed += 1;
-
-  if (await test('buildAgentCatalog works with real agents directory', async () => {
-    const agentsDir = path.join(__dirname, '..', '..', 'agents');
-    if (!fs.existsSync(agentsDir)) {
-      // Skip if agents dir doesn't exist (shouldn't happen in this repo)
-      return;
-    }
-
-    const result = buildAgentCatalog(agentsDir, { mode: 'catalog' });
-    assert.ok(result.agents.length > 0, 'Should find at least one agent');
-    assert.ok(result.stats.compressedBytes < result.stats.originalBytes,
-      'Catalog mode should be smaller than full agent files');
-  })) passed += 1; else failed += 1;
+  // --- buildAgentCatalog ---
+
+  if (test('buildAgentCatalog in catalog mode produces minimal output with stats', () => {
+    const result = buildAgentCatalog(tmpDir, { mode: 'catalog' });
+    assert.strictEqual(result.agents.length, 1);
+    assert.strictEqual(result.agents[0].body, undefined);
+    assert.strictEqual(result.stats.totalAgents, 1);
+    assert.strictEqual(result.stats.mode, 'catalog');
+    assert.ok(result.stats.originalBytes > 0);
+    assert.ok(result.stats.compressedBytes < result.stats.originalBytes);
+    assert.ok(result.stats.compressedTokenEstimate > 0);
+  })) passed++; else failed++;
 
+  if (test('buildAgentCatalog in summary mode includes summaries', () => {
+    const result = buildAgentCatalog(tmpDir, { mode: 'summary' });
+    assert.ok(result.agents[0].summary);
+    assert.strictEqual(result.agents[0].body, undefined);
+  })) passed++; else failed++;
+
+  if (test('buildAgentCatalog in full mode preserves body', () => {
+    const result = buildAgentCatalog(tmpDir, { mode: 'full' });
+    assert.ok(result.agents[0].body);
+  })) passed++; else failed++;
+
+  if (test('buildAgentCatalog throws on invalid mode', () => {
+    assert.throws(
+      () => buildAgentCatalog(tmpDir, { mode: 'invalid' }),
+      /Invalid mode "invalid"/
+    );
+  })) passed++; else failed++;
+
+  if (test('buildAgentCatalog supports filter function', () => {
+    // Add a second agent
+    fs.writeFileSync(
+      path.join(tmpDir, 'other-agent.md'),
+      '---\nname: other\ndescription: Other agent\ntools: ["Bash"]\nmodel: opus\n---\n\nOther body.'
+    );
+    const result = buildAgentCatalog(tmpDir, {
+      filter: a => a.model === 'opus',
+    });
+    assert.strictEqual(result.agents.length, 1);
+    assert.strictEqual(result.agents[0].name, 'other');
+    // Clean up
+    fs.unlinkSync(path.join(tmpDir, 'other-agent.md'));
+  })) passed++; else failed++;
+
+  // --- lazyLoadAgent ---
+
+  if (test('lazyLoadAgent loads a single agent by name', () => {
+    const agent = lazyLoadAgent(tmpDir, 'test-agent');
+    assert.ok(agent);
+    assert.strictEqual(agent.name, 'test-agent');
+    assert.ok(agent.body.includes('Test agent body paragraph'));
+  })) passed++; else failed++;
+
+  if (test('lazyLoadAgent returns null for non-existent agent', () => {
+    const agent = lazyLoadAgent(tmpDir, 'does-not-exist');
+    assert.strictEqual(agent, null);
+  })) passed++; else failed++;
+
+  if (test('lazyLoadAgent rejects path traversal attempts', () => {
+    const agent = lazyLoadAgent(tmpDir, '../etc/passwd');
+    assert.strictEqual(agent, null);
+  })) passed++; else failed++;
+
+  if (test('lazyLoadAgent rejects names with invalid characters', () => {
+    const agent = lazyLoadAgent(tmpDir, 'foo/bar');
+    assert.strictEqual(agent, null);
+    const agent2 = lazyLoadAgent(tmpDir, 'foo bar');
+    assert.strictEqual(agent2, null);
+  })) passed++; else failed++;
+
+  // --- Real agents directory ---
+
+  const realAgentsDir = path.resolve(__dirname, '../../agents');
+  if (test('buildAgentCatalog works with real agents directory', () => {
+    if (!fs.existsSync(realAgentsDir)) return; // skip if not present
+    const result = buildAgentCatalog(realAgentsDir, { mode: 'catalog' });
+    assert.ok(result.agents.length > 0, 'Should find at least one agent');
+    assert.ok(result.stats.compressedBytes < result.stats.originalBytes, 'Catalog should be smaller than original');
+    // Verify significant compression ratio
+    const ratio = result.stats.compressedBytes / result.stats.originalBytes;
+    assert.ok(ratio < 0.5, `Compression ratio ${ratio.toFixed(2)} should be < 0.5`);
+  })) passed++; else failed++;
+
+  if (test('catalog mode token estimate is under 5000 for real agents', () => {
+    if (!fs.existsSync(realAgentsDir)) return;
+    const result = buildAgentCatalog(realAgentsDir, { mode: 'catalog' });
+    assert.ok(
+      result.stats.compressedTokenEstimate < 5000,
+      `Token estimate ${result.stats.compressedTokenEstimate} exceeds 5000`
+    );
+  })) passed++; else failed++;
+
+  // Cleanup
+  fs.rmSync(tmpDir, { recursive: true, force: true });
 
   console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
   process.exit(failed > 0 ? 1 : 0);

@@ -215,7 +215,7 @@ function runTests() {
     const result = execFileSync('node', [
       '-e', `console.log(${INLINE_RESOLVE})`,
     ], {
-      env: { PATH: process.env.PATH, HOME: homeDir },
+      env: { PATH: process.env.PATH, HOME: homeDir, USERPROFILE: homeDir },
       encoding: 'utf8',
     }).trim();
     assert.strictEqual(result, expected);
@@ -231,7 +231,7 @@ function runTests() {
     const result = execFileSync('node', [
       '-e', `console.log(${INLINE_RESOLVE})`,
    ], {
-      env: { PATH: process.env.PATH, HOME: homeDir },
+      env: { PATH: process.env.PATH, HOME: homeDir, USERPROFILE: homeDir },
       encoding: 'utf8',
     }).trim();
     assert.strictEqual(result, path.join(homeDir, '.claude'));

@@ -23,7 +23,6 @@ const {
 } = require('../../scripts/lib/install/request');
 
 const {
-  loadInstallManifests,
   listInstallComponents,
   resolveInstallPlan,
 } = require('../../scripts/lib/install-manifests');
@@ -564,7 +563,7 @@ function runTests() {
   const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
 
   try {
-    const result = execFileSync('node', [
+    const _result = execFileSync('node', [
       scriptPath,
       '--profile', 'core',
       '--with', 'capability:security',
