23 Commits

Author SHA1 Message Date
Affaan Mustafa
51f2297581 docs: record ECC Tools followup flood control
Record ECC-Tools PR #30 follow-up flood-control evidence in the ECC 2.0 GA roadmap.
2026-05-12 07:54:15 -04:00
Affaan Mustafa
37f2b32d69 docs: record ECC Tools reference validation evidence
Record ECC-Tools PR #29 reference-set validation evidence in the ECC 2.0 GA roadmap.
2026-05-12 07:39:18 -04:00
Affaan Mustafa
7a4c25f1df docs: record AgentShield corpus benchmark evidence
Record AgentShield PR #60 corpus benchmark evidence in the ECC 2.0 GA roadmap and update the next AgentShield slice.

Validation:
- markdownlint roadmap
- npm test: 2324 passed
- harness audit: 70/70
- harness adapters: PASS, 11 adapters
- observability readiness: 14/14
- GitHub Actions matrix green
2026-05-12 07:15:10 -04:00
Affaan Mustafa
a8c03ad350 docs: record AgentShield HTML report evidence
Records AgentShield PR #59 in the ECC 2.0 GA roadmap and moves the next AgentShield roadmap slice to the remaining prompt-injection benchmark/PDF decision work.

Validation:
- npx --yes markdownlint-cli docs/ECC-2.0-GA-ROADMAP.md
- npm test (2324 tests)
- npm run harness:audit -- --format json (70/70)
- npm run harness:adapters -- --check (PASS, 11 adapters)
- npm run observability:ready (14/14)
- GitHub Actions matrix green on PR #1796
2026-05-12 06:52:33 -04:00
Affaan Mustafa
a96787736d docs: record ECC Tools billing audit evidence (#1794) 2026-05-12 06:25:09 -04:00
Affaan Mustafa
a7699d04ba docs: record AgentShield provenance evidence (#1793) 2026-05-12 06:06:11 -04:00
Affaan Mustafa
0e40ff640c docs: record ECC Tools taxonomy evidence (#1792) 2026-05-12 05:38:35 -04:00
Affaan Mustafa
eebfd5dce2 docs: record AgentShield policy pack evidence (#1791) 2026-05-12 05:13:00 -04:00
Affaan Mustafa
1f50ab1903 docs: record cross repo roadmap evidence (#1790) 2026-05-12 04:40:17 -04:00
Affaan Mustafa
68229a8996 docs: inventory workspace legacy repos (#1789) 2026-05-12 04:08:34 -04:00
Affaan Mustafa
8cbf6763c4 docs: publish stale PR salvage ledger (#1788) 2026-05-12 03:50:34 -04:00
Affaan Mustafa
de559bddd2 docs: inventory legacy artifacts (#1787) 2026-05-12 03:34:18 -04:00
Affaan Mustafa
008ce3081b docs: add release publication readiness gate (#1786) 2026-05-12 03:16:22 -04:00
Affaan Mustafa
cdf1b03779 docs: add data-backed harness adapter scorecard (#1785)
* docs: add data-backed harness adapter scorecard

* fix: normalize adapter matrix line endings

* test: avoid doubled CRLF simulation
2026-05-12 02:59:52 -04:00
Affaan Mustafa
969acd9078 docs: add harness adapter compliance matrix (#1784) 2026-05-12 02:24:04 -04:00
Affaan Mustafa
60bd26fadf docs: refresh ECC 2.0 reference architecture (#1783) 2026-05-12 02:03:07 -04:00
Affaan Mustafa
cb2a70ce72 docs: fix motion skill examples
Fix copied example issues from the adopted #1780 motion skills: live reduced-motion config, tokenized distances/easing/springs, valid shimmer skeleton JSX, and visibility cleanup.
2026-05-12 01:47:05 -04:00
Affaan Mustafa
f219a90f20 feat: add motion system skills
Adopts the motion skill content from PR #1780 and syncs the public catalog counts for the current main surface.

Co-authored-by: Jeff <peacelord1309@gmail.com>
2026-05-12 01:30:41 -04:00
Affaan Mustafa
22aabf7d4f test: harden InsAIts wrapper fake Python shim 2026-05-12 01:13:01 -04:00
Affaan Mustafa
901e41997b test: stabilize MCP stderr probe timeout 2026-05-12 01:13:01 -04:00
Affaan Mustafa
df6078ed1e docs: mirror ECC 2.0 GA roadmap 2026-05-12 01:13:01 -04:00
Affaan Mustafa
e17f2bcb1b feat: salvage network architect agents 2026-05-12 00:32:09 -04:00
Affaan Mustafa
f8070dd640 feat: add PRD planning command flow 2026-05-12 00:06:41 -04:00
36 changed files with 4066 additions and 123 deletions

View File

@@ -11,7 +11,7 @@
{
"name": "ecc",
"source": "./",
"description": "The most comprehensive Claude Code plugin — 56 agents, 217 skills, 72 legacy command shims, selective install profiles, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
"description": "The most comprehensive Claude Code plugin — 58 agents, 220 skills, 74 legacy command shims, selective install profiles, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
"version": "2.0.0-rc.1",
"author": {
"name": "Affaan Mustafa",

View File

@@ -1,7 +1,7 @@
{
"name": "ecc",
"version": "2.0.0-rc.1",
"description": "Battle-tested Claude Code plugin for engineering teams — 56 agents, 217 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use",
"description": "Battle-tested Claude Code plugin for engineering teams — 58 agents, 220 skills, 74 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use",
"author": {
"name": "Affaan Mustafa",
"url": "https://x.com/affaanmustafa"

View File

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 56 specialized agents, 217 skills, 72 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 58 specialized agents, 220 skills, 74 commands, and automated hook workflows for software development.
**Version:** 2.0.0-rc.1
@@ -147,9 +147,9 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
## Project Structure
```
agents/ — 56 specialized subagents
skills/ — 217 workflow skills and domain knowledge
commands/ — 72 slash commands
agents/ — 58 specialized subagents
skills/ — 220 workflow skills and domain knowledge
commands/ — 74 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
scripts/ — Cross-platform Node.js utilities

View File

@@ -358,7 +358,7 @@ If you stacked methods, clean up in this order:
/plugin list ecc@ecc
```
**That's it!** You now have access to 56 agents, 217 skills, and 72 legacy command shims.
**That's it!** You now have access to 58 agents, 220 skills, and 74 legacy command shims.
### Dashboard GUI
@@ -456,7 +456,7 @@ everything-claude-code/
| |-- plugin.json # Plugin metadata and component paths
| |-- marketplace.json # Marketplace catalog for /plugin marketplace add
|
|-- agents/ # 56 specialized subagents for delegation
|-- agents/ # 58 specialized subagents for delegation
| |-- planner.md # Feature implementation planning
| |-- architect.md # System design decisions
| |-- tdd-guide.md # Test-driven development
@@ -1360,9 +1360,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | PASS: 56 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 72 commands | PASS: 35 commands | **Claude Code leads** |
| Skills | PASS: 217 skills | PASS: 37 skills | **Claude Code leads** |
| Agents | PASS: 58 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 74 commands | PASS: 35 commands | **Claude Code leads** |
| Skills | PASS: 220 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1465,9 +1465,9 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
| **Agents** | 56 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 72 | Shared | Instruction-based | 35 |
| **Skills** | 217 | Shared | 10 (native format) | 37 |
| **Agents** | 58 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 74 | Shared | Instruction-based | 35 |
| **Skills** | 220 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

View File

@@ -160,7 +160,7 @@ Copy-Item -Recurse rules/typescript "$HOME/.claude/rules/"
/plugin list ecc@ecc
```
**Done!** You can now use 56 agents, 217 skills, and 72 commands.
**Done!** You can now use 58 agents, 220 skills, and 74 commands.
### multi-* commands need extra configuration

View File

@@ -197,10 +197,12 @@ commands:
- multi-plan
- multi-workflow
- plan
- plan-prd
- pm2
- projects
- promote
- project-init
- pr
- prp-commit
- prp-implement
- prp-plan

View File

@@ -0,0 +1,98 @@
---
name: homelab-architect
description: Designs home and small-lab network plans from hardware inventory, goals, and operator experience level, with safe staged changes and rollback guidance.
tools: ["Read", "Grep"]
model: sonnet
---
You are a practical homelab network architect. Turn a user's hardware inventory,
goals, and comfort level into a staged network plan that avoids lockouts and does
not assume enterprise hardware or deep networking experience.
## Scope
- Home and small-lab gateways, switches, access points, NAS devices, servers,
local DNS, DHCP, guest networks, IoT isolation, and remote access planning.
- Planning and review only. Do not present copy-paste router, firewall, DNS, or
VPN configuration unless the target platform, current topology, backup path,
console access, and rollback plan are known.
Use these focused skills when the request needs detail:
- `homelab-network-readiness` before changing VLAN, DNS, firewall, or VPN setup.
- `homelab-network-setup` for IP ranges, DHCP reservations, cabling, and role
mapping.
- `network-config-validation` when reviewing generated gateway or switch config.
- `network-interface-health` when symptoms point to links, ports, cabling, or
counters.
## Workflow
1. Inventory the hardware: gateway/router, switches, access points, servers,
NAS, DNS resolver, ISP handoff, and remote-access path.
2. Confirm goals: isolation, guest Wi-Fi, ad blocking, local services, remote
access, backups, monitoring, learning lab, or family reliability.
3. Match goals to hardware capability. If the hardware cannot support VLANs,
local DNS, or safe remote access, say so and propose a staged upgrade path.
4. Design the smallest useful topology first, then optional later phases.
5. Define rollback and access safety before any disruptive change.
6. Produce an implementation order that keeps internet, DNS, and management
access recoverable at each step.
## Safety Defaults
- Do not recommend exposing management interfaces to the internet.
- Do not recommend disabling firewall rules, authentication, DNS filtering, or
segmentation as a troubleshooting shortcut.
- Avoid changing DHCP DNS to a local resolver until the resolver has a static
address, health check, and fallback path.
- Avoid VLAN migrations unless the operator can reach the gateway, switch, and
access point after the change.
- Prefer plain-English explanations and small reversible phases.
## Output Format
```text
## Homelab Network Plan: <home or lab name>
### What You Are Building
<short description of the target network>
### Hardware Role Summary
| Device | Role | Notes |
| --- | --- | --- |
### Capability Check
| Goal | Supported now? | Requirement or upgrade |
| --- | --- | --- |
### Addressing And Segmentation
| Network | Purpose | Example range | Notes |
| --- | --- | --- | --- |
### DNS, DHCP, And Local Services
<resolver plan, static reservations, fallback, and service placement>
### Firewall And Access Rules
- <plain-English rule>
- <plain-English rule>
### Implementation Order
1. <safe first step>
2. <validation before next step>
3. <rollback point>
### Quick Wins
1. <small, high-value step>
2. <small, high-value step>
### Later Phases
- <optional future improvement>
### Risks And Rollback
<what can lock the user out and how to recover>
```
When the user is a beginner, explain terms the first time they appear. When the
user is advanced, keep the prose compact and focus on constraints, topology, and
verification.

View File

@@ -0,0 +1,97 @@
---
name: network-architect
description: Designs enterprise or multi-site network architecture from requirements, using existing network skills for focused routing, validation, automation, and troubleshooting detail.
tools: ["Read", "Grep"]
model: sonnet
---
You are a senior network architecture planner. Produce implementable network
designs from business and technical requirements, and route deeper analysis to
the focused ECC network skills instead of inventing device-specific runbooks in
the agent prompt.
## Scope
- Campus, branch, WAN, data center, cloud-adjacent, and hybrid network planning.
- IP addressing, segmentation, routing domains, management-plane access,
redundancy, monitoring, and migration sequencing.
- Design and review only. Do not apply configuration or present live commands as
diagnostics unless they are explicitly read-only.
Use these focused skills when the request needs detail:
- `network-config-validation` for pre-change config review and dangerous command
detection.
- `network-bgp-diagnostics` for BGP neighbor, route-policy, and prefix evidence.
- `network-interface-health` for link, counter, CRC, drop, and flap analysis.
- `cisco-ios-patterns` for IOS/IOS-XE syntax and safe show-command workflows.
- `netmiko-ssh-automation` for bounded read-only network automation patterns.
## Workflow
1. Restate the objective, constraints, and non-goals.
2. Identify missing requirements that materially change the architecture:
site count, user/device count, critical applications, compliance scope,
uptime target, existing hardware, budget tier, and cutover tolerance.
3. Pick the topology and explain why it fits the constraints.
4. Design routing and segmentation before discussing hardware.
5. Define the management plane, logging, monitoring, backup, and rollback model.
6. Produce a phased implementation plan with validation gates and rollback
points.
7. List residual risks and the evidence still needed from operators.
## Design Defaults
- Prefer routed boundaries over stretched layer-2 designs unless a workload
requirement proves otherwise.
- Prefer explicit segmentation for management, server, user, guest, IoT/OT, and
regulated environments.
- Avoid naming exact hardware models unless the user already supplied a vendor or
procurement standard. Recommend capacity classes, redundancy needs, port
counts, support expectations, and feature requirements instead.
- Do not assume BGP, OSPF, EVPN, SD-WAN, or microsegmentation are required. Pick
the simplest design that satisfies scale, operations, and risk.
- Treat security controls as part of the architecture, not an afterthought.
## Output Format
```text
## Network Architecture: <project or environment>
### Objective
<what this design is for>
### Assumptions And Required Follow-Up
- <assumption>
- <question that would change the design>
### Recommended Topology
<topology choice and reasoning>
### Addressing And Segmentation
| Zone / domain | Purpose | Routing boundary | Allowed flows |
| --- | --- | --- | --- |
### Routing And Connectivity
<protocols, route boundaries, summarization, failover, and cloud/WAN notes>
### Management, Observability, And Backup
<management access, logging, config backup, monitoring, and alerting>
### Implementation Phases
1. <phase with validation gate>
2. <phase with rollback point>
### Risks And Mitigations
| Risk | Impact | Mitigation |
| --- | --- | --- |
### Handoff To Focused Skills
- `network-config-validation`: <what to validate next>
- `network-bgp-diagnostics`: <if applicable>
- `network-interface-health`: <if applicable>
```
Keep the plan concrete, but label unknowns clearly. If a live change could lock
operators out, require console or out-of-band access, a backup, a maintenance
window, and rollback steps before recommending it.

View File

@@ -99,7 +99,7 @@ If PR not found, stop with error. Store PR metadata for later phases.
Build review context:
1. **Project rules** — Read `CLAUDE.md`, `.claude/docs/`, and any contributing guidelines
2. **PRP artifacts** — Check `.claude/PRPs/reports/` and `.claude/PRPs/plans/` for implementation context related to this PR
2. **Planning artifacts** — Check `.claude/prds/`, `.claude/plans/`, `.claude/reviews/`, and legacy `.claude/PRPs/{prds,plans,reports,reviews}/` for context related to this PR
3. **PR intent** — Parse PR description for goals, linked issues, test plans
4. **Changed files** — List all modified files and categorize by type (source, test, config, docs)
@@ -188,7 +188,7 @@ Special cases:
### Phase 6 — REPORT
Create review artifact at `.claude/PRPs/reviews/pr-<NUMBER>-review.md`:
Create review artifact at `.claude/reviews/pr-<NUMBER>-review.md` unless the repo already uses legacy `.claude/PRPs/reviews/` for this workstream:
```markdown
# PR Review: #<NUMBER> — <TITLE>
@@ -273,7 +273,7 @@ Issues: <critical_count> critical, <high_count> high, <medium_count> medium, <lo
Validation: <pass_count>/<total_count> checks passed
Artifacts:
Review: .claude/PRPs/reviews/pr-<NUMBER>-review.md
Review: .claude/reviews/pr-<NUMBER>-review.md
GitHub: <PR URL>
Next steps:

commands/plan-prd.md Normal file
View File

@@ -0,0 +1,160 @@
---
description: "Generate a lean, problem-first PRD and hand off to /plan for implementation planning."
argument-hint: "[product/feature idea] (blank = start with questions)"
---
# PRD Command
Produces a **Product Requirements Document** — the requirements-phase artifact of the SDLC. Captures *what* must be true for success and *why*, and stops before *how*. Implementation decomposition is delegated to `/plan`.
**Input**: `$ARGUMENTS`
## Scope of this command
| This command does | This command does NOT do |
|---|---|
| Frame the problem and users | Design the architecture |
| Capture success criteria and scope | Pick files or write patterns |
| List open questions and risks | Enumerate implementation tasks |
| Write `.claude/prds/{name}.prd.md` | Produce an implementation plan — that's `/plan` |
If you find yourself writing implementation detail, stop and cut it. It belongs in `/plan`.
**Anti-fluff rule**: When information is missing, write `TBD — needs validation via {method}`. Never invent plausible-sounding requirements.
## Workflow
Four phases. Each phase is a single gate — ask the questions, wait for the user, then move on. No nested loops, no parallel research ceremony.
### Phase 1 — FRAME
If `$ARGUMENTS` is empty, ask:
> What do you want to build? One or two sentences.
If provided, restate in one sentence and ask:
> I understand: *{restated}*. Correct, or should I adjust?
Then ask the framing questions in a single set:
> 1. **Who** has this problem? (specific role or segment)
> 2. **What** is the observable pain? (describe behavior, not assumed needs)
> 3. **Why** can't they solve it with what exists today?
> 4. **Why now?** — what changed that makes this worth doing?
Wait for the user. Do not proceed without answers (or explicit "skip").
### Phase 2 — GROUND
Ask for evidence. This is the shortest phase and the most load-bearing:
> What evidence do you have that this problem is real and worth solving? (user quotes, support tickets, metrics, observed behavior, failed workarounds — anything concrete)
If the user has none, record the PRD's Evidence section as `Assumption — needs validation via {user research | analytics | prototype}`. This keeps the PRD honest.
### Phase 3 — DECIDE
Scope and hypothesis in a single set:
> 1. **Hypothesis** — Complete: *We believe **{capability}** will **{solve problem}** for **{users}**. We'll know we're right when **{measurable outcome}**.*
> 2. **MVP** — The minimum needed to test the hypothesis?
> 3. **Out of scope** — What are you explicitly **not** building (even if users ask)?
> 4. **Open questions** — Uncertainties that could change the approach?
Wait for responses.
### Phase 4 — GENERATE & HAND OFF
Create the directory if needed, write the PRD, and report.
```bash
mkdir -p .claude/prds
```
**Output path**: `.claude/prds/{kebab-case-name}.prd.md`
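The kebab-case conversion implied by the output path could be sketched as a small Node helper (hypothetical; `kebabCase` is an illustrative name, not a function shipped by the plugin):

```javascript
// Hypothetical kebab-case helper for deriving the PRD filename.
function kebabCase(name) {
  return name
    .trim()
    .replace(/[^a-zA-Z0-9]+/g, '-')           // runs of non-alphanumerics become one hyphen
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2')   // split camelCase boundaries
    .toLowerCase()
    .replace(/^-+|-+$/g, '');                 // trim leading/trailing hyphens
}
```

For example, `kebabCase('User Onboarding Flow')` yields `user-onboarding-flow`, giving the path `.claude/prds/user-onboarding-flow.prd.md`.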
#### PRD Template
```markdown
# {Product / Feature Name}
## Problem
{2-3 sentences: who has what problem, and what's the cost of leaving it unsolved?}
## Evidence
- {User quote, data point, or observation}
- {OR: "Assumption — needs validation via {method}"}
## Users
- **Primary**: {role, context, what triggers the need}
- **Not for**: {who this explicitly excludes}
## Hypothesis
We believe **{capability}** will **{solve problem}** for **{users}**.
We'll know we're right when **{measurable outcome}**.
## Success Metrics
| Metric | Target | How measured |
|---|---|---|
| {primary} | {number} | {method} |
## Scope
**MVP** — {the minimum to test the hypothesis}
**Out of scope**
- {item} — {why deferred}
## Delivery Milestones
<!-- Business outcomes, not engineering tasks. /plan turns each into a plan. -->
<!-- Status: pending | in-progress | complete -->
| # | Milestone | Outcome | Status | Plan |
|---|---|---|---|---|
| 1 | {name} | {user-visible change} | pending | — |
| 2 | {name} | {user-visible change} | pending | — |
## Open Questions
- [ ] {question that could change scope or approach}
## Risks
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
---
*Status: DRAFT — requirements only. Implementation planning pending via /plan.*
```
#### Report to user
```
PRD created: .claude/prds/{name}.prd.md
Problem: {one line}
Hypothesis: {one line}
MVP: {one line}
Validation status:
Problem {validated | assumption}
Users {concrete | generic — refine}
Metrics {defined | TBD}
Open questions: {count}
Next step: /plan .claude/prds/{name}.prd.md
→ /plan will pick the next pending milestone and produce an implementation plan.
```
## Integration
- `/plan <prd-path>` — consume the PRD and produce an implementation plan for the next pending milestone.
- `tdd-workflow` skill — implement the plan test-first.
- `/pr` — open a PR that references the PRD and plan.
## Success criteria
- **PROBLEM_CLEAR**: problem is specific and evidenced (or flagged as assumption).
- **USER_CONCRETE**: primary user is a specific role, not "users".
- **HYPOTHESIS_TESTABLE**: measurable outcome included.
- **SCOPE_BOUNDED**: explicit MVP and explicit out-of-scope.
- **NO_IMPLEMENTATION_DETAIL**: file paths, libraries, or task breakdowns are absent — if they appeared, move them to the `/plan` step.

View File

@@ -1,10 +1,11 @@
---
description: Restate requirements, assess risks, and create step-by-step implementation plan. WAIT for user CONFIRM before touching any code.
argument-hint: "[feature description | path/to/*.prd.md]"
---
# Plan Command
This command creates a comprehensive implementation plan before writing any code.
This command creates a comprehensive implementation plan before writing any code. It accepts either free-form requirements or a PRD markdown file.
Run inline by default. Do not call the Task tool or any subagent by default. This keeps `/plan` usable from plugin installs that ship commands without agent files.
@@ -29,11 +30,86 @@ Use `/plan` when:
The assistant will:
1. **Analyze the request** and restate requirements in clear terms
2. **Break down into phases** with specific, actionable steps
3. **Identify dependencies** between components
4. **Assess risks** and potential blockers
5. **Estimate complexity** (High/Medium/Low)
6. **Present the plan** and WAIT for your explicit confirmation
2. **Ground the plan** in relevant codebase patterns when the repo is available
3. **Break down into phases** with specific, actionable steps
4. **Identify dependencies** between components
5. **Assess risks** and potential blockers
6. **Estimate complexity** (High/Medium/Low)
7. **Present the plan** and WAIT for your explicit confirmation
## Input Modes
| Input | Mode | Behavior |
|---|---|---|
| `path/to/name.prd.md` | PRD artifact mode | Read the PRD, pick the next pending delivery milestone or implementation phase, and write `.claude/plans/{name}.plan.md` |
| Any other markdown path | Reference mode | Read the file as context and produce an inline plan |
| Free-form text | Conversational mode | Produce an inline plan |
| Empty input | Clarification mode | Ask what should be planned |
In PRD artifact mode, create `.claude/plans/` if needed. If the PRD contains a `Delivery Milestones` table, update only the selected row from `pending` to `in-progress` and set its `Plan` cell to the generated plan path. If the PRD uses the legacy `.claude/PRPs/prds/` format with `Implementation Phases`, read it without migrating paths.
## Pattern Grounding
Before writing the plan, search the codebase for conventions the implementation should mirror. Capture the top example for each relevant category with file references:
| Category | What to capture |
|---|---|
| Naming | File, function, type, command, or script naming in the affected area |
| Error handling | How failures are raised, returned, logged, or handled gracefully |
| Logging | Levels, format, and what gets logged |
| Data access | Repository, service, query, or filesystem patterns |
| Tests | Test file location, framework, fixtures, and assertion style |
If no similar code exists, state that explicitly. Do not invent a pattern.
## PRD Artifact Output
When called with a `.prd.md` file, write the plan to `.claude/plans/{kebab-case-name}.plan.md` using this structure:
````markdown
# Plan: {Feature Name}
**Source PRD**: {path}
**Selected Milestone**: {milestone or phase name}
**Complexity**: {Small | Medium | Large}
## Summary
{2-3 sentences}
## Patterns to Mirror
| Category | Source | Pattern |
|---|---|---|
| Naming | `path:line` | {short description} |
| Errors | `path:line` | {short description} |
| Tests | `path:line` | {short description} |
## Files to Change
| File | Action | Why |
|---|---|---|
| `path` | CREATE / UPDATE / DELETE | {reason} |
## Tasks
### Task 1: {name}
- **Action**: {what to do}
- **Mirror**: {pattern to follow}
- **Validate**: {command that proves correctness}
## Validation
```bash
{project-specific validation commands}
```
## Risks
| Risk | Likelihood | Mitigation |
|---|---|---|
## Acceptance
- [ ] All tasks complete
- [ ] Validation passes
- [ ] Patterns mirrored, not reinvented
````
After writing the artifact, report its path and WAIT for confirmation before writing code.
## Example Usage
@@ -108,8 +184,11 @@ After planning:
- Use the `tdd-workflow` skill to implement with test-driven development
- Use `/build-fix` if build errors occur
- Use `/code-review` to review completed implementation
- Use `/pr` or `/prp-pr` to open a pull request
> **Need deeper planning?** Use `/prp-plan` for artifact-producing planning with PRD integration, codebase analysis, and pattern extraction. Use `/prp-implement` to execute those plans with rigorous validation loops.
> **Need requirements first?** Use `/plan-prd` for a lean PRD at `.claude/prds/{name}.prd.md`.
>
> **Need the legacy PRP flow?** Use `/prp-plan` for deep PRP planning with `.claude/PRPs/` artifacts. Use `/prp-implement` to execute those plans with rigorous validation loops.
## Optional Planner Agent

commands/pr.md Normal file
View File

@@ -0,0 +1,184 @@
---
description: "Create a GitHub PR from current branch with unpushed commits — discovers templates, analyzes changes, pushes"
argument-hint: "[base-branch] (default: main)"
---
# Create Pull Request
**Input**: `$ARGUMENTS` — optional, may contain a base branch name and/or flags (e.g., `--draft`).
**Parse `$ARGUMENTS`**:
- Extract any recognized flags (`--draft`)
- Treat remaining non-flag text as the base branch name
- Default base branch to `main` if none specified
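The parsing rules above could be sketched in Node as follows (a hypothetical helper for illustration, not part of the committed command file; `parsePrArgs` is an assumed name):

```javascript
// Hypothetical sketch of the $ARGUMENTS parsing rules above.
function parsePrArgs(args) {
  const tokens = args.trim().split(/\s+/).filter(Boolean);
  const draft = tokens.includes('--draft');                // recognized flag
  const rest = tokens.filter((t) => !t.startsWith('--'));  // remaining non-flag text
  return { draft, base: rest[0] || 'main' };               // default base branch: main
}
```

For example, `parsePrArgs('--draft release')` yields `{ draft: true, base: 'release' }`.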
---
## Phase 1 — VALIDATE
Check preconditions:
```bash
git branch --show-current
git status --short
git log origin/<base>..HEAD --oneline
```
| Check | Condition | Action if Failed |
|---|---|---|
| Not on base branch | Current branch ≠ base | Stop: "Switch to a feature branch first." |
| Clean working directory | No uncommitted changes | Warn: "You have uncommitted changes. Commit or stash first." |
| Has commits ahead | `git log origin/<base>..HEAD` not empty | Stop: "No commits ahead of `<base>`. Nothing to PR." |
| No existing PR | `gh pr list --head <branch> --json number` is empty | Stop: "PR already exists: #<number>. Use `gh pr view <number> --web` to open it." |
If all checks pass, proceed.
---
## Phase 2 — DISCOVER
### PR Template
Search for PR template in order:
1. `.github/PULL_REQUEST_TEMPLATE/` directory — if exists, list files and let user choose (or use `default.md`)
2. `.github/PULL_REQUEST_TEMPLATE.md`
3. `.github/pull_request_template.md`
4. `docs/pull_request_template.md`
If found, read it and use its structure for the PR body.
### Commit Analysis
```bash
git log origin/<base>..HEAD --format="%h %s" --reverse
```
Analyze commits to determine:
- **PR title**: Use conventional commit format with type prefix — `feat: ...`, `fix: ...`, etc.
- If multiple types, use the dominant one
- If single commit, use its message as-is
- **Change summary**: Group commits by type/area
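The dominant-type rule could be sketched like this (hypothetical illustration only; `prTitle` is an assumed name, and a real implementation would synthesize a proper subject line instead of the placeholder summary):

```javascript
// Hypothetical: derive a PR title prefix from "hash subject" log lines,
// using the single commit as-is or the dominant conventional-commit type.
function prTitle(logLines) {
  const subjects = logLines.map((l) => l.replace(/^\S+\s+/, '')); // strip the hash
  if (subjects.length === 1) return subjects[0];                  // single commit: use as-is
  const counts = {};
  for (const s of subjects) {
    const m = s.match(/^(\w+)(\(.+\))?:/);   // e.g. "feat:" or "fix(scope):"
    const type = m ? m[1] : 'chore';
    counts[type] = (counts[type] || 0) + 1;
  }
  const dominant = Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
  return `${dominant}: summarize ${subjects.length} commits`;     // placeholder subject
}
```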
### File Analysis
```bash
git diff origin/<base>..HEAD --stat
git diff origin/<base>..HEAD --name-only
```
Categorize changed files: source, tests, docs, config, migrations.
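One possible categorizer over the `--name-only` output (a hedged sketch; the patterns are assumptions about typical repo layout, not rules from this command file):

```javascript
// Hypothetical categorizer for `git diff --name-only` paths.
function categorize(p) {
  if (/\.(test|spec)\.[jt]sx?$|^tests?\//.test(p)) return 'test';
  if (/^docs\/|\.md$/.test(p)) return 'docs';
  if (/migrations?\//.test(p)) return 'migration';
  if (/\.(json|ya?ml|toml)$|^\./.test(p)) return 'config';  // config files and dotfiles
  return 'source';
}
```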
### Planning Artifacts
Check for related artifacts produced by `/plan-prd`, `/plan`, or the legacy PRP workflow:
- `.claude/prds/` — PRDs this PR implements a milestone of
- `.claude/plans/` — Plans executed by this PR
- `.claude/PRPs/prds/` — legacy PRP PRDs
- `.claude/PRPs/plans/` — legacy PRP implementation plans
- `.claude/PRPs/reports/` — legacy PRP implementation reports
Reference these in the PR body if they exist.
---
## Phase 3 — PUSH
```bash
git push -u origin HEAD
```
If push fails due to divergence:
```bash
git fetch origin
git rebase origin/<base>
git push -u origin HEAD
```
If rebase conflicts occur, stop and inform the user.
---
## Phase 4 — CREATE
### With Template
If a PR template was found in Phase 2, fill in each section using the commit and file analysis. Preserve all template sections — leave sections as "N/A" if not applicable rather than removing them.
### Without Template
Use this default format:
```markdown
## Summary
<1-2 sentence description of what this PR does and why>
## Changes
<bulleted list of changes grouped by area>
## Files Changed
<table or list of changed files with change type: Added/Modified/Deleted>
## Testing
<description of how changes were tested, or "Needs testing">
## Related Issues
<linked issues with Closes/Fixes/Relates to #N, or "None">
```
### Create the PR
```bash
gh pr create \
--title "<PR title>" \
--base <base-branch> \
--body "<PR body>"
# Add --draft if the --draft flag was parsed from $ARGUMENTS
```
---
## Phase 5 — VERIFY
```bash
gh pr view --json number,url,title,state,baseRefName,headRefName,additions,deletions,changedFiles
gh pr checks --json name,status,conclusion 2>/dev/null || true
```
---
## Phase 6 — OUTPUT
Report to user:
```
PR #<number>: <title>
URL: <url>
Branch: <head> → <base>
Changes: +<additions> -<deletions> across <changedFiles> files
CI Checks: <status summary or "pending" or "none configured">
Artifacts referenced:
- <any PRDs/plans linked in PR body>
Next steps:
- gh pr view <number> --web → open in browser
- /code-review <number> → review the PR
- gh pr merge <number> → merge when ready
```
---
## Edge Cases
- **No `gh` CLI**: Stop with: "GitHub CLI (`gh`) is required. Install: <https://cli.github.com/>"
- **Not authenticated**: Stop with: "Run `gh auth login` first."
- **Force push needed**: If remote has diverged and rebase was done, use `git push --force-with-lease` (never `--force`).
- **Multiple PR templates**: If `.github/PULL_REQUEST_TEMPLATE/` has multiple files, list them and ask user to choose.
- **Large PR (>20 files)**: Warn about PR size. Suggest splitting if changes are logically separable.

docs/ECC-2.0-GA-ROADMAP.md Normal file
View File

@@ -0,0 +1,246 @@
# ECC 2.0 GA Roadmap
This roadmap is the durable repo mirror for the Linear project:
<https://linear.app/ecctools/project/ecc-20-ga-harness-os-security-platform-de2a0ecace6f>
Linear issue creation is currently blocked by the workspace active issue limit,
so the live execution truth is split across:
- the Linear project description, status updates, and milestones;
- this repo document;
- merged PR evidence;
- handoffs under `~/.cluster-swarm/handoffs/`.
## Current Evidence
As of 2026-05-12:
- Public GitHub queues are clean across `everything-claude-code`,
`agentshield`, `JARVIS`, `ECC-Tools`, and `ECC-website`.
- `npm run harness:audit -- --format json` reports 70/70 on current `main`.
- `npm run observability:ready` reports 14/14 readiness on current `main`.
- `docs/architecture/harness-adapter-compliance.md` maps Claude Code, Codex,
OpenCode, Cursor, Gemini, Zed-adjacent, dmux, Orca, Superset, Ghast, and
terminal-only support to install paths, verification commands, and risk
notes.
- `npm run harness:adapters -- --check` validates that the public adapter
matrix still matches the source data in
`scripts/lib/harness-adapter-compliance.js`.
- `docs/releases/2.0.0-rc.1/publication-readiness.md` gates GitHub release,
npm dist-tag, Claude plugin, Codex plugin, OpenCode package, billing, and
announcement publication on fresh evidence fields.
- `docs/legacy-artifact-inventory.md` records that no `_legacy-documents-*`
directories exist in the current checkout, inventories the two sibling
workspace-level `_legacy-documents-*` repos as sanitized extraction sources,
and classifies `legacy-command-shims/` as an opt-in archive/no-action
surface.
- `docs/stale-pr-salvage-ledger.md` records stale PR salvage outcomes,
skipped PRs, superseded work, and the remaining #1687 translator/manual
review tail.
- AgentShield PR #53 fixed two context-rule false positives and closed the
remaining AgentShield issues.
- AgentShield PR #55 added GitHub Action organization-policy enforcement with
`policy` / `fail-on-policy` inputs, `policy-status` /
`policy-violations` outputs, job-summary evidence, and policy violation
annotations.
- AgentShield PR #56 added SARIF/code-scanning output for organization-policy
violations as `agentshield-policy/*` results.
- AgentShield PR #57 added OSS, team, enterprise, regulated,
high-risk-hooks/MCP, and CI-enforcement policy-pack presets plus
`agentshield policy init --pack`.
- AgentShield PR #58 added MCP package provenance fields and report-level
counts for npm vs git, pinned vs unpinned, known-good, and registry-backed
supply-chain evidence.
- AgentShield PR #59 added self-contained HTML executive summaries with risk
posture, critical/high priority findings, category exposure, README/API
docs, built-CLI smoke validation, and 1,704-test coverage.
- AgentShield PR #60 added category-level built-in corpus benchmark output,
a `readyForRegressionGate` signal, terminal `--corpus` category coverage,
README/API docs, built-CLI smoke validation, and 1,705-test coverage.
- ECC PR #1778 recovered the useful stale #1413 network/homelab architect-agent
concepts.
- ECC-Tools PR #26 added cost/token-risk predictive follow-ups for AI routing,
Claude/model calls, usage limits, quota, and analysis-budget changes that lack
budget, quota, rate-limit, or cost validation evidence.
- ECC-Tools PR #27 added the non-blocking `ECC Tools / PR Risk Taxonomy`
check-run for Security Evidence, Harness Drift, Install Manifest Integrity,
CI/CD Recommendation, Cost/Token Risk, and Agent Config Review buckets.
- ECC-Tools PR #28 added billing readiness audit checks for plan limits,
entitlements, Marketplace plan shape, subscription source, seats, and
overage metering.
- ECC-Tools PR #29 added deterministic Reference Set Validation signals for
analyzer, skill, agent, command, and harness-guidance changes that lack eval,
golden trace, benchmark, or reference-set evidence.
- ECC-Tools PR #30 capped follow-up generation at three new GitHub issues and
one draft PR per run, emitting the remaining deterministic findings as a
project sync backlog for Linear/status tracking without flooding trackers.
## Operating Rules
- Keep public PRs and issues below 20, with zero as the preferred release-lane
target.
- Maintain 70/70 harness audit and 14/14 observability readiness after every
GA-readiness batch.
- Do not publish release or social announcements until the GitHub release,
npm/package state, billing state, and plugin submission surfaces are verified
with fresh evidence.
- Do not treat closed stale PRs as discarded. Pair each cleanup batch with a
salvage pass: inspect the closed diffs, port useful compatible work on
maintainer-owned branches, and credit the source PR.
- Do not create new Linear issues until the active issue limit is cleared.
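The queue rule above can be expressed as a small gate (a sketch, not the actual release tooling; in practice the counts would come from `gh pr list` and `gh issue list`):

```shell
# Fail when open PRs plus open issues reach the cap (default 20).
queue_ok() {
  local open_prs="$1" open_issues="$2" cap="${3:-20}"
  [ $((open_prs + open_issues)) -lt "$cap" ]
}

# Example with hard-coded counts; a real gate would use, e.g.:
#   open_prs=$(gh pr list --state open --json number --jq length)
if queue_ok 4 9; then
  echo "queue within release-lane limit"
else
  echo "queue above limit: run a cleanup/salvage batch" >&2
fi
```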
## Reference Pressure
The GA roadmap is informed by these reference surfaces:
- `stablyai/orca` and `superset-sh/superset` for worktree-native parallel agent
UX, review loops, and workspace presets.
- `standardagents/dmux` and `aidenybai/ghast` for terminal/worktree
multiplexing, session grouping, and lifecycle hooks.
- `jarrodwatts/claude-hud` for always-visible status, tool, agent, todo, and
context telemetry.
- `stanford-iris-lab/meta-harness` and `greyhaven-ai/autocontext` for
evaluation-driven harness improvement, traces, playbooks, and promotion
loops.
- `NousResearch/hermes-agent` for operator shell, gateway, memory, skills, and
multi-platform command patterns.
- `anthropics/claude-code`, active `sst/opencode` / `anomalyco/opencode`, Zed,
Codex, Cursor, Gemini, and terminal-only workflows for adapter expectations.
The output of this reference work should be concrete ECC deltas, not a second
strategy memo.
## Milestones
### 1. GA Release, Naming, And Plugin Publication Readiness
Target: 2026-05-24
Acceptance:
- Naming matrix covers product name, npm package, Claude plugin, Codex plugin,
OpenCode package, marketplace metadata, docs, and migration copy.
- GitHub release, npm dist-tag, plugin publication, and announcement gates are
mapped to fresh command evidence.
- Release notes, migration guide, known issues, quickstart, X thread, LinkedIn
post, and GitHub release copy are ready but not posted before release URLs
exist.
- Plugin publication/contact paths for Claude and Codex are documented with
owner, required artifacts, and submission status.
### 2. Harness Adapter Compliance Matrix And Scorecard Onramp
Target: 2026-05-31
Acceptance:
- Adapter matrix covers Claude Code, Codex, OpenCode, Cursor, Gemini,
Zed-adjacent surfaces, dmux, Orca, Superset, Ghast, and terminal-only use.
- Each adapter has supported assets, unsupported surfaces, install path,
verification command, and risk notes.
- Harness audit remains 70/70 and gains a public onramp that explains how teams
use the scorecard.
- Reference findings are converted into concrete adapter, observability, or
operator-surface deltas.
### 3. Local Observability, HUD/Status, And Session Control Plane
Target: 2026-06-07
Acceptance:
- Observability readiness remains 14/14 and is backed by JSONL traces, status
snapshots, risk ledger, and exportable handoff contracts.
- HUD/status model covers context, tool calls, active agents, todos, checks,
cost, risk, and queue state.
- Worktree/session controls cover create, resume, status, stop, diff, PR,
merge queue, and conflict queue.
- Linear/GitHub/handoff sync model is explicit enough for real-time progress
tracking.
### 4. Self-Improving Harness Evaluation Loop
Target: 2026-06-10
Acceptance:
- Scenario specs, verifier contracts, traces, playbooks, and regression gates
are documented and at least one read-only prototype exists.
- The loop separates observation, proposal, verification, and promotion.
- Team and individual setups can be scored and improved without blindly
mutating configs.
- RAG/reference-set design covers vetted ECC patterns, team history, CI
failures, diffs, review outcomes, and harness config quality.
### 5. AgentShield Enterprise Security Platform
Target: 2026-06-14
Acceptance:
- Formal policy schema exists for org baselines, exceptions, owners,
expiration, severity, and audit trails.
- SARIF/code-scanning output is implemented and tested.
- GitHub Action policy gates expose organization policy status and violation
counts for branch-protection and CI evidence.
- Policy packs are defined for OSS, team, enterprise, regulated, high-risk
hooks/MCP, and CI enforcement.
- Supply-chain intelligence covers MCP package provenance and has an extension
path for npm/pip reputation, CVEs, typosquats, and dependency risk.
- Prompt-injection corpus and regression benchmark are ready for continuous
rule hardening with category-level coverage and regression-gate output.
- Enterprise reports include JSON plus self-contained HTML executive output
with risk posture, priority findings, and category exposure.
### 6. ECC Tools Billing, Deep Analysis, PR Checks, And Linear Sync
Target: 2026-06-21
Acceptance:
- Native GitHub Marketplace billing announcement is backed by verified
implementation and docs.
- Internal billing readiness audit covers plan limits, seats, entitlement
mapping, Marketplace plan shape, subscription state, overage hooks, and
failure modes.
- Deep analyzer covers diff patterns, CI/CD workflows, dependency/security
surface, PR review behavior, failure history, harness config, skill quality,
and reference-set/RAG comparison.
- PR check suite taxonomy includes Security Evidence, Harness Drift, Install
Manifest Integrity, CI/CD Recommendation, Cost/Token Risk, and Agent Config
Review.
- Cost/token-risk predictive follow-ups flag AI routing, model-call, usage,
quota, and budget changes when budget evidence is missing.
- Reference-set validation follow-ups flag analyzer, skill, agent, command, and
harness-guidance changes that lack eval, golden trace, benchmark, or
maintained reference-set evidence.
- Linear sync design maps findings to issues/status without flooding the
workspace.
- Follow-up generation caps automatic GitHub object creation and keeps overflow
findings in a copy-ready project sync backlog.
### 7. Legacy Audit And Stale-Work Salvage Closure
Target: 2026-06-15
Acceptance:
- Legacy directories and orphaned handoffs are inventoried.
- Each useful artifact is marked landed, Linear/project-tracked, salvage
branch, or archive/no-action.
- Workspace-level legacy repos are mined only through sanitized maintainer
branches; raw context, secrets, personal paths, local settings, and private
drafts are never imported wholesale.
- Stale PR salvage policy stays in force: close stale/conflicted PRs first,
record a salvage ledger item, then port useful compatible content on
maintainer branches with attribution.
- #1687 localization leftovers are handled only by translator/manual review,
not blind cherry-pick.
## Next Engineering Slices
1. Decide whether AgentShield PDF export adds value beyond the merged HTML
executive report and corpus benchmark output.
2. Extend ECC Tools deep analysis and Linear/project sync without flooding the
workspace.

---
# ECC 2.0 Reference Architecture
Research summary from competitor/reference analysis (2026-03-22).
Current execution mirror:
[`ECC-2.0-GA-ROADMAP.md`](ECC-2.0-GA-ROADMAP.md).
## Competitive Landscape
This document turns the May 2026 reference sweep into concrete ECC backlog
shape. It is not a second strategy memo: every reference pressure below should
land as an adapter, check, observable signal, security policy, PR review
surface, or release-readiness gate.
| Project | Stars | Language | Type | Multi-Agent | Worktrees | Terminal-native |
|---------|-------|----------|------|-------------|-----------|-----------------|
| **ECC 2.0** | - | Rust | TUI | Yes | Yes | **Yes (SSH)** |
| superset-sh/superset | 7.7K | TypeScript | Electron | Yes | Yes | No (desktop) |
| standardagents/dmux | 1.2K | TypeScript | TUI (Ink) | Yes | Yes | Yes |
| opencode-ai/opencode | 11.5K | Go | TUI | No | No | Yes |
| smtg-ai/claude-squad | 6.5K | Go | TUI | Yes | Yes | Yes |
## Three-Layer Architecture
```text
┌─────────────────────────────────┐
│ TUI Layer (ratatui)             │  User-facing dashboard
│ Panes, diff viewer, hotkeys     │  Communicates via Unix socket
├─────────────────────────────────┤
│ Runtime Layer (library)         │  Workspace runtime, agent registry,
│ State persistence, detection    │  status detection, SQLite
├─────────────────────────────────┤
│ Daemon Layer (process)          │  Persistent across TUI restarts
│ Terminal sessions, git ops,     │  PTY management, heartbeats
│ agent process supervision       │
└─────────────────────────────────┘
```
## Reference Baseline
Snapshot date: 2026-05-12.
| Reference | Primary pressure on ECC 2.0 | Concrete ECC delta |
| --- | --- | --- |
| [`stablyai/orca`](https://github.com/stablyai/orca) | Worktree-native multi-agent IDE with terminals, source control, GitHub integration, SSH, notifications, design/browser mode, account switching, and per-worktree context. | Treat worktree lifecycle, review state, notification state, and account/provider identity as first-class adapter signals. |
| [`superset-sh/superset`](https://github.com/superset-sh/superset) | Desktop AI-agent workspace with parallel execution, worktree isolation, diff review, workspace presets, and broad CLI-agent compatibility. | Add workspace preset taxonomy and make ECC2 session/worktree state exportable enough for external editors to consume. |
| [`standardagents/dmux`](https://github.com/standardagents/dmux) | Tmux/worktree orchestration, lifecycle hooks, multi-select agent control, smart merging, file browser, notifications, and cleanup. | Add lifecycle-hook coverage to the harness matrix and define merge/conflict queue events. |
| [`aidenybai/ghast`](https://github.com/aidenybai/ghast) | Native macOS terminal multiplexer with cwd-grouped workspaces, panes, tabs, drag/drop, search, and notifications. | Preserve terminal-native ergonomics while adding cwd/session grouping and searchable handoff/session records. |
| [`jarrodwatts/claude-hud`](https://github.com/jarrodwatts/claude-hud) | Always-visible Claude Code statusline for context, tools, agents, todos, and transcript-backed activity. | Formalize the ECC HUD/status payload for context, cost, tool calls, active agents, todos, queue state, checks, and risk. |
| [`stanford-iris-lab/meta-harness`](https://github.com/stanford-iris-lab/meta-harness) | Automated search over task-specific harness design: what to store, retrieve, and show. | Split ECC improvement loops into scenario spec, proposer trace, verifier result, and promoted playbook. |
| [`greyhaven-ai/autocontext`](https://github.com/greyhaven-ai/autocontext) | Recursive harness improvement using traces, reports, artifacts, datasets, playbooks, and role-separated evaluators. | Store reusable traces and playbooks before mutating installed harness assets. |
| [`NousResearch/hermes-agent`](https://github.com/NousResearch/hermes-agent) | Self-improving operator shell with memories, skills, scheduler, gateways, subagents, terminal backends, and migration tooling. | Keep ECC portable across local, SSH, container, and hosted terminal backends without hiding the underlying commands. |
| [`anthropics/claude-code`](https://github.com/anthropics/claude-code), [`sst/opencode`](https://github.com/sst/opencode), Zed, Codex, Cursor, Gemini | Different agent harnesses expose different hooks, plugin surfaces, session stores, config files, and review loops. | Maintain a public adapter compliance matrix instead of treating one harness as the canonical UX. |
| Local Claude Code source review | Session, tool, permission, hook, remote, analytics, task, and context-suggestion surfaces are more structured than the public CLI UX suggests. | Model status and risk events around session messages, permission requests, tool progress, context pressure, and summary state. |
## Architecture Shape
ECC 2.0 should be a harness operating system, not only a catalog of commands,
agents, and skills.
```text
┌──────────────────────────────────────────────────────────────┐
│ Operator Surface │
│ CLI, plugin, TUI, HUD/statusline, release gates, PR checks │
├──────────────────────────────────────────────────────────────┤
│ Harness Adapter Layer │
│ Claude Code, Codex, OpenCode, Cursor, Gemini, Zed, dmux, │
│ Orca, Superset, Ghast, terminal-only │
├──────────────────────────────────────────────────────────────┤
│ Worktree, Session, And Queue Runtime │
│ worktrees, panes, sessions, todos, checks, merge/conflict │
│ queues, notification state, ownership, handoff exports │
├──────────────────────────────────────────────────────────────┤
│ Observability And Evaluation Loop │
│ JSONL traces, status snapshots, risk ledger, harness audit, │
│ scenario specs, verifiers, promoted playbooks, RAG sets │
├──────────────────────────────────────────────────────────────┤
│ Security And Commercial Platform │
│ AgentShield policies/SARIF, ECC Tools checks, billing, │
│ Linear/GitHub sync, enterprise reports │
└──────────────────────────────────────────────────────────────┘
```
## Patterns to Adopt
### From Superset (Electron, 7.7K stars)
- **Workspace Runtime Registry** — trait-based abstraction with capability flags
- **Persistent daemon terminal** — sessions survive restarts via IPC
- **Per-project mutex** for git operations (prevents race conditions)
- **Port allocation** per workspace for dev servers
- **Cold restore** from serialized terminal scrollback
### From dmux (Ink TUI, 1.2K stars)
- **Worker-per-pane status detection** — fingerprint terminal output + LLM classification
- **Agent Registry** — centralized agent definitions (install check, launch cmd, permissions)
- **Retry strategies** — different policies for destructive vs read-only operations
- **PaneLifecycleManager** — exclusive locks preventing concurrent pane races
- **Lifecycle hooks** — worktree_created, pre_merge, post_merge
- **Background cleanup queue** — async worktree deletion
## ECC 2.0 Advantages
- Terminal-native (works over SSH, unlike Superset)
- Integrates with 116-skill ecosystem
- AgentShield security scanning
- Self-improving skill evolution (continuous-learning-v2)
- Rust single binary (3.4MB, no runtime deps)
- First Rust-based agentic IDE TUI in open source
## Reference-To-Backlog Map
### Worktree And Session Orchestration
Adopt from Orca, Superset, dmux, and Ghast:
- Worktree lifecycle events: create, resume, pause, stop, diff, review, PR,
merge-ready, conflict, stale, close, salvage.
- Session grouping by repo, branch, cwd, task, owner, and harness.
- Workspace presets for release lane, PR triage lane, docs lane, security lane,
and test-writer lane.
- Notifications for blocked CI, dirty worktrees, merge conflicts, stale review,
and finished autonomous runs.
- Review loops that can annotate diffs and PRs without taking ownership away
from maintainers.
Repo work:
- `everything-claude-code`: extend the adapter compliance matrix and public
scorecard onramp.
- `ecc2`: surface session/worktree state through a stable local payload before
adding hosted telemetry.
- `ECC-Tools`: consume the same lifecycle events for PR checks, issue routing,
and Linear sync.
Verification:
- `npm run harness:audit -- --format json`
- `npm run observability:ready`
- targeted adapter matrix tests once the matrix moves from docs to data
### HUD, Status, And Observability
Adopt from Claude HUD and the Claude Code source review:
- Context pressure: usage, compaction risk, large-result warnings, and summary
state.
- Tool activity: active tool, recent tools, duration, risky operations, and
permission requests.
- Agent activity: active subagents, delegated task, branch/worktree, and wait
state.
- Queue activity: open PRs/issues, CI state, stale/conflict batches, review
state, and closed-stale salvage backlog.
- Cost/risk: token cost estimate, destructive-operation risk, hook/MCP risk,
and security scan state.
Repo work:
- Keep `docs/architecture/observability-readiness.md` as the operator-facing
readiness gate.
- Define a versioned HUD/status JSON contract that both ECC2 and ECC Tools can
consume.
- Add sample exports from `loop-status`, `session-inspect`, harness audit, and
risk ledger into a fixture directory before building visual UI.
Verification:
- `npm run observability:ready`
- fixture validation for every status payload
- cross-platform smoke test for commands that read session history
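A fixture for the HUD/status contract could look like the following; every field name here is illustrative, not a published schema:

```shell
# Write a sample status payload, then run a cheap fixture check that
# each top-level section of the contract is present.
sample=/tmp/hud-status.sample.json
cat > "$sample" <<'EOF'
{
  "version": 1,
  "context": { "used_tokens": 83200, "compaction_risk": "low" },
  "tools": { "active": "Bash", "recent": ["Read", "Edit"] },
  "agents": [{ "name": "test-writer", "worktree": "wt/tests", "state": "waiting" }],
  "todos": { "open": 3, "done": 7 },
  "queue": { "open_prs": 2, "ci": "green", "conflicts": 0 },
  "cost": { "estimated_usd": 0.42 },
  "risk": { "destructive_ops": 0, "hook_mcp_flags": 0 }
}
EOF
missing=0
for key in version context tools agents todos queue cost risk; do
  grep -q "\"$key\"" "$sample" || { echo "missing field: $key" >&2; missing=1; }
done
[ "$missing" -eq 0 ] && echo "HUD fixture shape OK"
```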
### Self-Improving Harness Loop
Adopt from Meta-Harness, Autocontext, and Hermes Agent:
- Separate the loop into observation, proposal, verification, promotion, and
rollback.
- Store every proposed improvement as trace plus artifact, not only as a final
changed file.
- Promote playbooks only after a verifier proves that they improve a scenario
without widening blast radius.
- Use RAG/reference sets for vetted ECC patterns, team history, CI failures,
review outcomes, harness config quality, and security decisions.
Repo work:
- `everything-claude-code`: document scenario specs, verifier contracts, and
playbook promotion rules.
- `ECC-Tools`: map analyzer findings to PR comments, check runs, and Linear
tasks without flooding the workspace.
- `agentshield`: feed prompt-injection and config-risk findings into regression
suites.
Verification:
- read-only prototype that emits a trace, report, candidate playbook, and
verifier result
- regression fixture proving a bad proposal is rejected
### AgentShield Enterprise Security Platform
AgentShield should move from useful scanner to enterprise security platform.
Backlog shape:
- Policy schema for org baseline, rule severity, owner, exception, expiration,
evidence, and audit trail.
- SARIF output for GitHub code scanning.
- Policy packs for OSS, team, enterprise, regulated, high-risk hooks/MCP, and
CI enforcement.
- Supply-chain intelligence for MCP packages, npm/pip provenance, CVEs,
typosquats, and dependency reputation.
- Prompt-injection corpus and regression benchmark.
- JSON plus executive HTML/PDF report output.
Verification:
- schema unit tests
- SARIF fixture tests
- policy-pack golden tests
- false-positive regression tests from the public issue history
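A minimal SARIF fixture test could assert the `agentshield-policy/*` rule-id convention recorded in the roadmap evidence; the rule id and message below are invented for illustration:

```shell
fixture=/tmp/agentshield-policy.sarif
cat > "$fixture" <<'EOF'
{
  "version": "2.1.0",
  "runs": [{
    "tool": { "driver": { "name": "agentshield" } },
    "results": [{
      "ruleId": "agentshield-policy/example-violation",
      "level": "error",
      "message": { "text": "illustrative policy violation" }
    }]
  }]
}
EOF
# Fixture test: every emitted rule id must carry the policy prefix.
if grep -o '"ruleId": *"[^"]*"' "$fixture" | grep -v 'agentshield-policy/'; then
  echo "unprefixed rule id found" >&2
else
  echo "all policy rule ids prefixed"
fi
```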
### ECC Tools Commercial And Review Platform
ECC Tools should become the GitHub-native layer for billing, deep analysis,
PR checks, and Linear progress tracking.
Backlog shape:
- Native GitHub Marketplace billing audit before any payments announcement:
plans, seats, org/account mapping, subscription state, overage behavior,
downgrade/cancel behavior, and failure modes.
- Deep analyzer comparable in scope to the useful parts of GitGuardian,
Dependabot, CodeRabbit, and Greptile: security evidence, dependency risk,
CI/CD recommendations, PR review behavior, config quality, token/cost risk,
and harness drift.
- RAG/reference set over vetted ECC patterns, historical PR outcomes,
dependency advisories, CI failures, review decisions, and team-specific
conventions.
- Linear sync that maps findings to project status, milestone evidence, and
owner-ready issues without exhausting issue limits.
Verification:
- check-run fixture tests
- billing webhook replay tests
- analyzer golden PR fixtures
- Linear sync dry-run fixture
### Closed-Stale Salvage Lane
Closing stale PRs keeps the public queue usable, but useful work should not be
lost because a contributor no longer has time to rebase.
Execution rule:
1. Close stale, conflicted, or obsolete PRs with a clear courtesy comment.
2. Record them in a salvage ledger with source PR, author, reason closed,
useful files/concepts, risk, and recommended maintainer action.
3. After the cleanup batch, inspect each closed PR diff manually.
4. Cherry-pick only when the patch still applies cleanly and preserves current
architecture. Otherwise reimplement the useful idea in a fresh maintainer
branch.
5. Preserve attribution in the commit body or PR body.
6. Comment back on the source PR when useful work lands, linking the maintainer
PR or merged commit.
7. Mark the ledger item as landed, superseded, Linear-tracked, or no-action.
Required safeguards:
- Never blind cherry-pick generated churn, bulk localization, or dependency
major-version changes.
- Prefer small maintainer PRs over one salvage megabranch.
- Run the same validation gates as normal code, docs, or catalog changes.
- Keep contributor credit even when the final implementation is rewritten.
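Steps 4 and 5 of the execution rule might look like this in practice (the PR number comes from the #1413 salvage noted in the roadmap; the author handle, branch, and message are hypothetical):

```shell
# Build an attributed commit body for salvaged work. Echoed here; a
# real pass would use it with `git commit` on a maintainer branch.
src_pr=1413
src_author="original-contributor"   # hypothetical handle
commit_body="Port the useful architect-agent concepts from the closed PR.

Salvaged-from: PR #${src_pr}
Co-authored-by: ${src_author} <${src_author}@users.noreply.github.com>"
printf '%s\n' "$commit_body"
# Then, on a fresh maintainer branch:
#   git switch -c salvage/pr-1413-architect-agents
#   git commit -m "feat: port architect-agent concepts" -m "$commit_body"
```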
## Near-Term Implementation Order
1. Extend the harness adapter matrix and public scorecard onramp.
2. Add the release/name/plugin publication checklist with evidence fields.
3. Define the HUD/status JSON contract and fixture directory.
4. Start AgentShield policy schema plus SARIF fixtures.
5. Audit ECC Tools billing and check-run surfaces.
6. Inventory legacy folders and closed-stale PRs into the salvage ledger.
7. Port useful stale work in small attributed maintainer PRs.
## Non-Goals
- Hosted telemetry before the local event model is useful and testable.
- Automatic mutation of user harness configs without verifier evidence.
- Treating any one agent harness as the canonical interface.
- Release or payments announcements before command, package, marketplace, and
billing evidence is fresh.

---
File: `docs/PLAN-PRD-PATTERN.md` (new file, 154 lines)
# Plan-PRD Pattern: Markdown-Staged Planning Flow
A lightweight, SDLC-aligned planning workflow where each phase of the lifecycle produces a committable markdown **staging file** that the next command consumes.
> Short version: `/plan-prd` writes a PRD, `/plan` writes a plan, the `tdd-workflow` skill implements it, and `/pr` ships it. Each arrow is a file on disk, not a conversation in memory.
## Feature: Markdown Staging Files
Every planning artifact is a plain `.md` file under `.claude/`:
```
.claude/
prds/ # Product Requirements Documents from /plan-prd
plans/ # Implementation plans from /plan
reviews/ # Code review artifacts from /code-review
```
These files are:
- **Plain markdown** — readable by humans, diffable in PRs, grep-able at CLI.
- **Committable** — check them in alongside code so the intent travels with the implementation.
- **Composable** — each command accepts the previous stage's file as its `$ARGUMENTS`, so the toolchain composes via paths rather than in-context state.
- **Resumable** — close the session, open a new one tomorrow, pass the file path back in.
## Flow
```
┌───────────────────────────┐
│ /plan-prd "<idea>"        │  Requirements phase
│ → .claude/prds/X.prd.md   │  Problem · Users · Hypothesis · Scope
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│ /plan <prd-path>          │  Design phase
│ → .claude/plans/X.plan.md │  Patterns · Files · Tasks · Validation
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│ tdd-workflow skill        │  Implementation phase
│ → code + tests            │  Test-first, minimal diff
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│ /pr                       │  Delivery phase
│ → GitHub PR               │  Links back to PRD + plan
└───────────────────────────┘
```
Each box is a **gate**. You can:
- Stop between gates — the artifact persists.
- Restart from any gate using the artifact path.
- Skip gates for small work — feed `/plan` free-form text and ignore `/plan-prd`.
- Run a gate standalone — `/plan "refactor X"` produces a conversational plan with no artifact.
## Why `/plan-prd` Is Additional to `/plan`
They answer different questions. Mixing them causes scope creep.
| Command | Answers | SDLC Phase | Artifact |
|---|---|---|---|
| `/plan-prd` | *What problem? For whom? How do we know we're done?* | Requirements | `.claude/prds/{name}.prd.md` |
| `/plan` | *What files, patterns, and tasks satisfy the requirement?* | Design + Implementation strategy | `.claude/plans/{name}.plan.md` (PRD mode) or inline (text mode) |
### Why not combine them?
- **Separation of concerns.** PRDs ask *why*; plans ask *how*. Bundling them creates one oversized command that does both poorly, as the old `/prp-prd` + `/prp-plan` pair demonstrated (8-phase interrogation with implementation-phase tables mixed into requirements).
- **Different audiences.** A stakeholder reviewing a PRD does not care about file paths or type-check commands. An engineer reading a plan does not need the market-research phase.
- **Different lifespans.** A PRD can remain stable while its plan is rewritten multiple times as implementation assumptions change.
- **Optional step.** Many changes (bug fixes, small refactors, single-file additions) don't need a PRD. `/plan` alone is enough. Forcing a PRD on every change is bureaucracy.
### When to use each
Use `/plan-prd` when:
- Scope is unclear or contested.
- Multiple stakeholders need to align on the problem before solutioning.
- The change is large enough that writing down the hypothesis is cheaper than relitigating scope mid-implementation.
Use `/plan` directly when:
- Requirements are already clear (a bug report, a scoped refactor, a known migration).
- The work is small enough that a conversational plan + confirmation gate is sufficient.
- You already have a PRD — pass it to `/plan` and skip `/plan-prd`.
## Usage
### Full flow (feature with unclear scope)
```bash
# 1. Draft the PRD
/plan-prd "Per-user rate limits on the public API"
# → .claude/prds/per-user-rate-limits.prd.md created
# Answer the framing questions, provide evidence, define hypothesis and scope.
# 2. Pick the next pending milestone and produce a plan
/plan .claude/prds/per-user-rate-limits.prd.md
# → .claude/plans/per-user-rate-limits.plan.md created
# The plan includes patterns to mirror, files to change, and validation commands.
# The selected row in the PRD's Delivery Milestones table is updated to `in-progress`.
# 3. Implement test-first
Use the tdd-workflow skill
# 4. Open the PR
/pr
# → PR body auto-references .claude/prds/... and .claude/plans/...
```
### Quick flow (scope already clear)
```bash
/plan "Add retry with exponential backoff to the notifier"
# Conversational planning, no artifact.
# Confirm, then use the tdd-workflow skill.
```
### Reference an existing PRD from elsewhere
```bash
# PRD was written by someone else, lives in your repo
/plan docs/rfcs/0042-rate-limiting.prd.md
```
`/plan` detects any `.prd.md` path and switches to artifact mode, parsing the Delivery Milestones table.
## Why staging files beat in-context state
- **Transferable**: drop the PRD path into a fresh session and you're caught up — no replaying a long conversation.
- **Auditable**: the PR reviewer sees *what you intended* next to *what you built*.
- **Versioned**: the staging file evolves in git history, same as code.
- **Machine-parseable**: `/plan` programmatically picks the next pending milestone; `/pr` programmatically links artifacts in the PR body. No prompt engineering required.
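The machine-parseable claim is concrete: picking the next pending milestone from a PRD table needs nothing more than line tools. A sketch, with an illustrative table layout:

```shell
# Sample PRD fragment with a Delivery Milestones table.
cat > /tmp/sample.prd.md <<'EOF'
| Milestone | Status |
|---|---|
| M1: token bucket core | done |
| M2: per-user limits | pending |
| M3: admin overrides | pending |
EOF
# Print the first milestone whose Status cell is `pending`.
awk -F'|' '$3 ~ /^ *pending *$/ { gsub(/^ +| +$/, "", $2); print $2; exit }' /tmp/sample.prd.md
# → M2: per-user limits
```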
## Related commands
- `/plan-prd` — requirements (this pattern entry point).
- `/plan` — planning (consumes PRDs or free-form text).
- `tdd-workflow` skill — test-first implementation.
- `/pr` — open a PR that references PRDs and plans.
- `/code-review` — reviews local diffs or PRs; auto-detects `.claude/prds/` and `.claude/plans/` as context.
## Compatibility
This pattern adds ECC-native staging-file commands alongside the existing `prp-*` command set. The legacy PRP commands remain available for deeper PRP workflows and for users who already have `.claude/PRPs/` artifacts.
- `/plan-prd` is the lean requirements entry point for `.claude/prds/`.
- `/plan` can consume `.prd.md` files and produce `.claude/plans/` artifacts without requiring the legacy PRP directory layout.
- `/pr` is the ECC-native PR creation command and can reference `.claude/prds/` and `.claude/plans/`.
- `/prp-prd`, `/prp-plan`, `/prp-implement`, `/prp-commit`, and `/prp-pr` remain valid legacy/deep workflow commands.

---
The goal is to keep the durable parts of agentic work in one repo:
Claude Code, Codex, OpenCode, Cursor, Gemini, and future harnesses should adapt those assets at the edge instead of requiring a new workflow model for every tool.
For the operator-facing support matrix and scorecard workflow, see
[Harness Adapter Compliance Matrix](harness-adapter-compliance.md).
## Portability Model
| Surface | Shared Source | Harness Adapter | Current Status |

View File

@@ -0,0 +1,105 @@
# Harness Adapter Compliance Matrix
This matrix is the public onramp for teams that want to use ECC across more
than one coding harness. It turns the cross-harness architecture into a
practical scorecard: what works today, what is instruction-only, what needs an
adapter, and what evidence an operator should collect before trusting a setup.
ECC's durable units stay in shared sources:
- `skills/*/SKILL.md`
- `rules/`
- `commands/`
- `hooks/hooks.json`
- `scripts/hooks/`
- MCP reference configs
- session and observability contracts
Harness-specific files should only adapt loading, event shape, command names,
or platform limits.
## Compliance States
| State | Meaning |
| --- | --- |
| Native | ECC can install or verify the surface directly for this harness. |
| Adapter-backed | ECC has a thin adapter, plugin, or package surface, but parity differs by harness. |
| Instruction-backed | ECC can provide the guidance and files, but the harness does not expose the runtime hook/session surface ECC needs for enforcement. |
| Reference-only | The tool is useful as a design pressure or external runtime, but ECC does not yet ship a direct installer or adapter for it. |
## Matrix
The matrix below is rendered from
`scripts/lib/harness-adapter-compliance.js` and verified by
`npm run harness:adapters -- --check`.
<!-- harness-adapter-compliance:matrix-start -->
| Harness or runtime | State | Supported assets | Unsupported or different surfaces | Install or onramp | Verification command | Risk notes |
| --- | --- | --- | --- | --- | --- | --- |
| Claude Code | Native | Claude plugin assets; skills; commands; hooks; MCP config; local rules; statusline-oriented workflows | Claude-native hooks do not imply parity in other harnesses | `./install.sh --profile minimal --target claude`; Claude plugin install | `npm run harness:audit -- --format json`; `node scripts/session-inspect.js --list-adapters` | Avoid loading every skill by default; keep hooks opt-in and inspectable. |
| Codex | Instruction-backed | `AGENTS.md`; Codex plugin metadata; skills; MCP reference config; command patterns | Native hook enforcement and Claude slash-command semantics are not equivalent | `./install.sh --profile minimal --target codex`; repo-local `AGENTS.md` review | `npm run harness:audit -- --format json` | Treat hooks as policy text unless a native Codex hook surface exists. |
| OpenCode | Adapter-backed | OpenCode package/plugin metadata; shared skills; MCP config; event adapter patterns | Event names, plugin packaging, and command dispatch differ from Claude Code | OpenCode package or plugin surface from this repo | `node tests/scripts/build-opencode.test.js`; `npm run harness:audit -- --format json` | Keep hook logic in shared scripts and adapt only event shape at the edge. |
| Cursor | Adapter-backed | Cursor rules; project-local skills; hook adapter; shared scripts | Cursor hook events and rule loading differ from Claude Code | `./install.sh --profile minimal --target cursor` | `node tests/lib/install-targets.test.js`; `npm run harness:audit -- --format json` | Cursor adapters must preserve existing project rules and avoid silent overwrite. |
| Gemini | Instruction-backed | Gemini project-local instructions; shared skills; rules; compatibility docs | No full ECC hook parity; ecosystem ports must document drift from upstream ECC | `./install.sh --profile minimal --target gemini` | `node tests/lib/install-targets.test.js` | Treat Gemini ports as ecosystem adapters until validated end to end inside Gemini CLI. |
| Zed-adjacent workflows | Instruction-backed | shared skills; `AGENTS.md` style project instructions; verification loops | Zed agent surfaces vary; no first-party ECC installer is shipped today | Manual copy from shared ECC sources until adapter requirements settle | `npm run harness:audit -- --format json` | Do not claim native Zed support before a real adapter and verification path exist. |
| dmux | Adapter-backed | session snapshots; tmux/worktree orchestration status; handoff exports | dmux is an orchestration runtime, not an install target for skills/rules | `node scripts/session-inspect.js --list-adapters`; dmux session target inspection | `node tests/lib/session-adapters.test.js` | Treat dmux events as session/runtime signals, not as a replacement for repo validation. |
| Orca | Reference-only | worktree lifecycle; review state; notification; provider-identity design pressure | No ECC installer or direct adapter today | Use as a comparison target for worktree/session state requirements | `npm run observability:ready` | Do not import product-specific assumptions; convert lessons into ECC event fields. |
| Superset | Reference-only | workspace presets; parallel-agent review loops; worktree isolation design pressure | No ECC installer or direct adapter today | Use as a comparison target for workspace preset taxonomy | `npm run observability:ready` | Keep ECC portable; do not require a desktop workspace to get basic value. |
| Ghast | Reference-only | terminal-native pane grouping; cwd grouping; search; notifications | No ECC installer or direct adapter today | Use as a comparison target for terminal-first session grouping | `node scripts/session-inspect.js --list-adapters` | Preserve terminal ergonomics before adding visual UI assumptions. |
| Terminal-only | Native | skills; rules; commands; scripts; harness audit; observability readiness; handoffs | No external UI, no automatic session control unless scripts are run explicitly | Clone repo; run commands directly; use minimal profile for project installs | `npm run harness:audit -- --format json`; `npm run observability:ready` | This is the fallback contract; every higher-level adapter should degrade to it. |
<!-- harness-adapter-compliance:matrix-end -->
## Scorecard Onramp
Use this sequence before asking ECC to make a team or repo setup more
autonomous:
```bash
npm run harness:adapters -- --check
npm run harness:audit -- --format json
npm run observability:ready
node scripts/session-inspect.js --list-adapters
node scripts/loop-status.js --json --write-dir .ecc/loop-status
```
Read the result as a setup scorecard, not a product badge:
- `harness:adapters -- --check` proves this public matrix still matches the
adapter source data and required evidence fields.
- `harness:audit` scores tool coverage, context efficiency, quality gates,
memory persistence, eval coverage, security guardrails, and cost efficiency.
- `observability:ready` proves the repo still exposes the local status,
session, tool-activity, risk-ledger, and release-onramp signals.
- `session-inspect --list-adapters` shows which session surfaces are actually
inspectable in the current environment.
- `loop-status --json` creates a machine-readable handoff/status payload for
longer autonomous runs.
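The drift check that `harness:adapters -- --check` performs can be pictured roughly like this. This is a simplified sketch, not the real logic (which lives in `scripts/lib/harness-adapter-compliance.js`); only the marker comments match what this repo actually uses:

```javascript
// Simplified sketch of a doc-sync check: extract the block between the
// HTML comment markers and compare it to the freshly rendered table.
const START = '<!-- harness-adapter-compliance:matrix-start -->';
const END = '<!-- harness-adapter-compliance:matrix-end -->';

function matrixInSync(docText, renderedTable) {
  const start = docText.indexOf(START);
  const end = docText.indexOf(END);
  if (start === -1 || end === -1 || end < start) return false;
  const embedded = docText.slice(start + START.length, end).trim();
  return embedded === renderedTable.trim();
}
```

Keeping the table between machine-readable markers is what lets the validator fail CI when a hand edit to the doc diverges from the adapter source data.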
## Data-Backed Scorecard Contract
Each adapter record exposes:
- `id`
- `state`
- `supported_assets`
- `unsupported_surfaces`
- `install_or_onramp`
- `verification_commands`
- `risk_notes`
- `last_verified_at`
- `owner`
- `source_docs`
The validator fails if a public adapter claim has no install path,
verification command, risk note, owner, source doc, or verification date.
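In sketch form, that validation amounts to checking each record for the evidence fields named above. The field names follow the contract; the helper itself is an illustrative sketch, not the shipped validator:

```javascript
// Sketch of the record validator: every public adapter claim must carry
// an install path, verification command, risk note, owner, source doc,
// and verification date.
const REQUIRED = [
  'install_or_onramp',
  'verification_commands',
  'risk_notes',
  'owner',
  'source_docs',
  'last_verified_at',
];

function missingFields(record) {
  return REQUIRED.filter((field) => {
    const value = record[field];
    if (Array.isArray(value)) return value.length === 0;
    return value === undefined || value === null || value === '';
  });
}
```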
## Operating Rules
- Prefer small, additive adapters over harness-specific forks of the same
workflow.
- Do not call a harness native until the adapter has an install path and a
verification command.
- Keep Codex, Gemini, and Zed surfaces honest when enforcement is
instruction-backed rather than runtime-backed.
- Treat reference-only tools as design pressure until ECC has a direct adapter.
- Keep the terminal-only path healthy; it is the portability floor.

View File

@@ -0,0 +1,108 @@
# Legacy Artifact Inventory
This inventory keeps legacy and stale-work cleanup from becoming implicit. Each
artifact should be classified as landed, milestone-tracked, salvage branch, or
archive/no-action before release work treats the queue as clean.
## Classification States
| State | Meaning |
| --- | --- |
| Landed | Useful work has already been ported to current `main` and verified. |
| Milestone-tracked | Useful work remains, but belongs to a named roadmap milestone. |
| Salvage branch | Useful work should be ported through a fresh maintainer branch with attribution. |
| Translator/manual review | Content may be useful, but cannot be safely imported automatically. |
| Archive/no-action | Artifact is intentionally retained or skipped; no active port is planned. |
## Current Repository Scan
As of 2026-05-12, the tracked repo has no `_legacy-documents-*` directories.
Fresh check:
```sh
find . -type d -name '_legacy-documents-*' -print
```
Expected result: no output.
The only tracked legacy directory currently found by filename scan is
`legacy-command-shims/`.
The umbrella ECC workspace also contains sibling legacy git repositories outside
this tracked checkout. These are intentionally inventoried separately because
they can contain raw operator context, local settings, private drafts, or
untracked files that should not be copied into the public repo wholesale.
Fresh workspace-level check from the ECC umbrella directory:
```sh
find .. -maxdepth 1 -type d -name '_legacy-documents-*' -print | sort
```
Expected result:
```text
../_legacy-documents-ecc-context-2026-04-30
../_legacy-documents-ecc-everything-claude-code-2026-04-30
```
## Inventory
| Artifact | State | Evidence | Action |
| --- | --- | --- | --- |
| `_legacy-documents-*` directories | Archive/no-action | No matching directories exist in the tracked checkout as of 2026-05-12. | Re-run the scan before release. If any appear, add each directory to this table before publishing. |
| `legacy-command-shims/` | Archive/no-action | `legacy-command-shims/README.md` states these retired short-name shims are opt-in and no longer loaded by the default plugin command surface. | Keep as an explicit compatibility archive. Do not move these back into the default plugin surface without a migration decision. |
| Closed-stale PR salvage ledger | Landed | `docs/stale-pr-salvage-ledger.md` records useful stale work recovered through maintainer PRs. | Continue using the ledger pattern for future stale closures. |
| #1687 zh-CN localization tail | Translator/manual review | Large safe subsets landed in #1746-#1752; remaining pieces require translator/manual review per salvage ledger. | Do not blindly cherry-pick. Split by docs, commands, agents, and skills if a translator review lane opens. |
## Workspace-Level Legacy Repos
These sibling repositories live outside the tracked `everything-claude-code`
checkout. They are source material for future salvage passes, not installable
release assets.
| Artifact | State | Evidence | Action |
| --- | --- | --- | --- |
| `../_legacy-documents-ecc-everything-claude-code-2026-04-30` | Archive/no-action | Separate legacy checkout on `fix/configure-ecc-skill-copy-paths-1483` at `b78ddbd0`; useful configure-ecc and install-path concepts have been superseded by current install docs and tests. The checkout also has untracked localized project-guidelines examples and a Finder duplicate `skills/social-graph-ranker/SKILL 2.md`. | Do not import wholesale. If configure-ecc copy-root regressions reappear, use this branch only as source-attributed archaeology and port through a fresh maintainer branch. Leave Finder duplicates out of source control. |
| `../_legacy-documents-ecc-context-2026-04-30` | Milestone-tracked | Archived `ECC-context` repo is four commits ahead of its origin and contains context, gameplan, knowledge, marketing, AgentShield, and ECC Tools planning material. It also contains local/private surfaces such as `.env` and local settings. | Keep as a sanitized extraction source for roadmap, launch, AgentShield, and ECC Tools work. Never copy raw context, secrets, personal paths, private settings, or unpublished drafts into this repo. Port only focused, public-safe content with attribution. |
## Workspace Legacy Import Rules
When mining workspace-level legacy repos:
1. Do not read, print, stage, or copy `.env` files, tokens, OAuth secrets,
local settings, personal paths, or private operator context.
2. Do not import raw marketing drafts, gameplans, or chat/context dumps.
3. Extract only focused, public-safe ideas into current docs or code.
4. Attribute the source legacy repo, branch, commit, or stale PR in the new PR.
5. Validate the result with the same tests and release checks as native work.
## Legacy Command Shim Contents
The compatibility archive currently contains 12 retired command shims:
| Shim | Preferred current direction |
| --- | --- |
| `agent-sort.md` | Use maintained command or skill routing where available. |
| `claw.md` | Use maintained `scripts/claw.js` / `npm run claw` surfaces. |
| `context-budget.md` | Use maintained token/context budgeting skills. |
| `devfleet.md` | Use maintained agent/harness orchestration docs and skills. |
| `docs.md` | Use current documentation and release checklist workflows. |
| `e2e.md` | Use maintained E2E testing skills and test scripts. |
| `eval.md` | Use eval-harness and verification-loop skills. |
| `orchestrate.md` | Use maintained orchestration status and worktree scripts. |
| `prompt-optimize.md` | Use prompt-optimizer skill. |
| `rules-distill.md` | Use current rules and skill extraction workflows. |
| `tdd.md` | Use tdd-workflow and language-specific testing skills. |
| `verify.md` | Use verification-loop and package-specific verification skills. |
## Release Rule
Before any GA or rc publication pass:
1. Re-run the `_legacy-documents-*` scan.
2. Re-run the closed-stale salvage ledger check.
3. Confirm every newly discovered legacy artifact is represented in this file.
4. Port useful work through fresh maintainer PRs with source attribution.
5. Leave archive/no-action artifacts out of default install and plugin loading.

View File

@@ -3,6 +3,7 @@
## Repo
- verify local `main` is synced to `origin/main`
- verify `docs/ECC-2.0-GA-ROADMAP.md` reflects the current Linear milestone plan
- verify `docs/HERMES-SETUP.md` is present
- verify `docs/architecture/cross-harness.md` is present
- verify this release directory is committed
@@ -12,6 +13,7 @@
- verify package, plugin, marketplace, OpenCode, and agent metadata stays at `2.0.0-rc.1`
- verify `ecc2/Cargo.toml` stays at `0.1.0` for rc.1; `ecc2/` remains an alpha control-plane scaffold
- complete `publication-readiness.md` with fresh evidence before any GitHub release, npm publish, plugin submission, or announcement post
- update release metadata in one dedicated release-version PR
- run the root test suite
- run `cd ecc2 && cargo test`

View File

@@ -0,0 +1,73 @@
# ECC v2.0.0-rc.1 Publication Readiness
This checklist is the release gate for public publication surfaces. Do not use
it as evidence by itself. Fill the evidence fields with fresh command output or
URLs from the exact commit being released.
## Release Identity Matrix
| Surface | Expected value | Source of truth | Fresh check | Evidence artifact | Owner | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Product name | Everything Claude Code / ECC | `README.md`, `CHANGELOG.md`, release notes | `rg -n "Everything Claude Code" README.md CHANGELOG.md docs/releases/2.0.0-rc.1` | Pending | Release owner | Pending |
| GitHub repo | `affaan-m/everything-claude-code` | Git remote and release URLs | `git remote get-url origin` | Pending | Release owner | Pending |
| Git tag | `v2.0.0-rc.1` | GitHub releases | `gh release view v2.0.0-rc.1 --repo affaan-m/everything-claude-code` | Pending | Release owner | Pending |
| npm package | `ecc-universal` | `package.json` | `node -p "require('./package.json').name"` | Pending | Package owner | Pending |
| npm version | `2.0.0-rc.1` | `VERSION`, `package.json`, lockfiles | `node -p "require('./package.json').version"` | Pending | Package owner | Pending |
| npm dist-tag | `next` for rc, `latest` only for GA | npm registry | `npm view ecc-universal dist-tags --json` | Pending | Package owner | Pending |
| Claude plugin slug | `ecc` / `ecc@ecc` install path | `.claude-plugin/plugin.json`, `.claude-plugin/marketplace.json` | `node tests/hooks/hooks.test.js` | Pending | Plugin owner | Pending |
| Claude plugin manifest | `2.0.0-rc.1`, no unsupported `agents` or explicit `hooks` fields | `.claude-plugin/plugin.json`, `.claude-plugin/PLUGIN_SCHEMA_NOTES.md` | `claude plugin validate .claude-plugin/plugin.json` | Pending | Plugin owner | Pending |
| Codex plugin manifest | `2.0.0-rc.1` with shared skill source | `.codex-plugin/plugin.json` | `node tests/docs/ecc2-release-surface.test.js` | Pending | Plugin owner | Pending |
| OpenCode package | `ecc-universal` plugin module | `.opencode/package.json`, `.opencode/index.ts` | `npm run build:opencode` | Pending | Package owner | Pending |
| Agent metadata | `2.0.0-rc.1` | `agent.yaml`, `.agents/plugins/marketplace.json` | `node tests/scripts/catalog.test.js` | Pending | Release owner | Pending |
| Migration copy | rc.1 upgrade path, not GA claim | `release-notes.md`, `quickstart.md`, `HERMES-SETUP.md` | `npx markdownlint-cli docs/releases/2.0.0-rc.1/*.md` | Pending | Docs owner | Pending |
## Publication Gates
| Gate | Required evidence | Fresh check | Blocker field | Owner | Status |
| --- | --- | --- | --- | --- | --- |
| GitHub release | Tag exists, release notes use final URLs, assets attached if needed | `gh release view v2.0.0-rc.1 --json tagName,url,isPrerelease` | `Blocker:` | Release owner | Pending |
| npm package | `npm pack --dry-run` has expected files, version matches, rc goes to `next` | `npm pack --dry-run` and `npm publish --tag next --dry-run` where supported | `Blocker:` | Package owner | Pending |
| Claude plugin | Manifest validates, marketplace JSON points to public repo, install docs match slug | `claude plugin validate .claude-plugin/plugin.json` | `Blocker:` | Plugin owner | Pending |
| Codex plugin | Manifest version matches package and docs, hook limitations are explicit | `node tests/docs/ecc2-release-surface.test.js` | `Blocker:` | Plugin owner | Pending |
| OpenCode package | Build output is regenerated from source and package metadata is current | `npm run build:opencode` | `Blocker:` | Package owner | Pending |
| ECC Tools billing reference | Any billing claim links to verified Marketplace/App state | `gh api repos/ECC-Tools/ECC-Tools` plus app/marketplace URL check | `Blocker:` | ECC Tools owner | Pending |
| Announcement copy | X, LinkedIn, GitHub release, and longform copy point to live URLs | `rg -n "TODO" docs/releases/2.0.0-rc.1` and repeat for `TBD` | `Blocker:` | Release owner | Pending |
## Required Command Evidence
Record the exact commit SHA and command output before any publication action:
| Evidence | Command | Required result | Recorded output |
| --- | --- | --- | --- |
| Clean release branch | `git status --short --branch` | On intended release commit; no unrelated files | Pending |
| Harness audit | `npm run harness:audit -- --format json` | 70/70 passing | Pending |
| Adapter scorecard | `npm run harness:adapters -- --check` | PASS | Pending |
| Observability readiness | `npm run observability:ready` | 14/14 passing | Pending |
| Root suite | `node tests/run-all.js` | 0 failures | Pending |
| Markdown lint | `npx markdownlint-cli '**/*.md' --ignore node_modules` | 0 failures | Pending |
| Package surface | `node tests/scripts/npm-publish-surface.test.js` | 0 failures | Pending |
| Release surface | `node tests/docs/ecc2-release-surface.test.js` | 0 failures | Pending |
| Optional Rust surface | `cd ecc2 && cargo test` | 0 failures or explicit deferral | Pending |
## Do Not Publish If
- `main` has unreviewed release-surface changes after the evidence was recorded.
- `npm view ecc-universal dist-tags --json` contradicts the intended rc/GA tag.
- Claude plugin validation is unavailable and no manual install smoke test is
recorded.
- Release notes or announcement drafts still contain placeholder URLs,
`TODO`, `TBD`, private workspace paths, or personal operator references.
- Billing, Marketplace, or plugin-submission copy claims a live surface before
the live URL exists.
- Stale PR salvage work is mid-flight on the same branch.
## Announcement Order
1. Merge the release-version PR.
2. Record the required command evidence from the release commit.
3. Create or verify the GitHub prerelease.
4. Publish npm with the rc dist-tag.
5. Submit or update plugin marketplace surfaces.
6. Update release notes with final live URLs.
7. Publish GitHub release copy.
8. Publish X, LinkedIn, and longform copy only after the public URLs work.

View File

@@ -0,0 +1,99 @@
# Stale PR Salvage Ledger
This ledger records useful work recovered from stale, conflicted, or closed PRs.
The rule is simple: queue cleanup closes stale PRs, but it does not discard
useful work. Maintainers should inspect the closed diff, port compatible pieces
on fresh branches, and credit the source PR.
## Classification States
| State | Meaning |
| --- | --- |
| Salvaged | Useful work was ported to current `main` through a maintainer PR. |
| Already present | Current `main` already contained the useful work before salvage. |
| Superseded | Current `main` solved the same problem differently. |
| Skipped | The PR was accidental, too broad, unsafe, or too low-signal to port. |
| Translator/manual review | Content may be useful, but needs human language/domain review before import. |
## Salvaged Into Current Main
| Source PR | Original contribution | Salvage result |
| --- | --- | --- |
| #1309 | Trading/community project material | Salvaged in #1761 as a neutral community-project README listing. |
| #1322 | Vietnamese README translation | Salvaged in #1764 as `docs/vi-VN/README.md` plus selector updates. |
| #1326 | Angular developer skill and rules | Salvaged in #1763 with current skill, rules, install wiring, and catalog updates. |
| #1328 | Continuous-learning Windows UTF-8 stdout fix | Salvaged in #1761. |
| #1329 | Plugin install detection hardening | Salvaged in #1761 through current harness audit detection support. |
| #1334 | Windows desktop E2E skill | Salvaged in #1762 with install, package, and catalog wiring. |
| #1352 | Qwen install target | Salvaged in #1738 through the current Qwen install target. |
| #1413 | Network and homelab skills/agents | Salvaged through #1729, #1731, #1745, and #1778. |
| #1429 | JoyCode install target | Salvaged in #1737 through the current JoyCode install target. |
| #1467 | Scientific skills and OpenCode discovery work | Useful USPTO and gget pieces salvaged in #1740; stale generated claims were not copied. |
| #1493 | SessionStart context scoping | Salvaged in #1774 with current hook semantics and tests. |
| #1498 | PRD planning flow | Salvaged in #1777. |
| #1504 | Statusline/context monitor hooks | Salvaged in #1776 with current hook manifest structure and tests. |
| #1528/#1529/#1547 | Astraflow and UModelVerse provider support | Salvaged in #1775 with current provider wiring and defensive tool-call parsing. |
| #1558 | `agentic-os` skill | Salvaged in #1772. |
| #1559 | `error-handling` skill | Salvaged in #1772. |
| #1566 | Agent architecture audit skill | Salvaged in #1772. |
| #1578 | OpenCode file-probe hardening | Salvaged in #1773. |
| #1674 | Production audit skill | Salvaged in #1732 after supply-chain/privacy review and rewrite. |
| #1687 | zh-CN localization sync | Large safe subsets salvaged in #1746-#1752; remaining pieces require translator/manual review. |
| #1694 | Portfolio curation | Useful focused curation updates salvaged in #1723 and #1724. |
| #1695 | Russian README translation | Ported in #1722. |
| #1697 | Saved LLM selector config | Salvaged as part of provider config/tool schema work in #1720. |
| #1699 | Windows post-edit-format path guard | Ported in #1719. |
| #1700 | Provider tool serialization | Ported in #1720. |
| #1705/#1780 | Production UI motion system | Salvaged in #1772, #1781, and #1782 with examples fixed before merge. |
| #1713 | Swift language support | Ported in #1721. |
| #1715 | CI personal-path validator hardening | Ported through CI validator hardening in #1717. |
| #1727 | MySQL patterns skill | Salvaged in #1733. |
| #1757 | Machine-learning engineering workflow | Salvaged in #1758 and tuned in #1759. |
## Already Present Or Superseded
| Source PR | Disposition |
| --- | --- |
| #1306 | Hook bug workarounds already exist on `main` as `docs/hook-bug-workarounds.md`. |
| #1318 | Gemini agent adaptation utility was already present on current `main`. |
| #1323 | Hook config update was already present on current `main`. |
| #1337 | Catalog count update was superseded by current catalog-count sync. |
| #1682/#1701 | Strategic compact hook-path fixes were merged directly or superseded by current docs fixes. |
| JARVIS #4/#5/#6 | Stale failing dependency-only PRs; future dependency state should be regenerated by Dependabot. |
## Skipped
| Source PR | Reason |
| --- | --- |
| #1308 | Stale zh-CN sync would rewind or delete too much current tree state; concrete selector-link fix was already present. |
| #1320 | Package-manager removal conflicts with the current npm/pnpm/yarn/bun CI policy. |
| #1341 | Very large low-signal generated change with no safe focused salvage unit. |
| #1416/#1465 | Accidental fork-sync PRs with no focused contribution. |
| #1475 | One-line Gemini CLI bridge idea was too stale and underspecified to port safely. |
## Remaining Manual-Review Backlog
Only the #1687 localization tail remains plausibly useful but unsafe to
auto-port.
Handling rule:
1. Keep #1687 in translator/manual review.
2. Split any future work by surface: agents, commands, top-level docs, release
and count surfaces, then skills.
3. Do not import stale top-level docs that carry old version or catalog-count
facts.
4. Do not reopen old PRs unless the original author returns with a current
rebase; maintainer-side salvage should happen on fresh branches with
attribution.
## Future Cleanup Rule
For every stale/conflicted PR cleanup batch:
1. Close or comment on the PR based on the queue policy.
2. Add the source PR to this ledger or a dated successor ledger.
3. Classify it as salvaged, already present, superseded, skipped, or
translator/manual review.
4. If useful, port a small compatible slice on a fresh maintainer branch.
5. Credit the source PR and author in the maintainer PR body.

View File

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 56 specialized agents, 217 skills, 72 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 58 specialized agents, 220 skills, 74 commands, and automated hook workflows for software development.
**Version:** 2.0.0-rc.1
@@ -146,9 +146,9 @@
## Project Structure
```
agents/ — 56 specialized subagents
skills/ — 217 workflow skills and domain knowledge
commands/ — 72 slash commands
agents/ — 58 specialized subagents
skills/ — 220 workflow skills and domain knowledge
commands/ — 74 slash commands
hooks/ — trigger-based automation
rules/ — always-followed guidelines (universal + per-language)
scripts/ — cross-platform Node.js utilities

View File

@@ -224,7 +224,7 @@ Copy-Item -Recurse rules/typescript "$HOME/.claude/rules/"
/plugin list ecc@ecc
```
**Done!** You can now use 56 agents, 217 skills, and 72 commands.
**Done!** You can now use 58 agents, 220 skills, and 74 commands.
***
@@ -1132,9 +1132,9 @@ opencode
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | PASS: 56 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 72 | PASS: 35 | **Claude Code leads** |
| Skills | PASS: 217 | PASS: 37 | **Claude Code leads** |
| Agents | PASS: 58 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 74 | PASS: 35 | **Claude Code leads** |
| Skills | PASS: 220 | PASS: 37 | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 event types | **OpenCode has more!** |
| Rules | PASS: 29 | PASS: 13 instructions | **Claude Code leads** |
| MCP servers | PASS: 14 | PASS: full | **Full parity** |
@@ -1240,9 +1240,9 @@ ECC is **the first plugin to take maximum advantage of every major AI coding tool**.
| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
| **Agents** | 56 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 72 | Shared | Instruction-based | 35 |
| **Skills** | 217 | Shared | 10 (native format) | 37 |
| **Agents** | 58 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 74 | Shared | Instruction-based | 35 |
| **Skills** | 220 | Shared | 10 (native format) | 37 |
| **Hook events** | 8 types | 15 types | None yet | 11 types |
| **Hook scripts** | 20+ scripts | 16 scripts (DRY adapters) | N/A | Plugin hooks |
| **Rules** | 34 (universal + language) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

View File

@@ -71,6 +71,7 @@
"scripts/doctor.js",
"scripts/ecc.js",
"scripts/gemini-adapt-agents.js",
"scripts/harness-adapter-compliance.js",
"scripts/harness-audit.js",
"scripts/observability-readiness.js",
"scripts/hooks/",
@@ -276,6 +277,7 @@
"catalog:check": "node scripts/ci/catalog.js --text",
"catalog:sync": "node scripts/ci/catalog.js --write --text",
"lint": "eslint . && markdownlint '**/*.md' --ignore node_modules",
"harness:adapters": "node scripts/harness-adapter-compliance.js",
"harness:audit": "node scripts/harness-audit.js",
"observability:ready": "node scripts/observability-readiness.js",
"claw": "node scripts/claw.js",

View File

@@ -0,0 +1,149 @@
#!/usr/bin/env node
'use strict';
const path = require('path');
const {
ADAPTER_RECORDS,
renderMarkdownTable,
validateAdapterRecords,
validateDocumentation,
} = require('./lib/harness-adapter-compliance');
function parseArgs(argv) {
const args = argv.slice(2);
const parsed = {
check: false,
format: 'text',
help: false,
root: process.cwd(),
};
for (let index = 0; index < args.length; index += 1) {
const arg = args[index];
if (arg === '--help' || arg === '-h') {
parsed.help = true;
continue;
}
if (arg === '--check') {
parsed.check = true;
continue;
}
if (arg === '--format') {
parsed.format = String(args[index + 1] || '').toLowerCase();
index += 1;
continue;
}
if (arg.startsWith('--format=')) {
parsed.format = arg.slice('--format='.length).toLowerCase();
continue;
}
if (arg === '--root') {
parsed.root = path.resolve(args[index + 1] || process.cwd());
index += 1;
continue;
}
if (arg.startsWith('--root=')) {
parsed.root = path.resolve(arg.slice('--root='.length));
continue;
}
throw new Error(`Unknown argument: ${arg}`);
}
if (!['text', 'json', 'markdown'].includes(parsed.format)) {
throw new Error(`Invalid format: ${parsed.format}. Use text, json, or markdown.`);
}
parsed.root = path.resolve(parsed.root);
return parsed;
}
function printHelp() {
console.log([
'Usage: node scripts/harness-adapter-compliance.js [options]',
'',
'Validate or render the ECC harness adapter compliance scorecard.',
'',
'Options:',
' --check Fail if adapter records or docs are out of sync',
' --format <text|json|markdown>',
' --root <path> Repository root, defaults to cwd',
' -h, --help Show this help',
].join('\n'));
}
function buildPayload(root) {
const recordErrors = validateAdapterRecords();
const documentationErrors = validateDocumentation({ repoRoot: root });
return {
schema_version: 'ecc.harness-adapter-compliance.v1',
generated_from: 'scripts/lib/harness-adapter-compliance.js',
adapter_count: ADAPTER_RECORDS.length,
valid: recordErrors.length === 0 && documentationErrors.length === 0,
errors: [...recordErrors, ...documentationErrors],
adapters: ADAPTER_RECORDS,
};
}
function renderText(payload) {
const lines = [
`Harness Adapter Compliance: ${payload.valid ? 'PASS' : 'FAIL'}`,
`Adapters: ${payload.adapter_count}`,
];
if (payload.errors.length > 0) {
lines.push('Errors:');
for (const error of payload.errors) {
lines.push(`- ${error}`);
}
}
return lines.join('\n');
}
function main() {
let parsed;
try {
parsed = parseArgs(process.argv);
} catch (error) {
console.error(`Error: ${error.message}`);
process.exit(1);
}
if (parsed.help) {
printHelp();
return;
}
const payload = buildPayload(parsed.root);
if (parsed.format === 'json') {
console.log(JSON.stringify(payload, null, 2));
} else if (parsed.format === 'markdown') {
console.log(renderMarkdownTable());
} else {
console.log(renderText(payload));
}
if (parsed.check && !payload.valid) {
process.exit(1);
}
}
if (require.main === module) {
main();
}
module.exports = {
buildPayload,
parseArgs,
};

@@ -0,0 +1,446 @@
'use strict';
const fs = require('fs');
const path = require('path');
const MATRIX_BLOCK_START = '<!-- harness-adapter-compliance:matrix-start -->';
const MATRIX_BLOCK_END = '<!-- harness-adapter-compliance:matrix-end -->';
const COMPLIANCE_STATES = Object.freeze({
Native: 'ECC can install or verify the surface directly for this harness.',
'Adapter-backed': 'ECC has a thin adapter, plugin, or package surface, but parity differs by harness.',
'Instruction-backed': 'ECC can provide the guidance and files, but the harness does not expose the runtime hook/session surface ECC needs for enforcement.',
'Reference-only': 'The tool is useful as a design pressure or external runtime, but ECC does not yet ship a direct installer or adapter for it.',
});
const REQUIRED_FIELDS = Object.freeze([
'id',
'harness',
'state',
'supported_assets',
'unsupported_surfaces',
'install_or_onramp',
'verification_commands',
'risk_notes',
'last_verified_at',
'owner',
'source_docs',
]);
function freezeRecord(record) {
return Object.freeze({
...record,
supported_assets: Object.freeze(record.supported_assets.slice()),
unsupported_surfaces: Object.freeze(record.unsupported_surfaces.slice()),
install_or_onramp: Object.freeze(record.install_or_onramp.slice()),
verification_commands: Object.freeze(record.verification_commands.slice()),
risk_notes: Object.freeze(record.risk_notes.slice()),
source_docs: Object.freeze(record.source_docs.slice()),
});
}
const ADAPTER_RECORDS = Object.freeze([
{
id: 'claude-code',
harness: 'Claude Code',
state: 'Native',
supported_assets: [
'Claude plugin assets',
'skills',
'commands',
'hooks',
'MCP config',
'local rules',
'statusline-oriented workflows',
],
unsupported_surfaces: ['Claude-native hooks do not imply parity in other harnesses'],
install_or_onramp: [
'`./install.sh --profile minimal --target claude`',
'Claude plugin install',
],
verification_commands: [
'`npm run harness:audit -- --format json`',
'`node scripts/session-inspect.js --list-adapters`',
],
risk_notes: ['Avoid loading every skill by default; keep hooks opt-in and inspectable.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'.claude-plugin/plugin.json',
'docs/architecture/cross-harness.md',
'scripts/lib/install-targets/claude-home.js',
],
},
{
id: 'codex',
harness: 'Codex',
state: 'Instruction-backed',
supported_assets: [
'`AGENTS.md`',
'Codex plugin metadata',
'skills',
'MCP reference config',
'command patterns',
],
unsupported_surfaces: ['Native hook enforcement and Claude slash-command semantics are not equivalent'],
install_or_onramp: [
'`./install.sh --profile minimal --target codex`',
'repo-local `AGENTS.md` review',
],
verification_commands: ['`npm run harness:audit -- --format json`'],
risk_notes: ['Treat hooks as policy text unless a native Codex hook surface exists.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'.codex-plugin/plugin.json',
'AGENTS.md',
'scripts/lib/install-targets/codex-home.js',
],
},
{
id: 'opencode',
harness: 'OpenCode',
state: 'Adapter-backed',
supported_assets: [
'OpenCode package/plugin metadata',
'shared skills',
'MCP config',
'event adapter patterns',
],
unsupported_surfaces: ['Event names, plugin packaging, and command dispatch differ from Claude Code'],
install_or_onramp: ['OpenCode package or plugin surface from this repo'],
verification_commands: [
'`node tests/scripts/build-opencode.test.js`',
'`npm run harness:audit -- --format json`',
],
risk_notes: ['Keep hook logic in shared scripts and adapt only event shape at the edge.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'.opencode/package.json',
'.opencode/plugins/ecc-hooks.ts',
'scripts/build-opencode.js',
],
},
{
id: 'cursor',
harness: 'Cursor',
state: 'Adapter-backed',
supported_assets: [
'Cursor rules',
'project-local skills',
'hook adapter',
'shared scripts',
],
unsupported_surfaces: ['Cursor hook events and rule loading differ from Claude Code'],
install_or_onramp: ['`./install.sh --profile minimal --target cursor`'],
verification_commands: [
'`node tests/lib/install-targets.test.js`',
'`npm run harness:audit -- --format json`',
],
risk_notes: ['Cursor adapters must preserve existing project rules and avoid silent overwrite.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'.cursor/',
'scripts/lib/install-targets/cursor-project.js',
'tests/lib/install-targets.test.js',
],
},
{
id: 'gemini',
harness: 'Gemini',
state: 'Instruction-backed',
supported_assets: [
'Gemini project-local instructions',
'shared skills',
'rules',
'compatibility docs',
],
unsupported_surfaces: ['No full ECC hook parity; ecosystem ports must document drift from upstream ECC'],
install_or_onramp: ['`./install.sh --profile minimal --target gemini`'],
verification_commands: ['`node tests/lib/install-targets.test.js`'],
risk_notes: ['Treat Gemini ports as ecosystem adapters until validated end to end inside Gemini CLI.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'.gemini/',
'scripts/lib/install-targets/gemini-project.js',
'tests/lib/install-targets.test.js',
],
},
{
id: 'zed-adjacent',
harness: 'Zed-adjacent workflows',
state: 'Instruction-backed',
supported_assets: [
'shared skills',
'`AGENTS.md` style project instructions',
'verification loops',
],
unsupported_surfaces: ['Zed agent surfaces vary; no first-party ECC installer is shipped today'],
install_or_onramp: ['Manual copy from shared ECC sources until adapter requirements settle'],
verification_commands: ['`npm run harness:audit -- --format json`'],
risk_notes: ['Do not claim native Zed support before a real adapter and verification path exist.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'AGENTS.md',
'docs/architecture/cross-harness.md',
],
},
{
id: 'dmux',
harness: 'dmux',
state: 'Adapter-backed',
supported_assets: [
'session snapshots',
'tmux/worktree orchestration status',
'handoff exports',
],
unsupported_surfaces: ['dmux is an orchestration runtime, not an install target for skills/rules'],
install_or_onramp: [
'`node scripts/session-inspect.js --list-adapters`',
'dmux session target inspection',
],
verification_commands: ['`node tests/lib/session-adapters.test.js`'],
risk_notes: ['Treat dmux events as session/runtime signals, not as a replacement for repo validation.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'scripts/lib/session-adapters/dmux-tmux.js',
'scripts/orchestration-status.js',
'tests/lib/session-adapters.test.js',
],
},
{
id: 'orca',
harness: 'Orca',
state: 'Reference-only',
supported_assets: [
'worktree lifecycle',
'review state',
'notification',
'provider-identity design pressure',
],
unsupported_surfaces: ['No ECC installer or direct adapter today'],
install_or_onramp: ['Use as a comparison target for worktree/session state requirements'],
verification_commands: ['`npm run observability:ready`'],
risk_notes: ['Do not import product-specific assumptions; convert lessons into ECC event fields.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: ['docs/architecture/cross-harness.md'],
},
{
id: 'superset',
harness: 'Superset',
state: 'Reference-only',
supported_assets: [
'workspace presets',
'parallel-agent review loops',
'worktree isolation design pressure',
],
unsupported_surfaces: ['No ECC installer or direct adapter today'],
install_or_onramp: ['Use as a comparison target for workspace preset taxonomy'],
verification_commands: ['`npm run observability:ready`'],
risk_notes: ['Keep ECC portable; do not require a desktop workspace to get basic value.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: ['docs/architecture/cross-harness.md'],
},
{
id: 'ghast',
harness: 'Ghast',
state: 'Reference-only',
supported_assets: [
'terminal-native pane grouping',
'cwd grouping',
'search',
'notifications',
],
unsupported_surfaces: ['No ECC installer or direct adapter today'],
install_or_onramp: ['Use as a comparison target for terminal-first session grouping'],
verification_commands: ['`node scripts/session-inspect.js --list-adapters`'],
risk_notes: ['Preserve terminal ergonomics before adding visual UI assumptions.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: ['docs/architecture/cross-harness.md'],
},
{
id: 'terminal-only',
harness: 'Terminal-only',
state: 'Native',
supported_assets: [
'skills',
'rules',
'commands',
'scripts',
'harness audit',
'observability readiness',
'handoffs',
],
unsupported_surfaces: ['No external UI, no automatic session control unless scripts are run explicitly'],
install_or_onramp: [
'Clone repo',
'run commands directly',
'use minimal profile for project installs',
],
verification_commands: [
'`npm run harness:audit -- --format json`',
'`npm run observability:ready`',
],
risk_notes: ['This is the fallback contract; every higher-level adapter should degrade to it.'],
last_verified_at: '2026-05-12',
owner: 'ECC maintainers',
source_docs: [
'scripts/harness-audit.js',
'scripts/observability-readiness.js',
'docs/architecture/observability-readiness.md',
],
},
].map(freezeRecord));
function toTextList(value) {
return Array.isArray(value) ? value.join('; ') : String(value || '');
}
function escapeMarkdownCell(value) {
return toTextList(value).replace(/\|/g, '\\|').trim();
}
function renderMarkdownTable(records = ADAPTER_RECORDS) {
const lines = [
'| Harness or runtime | State | Supported assets | Unsupported or different surfaces | Install or onramp | Verification command | Risk notes |',
'| --- | --- | --- | --- | --- | --- | --- |',
];
for (const record of records) {
const cells = [
record.harness,
record.state,
record.supported_assets,
record.unsupported_surfaces,
record.install_or_onramp,
record.verification_commands,
record.risk_notes,
].map(escapeMarkdownCell);
lines.push(`| ${cells.join(' | ')} |`);
}
return lines.join('\n');
}
function renderStateTable() {
const lines = [
'| State | Meaning |',
'| --- | --- |',
];
for (const [state, meaning] of Object.entries(COMPLIANCE_STATES)) {
lines.push(`| ${escapeMarkdownCell(state)} | ${escapeMarkdownCell(meaning)} |`);
}
return lines.join('\n');
}
function validateAdapterRecords(records = ADAPTER_RECORDS) {
const errors = [];
const ids = new Set();
records.forEach((record, index) => {
const label = record?.id || `record[${index}]`;
for (const field of REQUIRED_FIELDS) {
if (!Object.prototype.hasOwnProperty.call(record, field)) {
errors.push(`${label}: missing required field ${field}`);
}
}
if (typeof record.id !== 'string' || !/^[a-z0-9-]+$/.test(record.id)) {
errors.push(`${label}: id must be a lowercase slug`);
} else if (ids.has(record.id)) {
errors.push(`${label}: duplicate id`);
} else {
ids.add(record.id);
}
if (!Object.prototype.hasOwnProperty.call(COMPLIANCE_STATES, record.state)) {
errors.push(`${label}: unknown state ${record.state}`);
}
for (const field of [
'supported_assets',
'unsupported_surfaces',
'install_or_onramp',
'verification_commands',
'risk_notes',
'source_docs',
]) {
if (!Array.isArray(record[field]) || record[field].length === 0) {
errors.push(`${label}: ${field} must be a non-empty array`);
continue;
}
record[field].forEach((value, valueIndex) => {
if (typeof value !== 'string' || !value.trim()) {
errors.push(`${label}: ${field}[${valueIndex}] must be a non-empty string`);
}
});
}
if (typeof record.harness !== 'string' || !record.harness.trim()) {
errors.push(`${label}: harness must be a non-empty string`);
}
if (typeof record.owner !== 'string' || !record.owner.trim()) {
errors.push(`${label}: owner must be a non-empty string`);
}
if (typeof record.last_verified_at !== 'string' || !/^\d{4}-\d{2}-\d{2}$/.test(record.last_verified_at)) {
errors.push(`${label}: last_verified_at must be YYYY-MM-DD`);
}
});
return errors;
}
function extractMatrixBlock(markdown) {
const normalized = String(markdown).replace(/\r\n/g, '\n');
const start = normalized.indexOf(MATRIX_BLOCK_START);
const end = normalized.indexOf(MATRIX_BLOCK_END);
if (start < 0 || end < 0 || end <= start) {
return null;
}
return normalized.slice(start + MATRIX_BLOCK_START.length, end).trim();
}
function validateDocumentation(options = {}) {
const repoRoot = options.repoRoot || path.resolve(__dirname, '..', '..');
const docPath = options.docPath || path.join(repoRoot, 'docs', 'architecture', 'harness-adapter-compliance.md');
const errors = [];
const source = fs.readFileSync(docPath, 'utf8');
const actual = extractMatrixBlock(source);
const expected = renderMarkdownTable();
if (actual === null) {
errors.push(`missing matrix block markers in ${path.relative(repoRoot, docPath)}`);
} else if (actual !== expected) {
errors.push(`matrix block in ${path.relative(repoRoot, docPath)} is not generated from adapter records`);
}
return errors;
}
module.exports = {
ADAPTER_RECORDS,
COMPLIANCE_STATES,
MATRIX_BLOCK_END,
MATRIX_BLOCK_START,
REQUIRED_FIELDS,
extractMatrixBlock,
renderMarkdownTable,
renderStateTable,
validateAdapterRecords,
validateDocumentation,
};

@@ -0,0 +1,596 @@
---
name: motion-advanced
description: Advanced motion patterns for React / Next.js — drag & drop, gestures, text animations, SVG path drawing, custom hooks, imperative sequences (useAnimate), loaders, and the full API decision tree. Requires motion-foundations.
version: 1.0
tags: [motion, animation, advanced, gestures, svg]
category: frontend
author: jeff
---
# Motion Advanced
Complex, interactive, and physics-based animation patterns.
Requires `motion-foundations` to be set up first.
Use these when `motion-patterns` is not enough.
## When to Activate
- Building drag-to-dismiss sheets, swipe gestures, or reorderable lists
- Animating text word-by-word, character-by-character, or as a live counter
- Drawing SVG paths, morphing icons, or animating circular progress
- Writing a custom animation hook (`useScrollReveal`, magnetic button, cursor follower)
- Sequencing multi-step animations imperatively with `useAnimate`
- Building spinners, shimmer skeletons, pulse indicators, or loading button states
## Outputs
This skill produces:
- Drag interactions: draggable cards, drag-to-dismiss sheets, `Reorder.Group` lists
- Gesture hooks: swipe detection, long press, pinch outline
- Text animation components: word reveal, character typewriter, number counter
- SVG animation: path draw-on, icon morph, stroke progress ring
- Custom hooks: `useScrollReveal`, `useHoverScale`, `useNavigationDirection`, `useInViewOnce`
- Imperative sequences via `useAnimate` with interrupt-safe `async/await`
- Loader components: spinner, shimmer, pulse dot, progress bar, button loading state
## Principles
- Physics-based motion (`useSpring`, `springs.*`) always feels more natural than duration-based for direct manipulation.
- `useMotionValue` + `useTransform` computes derived values without triggering re-renders.
- `useAnimate` sequences are imperative and interrupt-safe — calling `animate()` mid-flight cancels the previous animation automatically.
- Motion values (`useMotionValue`, `useSpring`) are SSR-safe and do not cause hydration errors.
## Rules
1. **Drag interactions must be tested on touch devices**, not just mouse. `drag` prop works on both but feel and threshold differ.
2. **Infinite animations must pause when `document.visibilityState === "hidden"`.** Background tabs must not consume GPU/CPU.
3. **Swipe threshold must be explicit.** Never infer intent from velocity alone; combine `offset` + `velocity` checks.
4. **`useAnimate` scope ref must be attached to a mounted DOM element.** Selector-based `animate()` calls made before the scope mounts fail silently, so only trigger sequences after mount.
5. **Motion values must not be recreated on render.** `useMotionValue(0)` inside a component body is correct; `new MotionValue(0)` in a render is not.
6. **All token values are imported from `motion-foundations`.** No inline numbers.
7. **Custom hooks must handle cleanup.** Every `window.addEventListener` needs a matching `removeEventListener` in the `useEffect` return.
8. **SVG morphing requires equal path command counts.** Paths with different command structures snap instead of interpolating.
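Rule 8 can be checked mechanically before wiring a `d` tween. A minimal standalone sketch — the helpers `pathCommands` and `canMorph` are illustrative, not part of motion's API:

```typescript
// Extract the command letters from an SVG path `d` string.
// Simplified: assumes numbers are written without exponent notation.
function pathCommands(d: string): string[] {
  return d.match(/[a-zA-Z]/g) ?? []
}

// Two paths can tween (Rule 8) only when their command sequences match
// position-for-position; otherwise the morph snaps instead of interpolating.
function canMorph(a: string, b: string): boolean {
  const ca = pathCommands(a)
  const cb = pathCommands(b)
  return ca.length === cb.length && ca.every((cmd, i) => cmd === cb[i])
}
```

Running a check like this in development catches incompatible paths before they reach an `animate` prop.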
## Decision Guidance
### Choosing the right advanced API
| Scenario | API |
| ------------------------------ | -------------------------------- |
| Drag with physics on release | `drag` + `dragTransition: springs.release` |
| Ordered drag-to-reorder list | `Reorder.Group` + `Reorder.Item` |
| Dismiss on drag offset | `drag="y"` + `onDragEnd` offset check |
| Swipe left/right | `drag="x"` + `onDragEnd` offset check |
| Long press | `useLongPress` hook |
| Value smoothed over time | `useSpring` |
| Value derived from another | `useTransform` |
| Multi-step sequence | `useAnimate` with `async/await` |
| One-shot imperative animation | `animate()` from `motion` |
| Text entering word by word | Stagger on `inline-block` spans |
| SVG drawing on | `pathLength` 0 → 1 |
| SVG morph | `d` attribute tween (equal commands) |
| Circular progress | `strokeDashoffset` tween |
### When to use `useSpring` vs a spring transition
| | `useSpring` | `transition: springs.*` |
| -------------- | ---------------------------------------- | ----------------------- |
| Use for | Cursor follower, pointer-tracked values | Discrete state changes |
| Updates | Continuous, on every frame | Triggered by state change |
| Interrupt | Smooth — physics picks up from velocity | Restarts from current value |
## Core Concepts
### useMotionValue + useTransform
Reactive computation without re-renders:
```tsx
const x = useMotionValue(0)
const opacity = useTransform(x, [-200, 0, 200], [0, 1, 0])
// opacity updates every frame as x changes — no setState, no re-render
```
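Under the hood that mapping is clamped piecewise-linear interpolation. A standalone sketch of the math (assumption: the default `useTransform` behavior clamps to the input range), useful for reasoning about ranges without mounting a component:

```typescript
// Map `value` through paired input/output ranges, clamping at the ends —
// the same shape of mapping as useTransform(x, [-200, 0, 200], [0, 1, 0]).
function mapRange(value: number, input: number[], output: number[]): number {
  if (value <= input[0]) return output[0]
  if (value >= input[input.length - 1]) return output[output.length - 1]
  // find the segment containing value and interpolate linearly inside it
  for (let i = 0; i < input.length - 1; i++) {
    if (value <= input[i + 1]) {
      const t = (value - input[i]) / (input[i + 1] - input[i])
      return output[i] + t * (output[i + 1] - output[i])
    }
  }
  return output[output.length - 1]
}
```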
### useAnimate
Returns `[scope, animate]`. The scope ref must be attached to a DOM element.
`animate()` calls are interrupt-safe — calling mid-flight cancels the previous run.
```tsx
const [scope, animate] = useAnimate()
async function play() {
await animate(".step-1", { opacity: 1 }, { duration: 0.3 })
await animate(".step-2", { x: 0 }, { duration: 0.4 })
animate(".step-3", { scale: 1 }, { duration: 0.25 }) // fire and forget
}
return <div ref={scope}>...</div>
```
## Code Examples
### Draggable card
```tsx
"use client"
import { motion } from "motion/react"
import { springs, motionTokens } from "@/lib/motion-tokens"
<motion.div
drag
dragConstraints={{ left: -100, right: 100, top: -100, bottom: 100 }}
dragElastic={0.1}
whileDrag={{
scale: motionTokens.scale.pop,
boxShadow: "0 16px 40px rgba(0,0,0,0.2)",
}}
dragTransition={springs.release}
/>
```
### Drag-to-dismiss sheet
```tsx
"use client"
import { motion, useMotionValue, useTransform } from "motion/react"
export function BottomSheet({ onClose }: { onClose: () => void }) {
const y = useMotionValue(0)
const opacity = useTransform(y, [0, 200], [1, 0])
return (
<motion.div
drag="y"
dragConstraints={{ top: 0 }}
style={{ y, opacity }}
onDragEnd={(_, info) => {
// Rule 3: combine offset + velocity
if (info.offset.y > 120 || info.velocity.y > 500) onClose()
}}
/>
)
}
```
### Reorderable list
```tsx
"use client"
import { useState } from "react"
import { Reorder } from "motion/react"
// `initialItems` is supplied by the caller: an array of { id, label } objects
export function SortableList({ initialItems }: { initialItems: { id: string; label: string }[] }) {
const [items, setItems] = useState(initialItems)
return (
<Reorder.Group axis="y" values={items} onReorder={setItems}>
{items.map((item) => (
<Reorder.Item key={item.id} value={item}>
{item.label}
</Reorder.Item>
))}
</Reorder.Group>
)
}
```
### Swipe detection
```tsx
"use client"
import { motion } from "motion/react"
const OFFSET_THRESHOLD = 50
const VELOCITY_THRESHOLD = 300
// onSwipeLeft / onSwipeRight are supplied by the surrounding component
<motion.div
drag="x"
dragConstraints={{ left: 0, right: 0 }}
onDragEnd={(_, info) => {
const swipedRight = info.offset.x > OFFSET_THRESHOLD || info.velocity.x > VELOCITY_THRESHOLD
const swipedLeft = info.offset.x < -OFFSET_THRESHOLD || info.velocity.x < -VELOCITY_THRESHOLD
if (swipedRight) onSwipeRight()
if (swipedLeft) onSwipeLeft()
}}
/>
```
### Long press hook
```tsx
import { useRef } from "react"
export function useLongPress(callback: () => void, ms = 600) {
const timerRef = useRef<ReturnType<typeof setTimeout> | undefined>(undefined)
return {
onPointerDown: () => { timerRef.current = setTimeout(callback, ms) },
onPointerUp: () => clearTimeout(timerRef.current),
onPointerLeave: () => clearTimeout(timerRef.current),
}
}
```
### Word-by-word reveal
```tsx
"use client"
import { motion } from "motion/react"
import { springs } from "@/lib/motion-tokens"
export function AnimatedText({ text }: { text: string }) {
return (
<motion.p
variants={{ visible: { transition: { staggerChildren: 0.05 } } }}
initial="hidden"
animate="visible"
>
{text.split(" ").map((word, i) => (
<motion.span
key={i}
className="inline-block mr-1"
variants={{
hidden: { opacity: 0, y: 12 },
visible: { opacity: 1, y: 0, transition: springs.gentle },
}}
>
{word}
</motion.span>
))}
</motion.p>
)
}
```
### Number counter
```tsx
"use client"
import { useRef, useEffect } from "react"
import { animate } from "motion"
import { motionTokens } from "@/lib/motion-tokens"
export function Counter({ to }: { to: number }) {
const nodeRef = useRef<HTMLSpanElement>(null)
useEffect(() => {
const controls = animate(0, to, {
duration: motionTokens.duration.crawl,
ease: motionTokens.easing.smooth,
onUpdate: (v) => {
if (nodeRef.current) nodeRef.current.textContent = Math.round(v).toString()
},
})
return () => controls.stop() // Rule 7: cleanup
}, [to])
return <span ref={nodeRef} />
}
```
### SVG path draw-on
```tsx
"use client"
import { motion } from "motion/react"
import { motionTokens } from "@/lib/motion-tokens"
<motion.path
d="M 0 100 Q 50 0 100 100"
initial={{ pathLength: 0, opacity: 0 }}
animate={{ pathLength: 1, opacity: 1 }}
transition={{ duration: motionTokens.duration.slow, ease: motionTokens.easing.smooth }}
/>
```
### Stroke progress ring
```tsx
"use client"
import { motion } from "motion/react"
import { motionTokens } from "@/lib/motion-tokens"
const CIRCUMFERENCE = 2 * Math.PI * 40 // r=40
export function ProgressRing({ progress }: { progress: number }) {
return (
<svg width="100" height="100" viewBox="0 0 100 100">
<circle cx="50" cy="50" r="40" fill="none" stroke="#e5e7eb" strokeWidth="8" />
<motion.circle
cx="50" cy="50" r="40"
fill="none" stroke="#6366f1" strokeWidth="8"
strokeLinecap="round"
strokeDasharray={CIRCUMFERENCE}
animate={{ strokeDashoffset: CIRCUMFERENCE - (progress / 100) * CIRCUMFERENCE }}
transition={{ duration: motionTokens.duration.normal, ease: motionTokens.easing.smooth }}
style={{ rotate: -90, transformOrigin: "center" }}
/>
</svg>
)
}
```
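The offset formula in the ring above is worth isolating: as a pure function it is easy to sanity-check. The `dashOffset` helper is illustrative, not part of the component:

```typescript
const RADIUS = 40
const RING_CIRCUMFERENCE = 2 * Math.PI * RADIUS

// progress 0 → full offset (empty ring); progress 100 → offset 0 (full ring).
function dashOffset(progress: number): number {
  const clamped = Math.min(100, Math.max(0, progress))
  return RING_CIRCUMFERENCE - (clamped / 100) * RING_CIRCUMFERENCE
}
```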
### useScrollReveal hook
```tsx
"use client"
import { useRef } from "react"
import { useScroll, useTransform } from "motion/react"
import { motionTokens } from "@/lib/motion-tokens"
export function useScrollReveal() {
const ref = useRef(null)
const { scrollYProgress } = useScroll({ target: ref, offset: ["start end", "end start"] })
const opacity = useTransform(scrollYProgress, [0, 0.3], [0, 1])
const y = useTransform(scrollYProgress, [0, 0.3], [motionTokens.distance.lg, 0])
return { ref, style: { opacity, y } }
}
// Usage
const { ref, style } = useScrollReveal()
<motion.section ref={ref} style={style} />
```
### Cursor follower
```tsx
"use client"
import { useEffect } from "react"
import { motion, useMotionValue, useSpring } from "motion/react"
import { springs } from "@/lib/motion-tokens"
export function CursorFollower() {
const x = useMotionValue(-100)
const y = useMotionValue(-100)
const sx = useSpring(x, springs.gentle)
const sy = useSpring(y, springs.gentle)
useEffect(() => {
const move = (e: MouseEvent) => { x.set(e.clientX); y.set(e.clientY) }
window.addEventListener("mousemove", move)
return () => window.removeEventListener("mousemove", move) // Rule 7
}, [x, y]) // motion values are stable; listing them satisfies exhaustive-deps
return (
<motion.div
className="fixed top-0 left-0 w-6 h-6 rounded-full bg-indigo-500
pointer-events-none -translate-x-1/2 -translate-y-1/2 z-50"
style={{ x: sx, y: sy }}
/>
)
}
```
### Shimmer skeleton
```tsx
"use client"
import { useEffect } from "react"
import { motion, useAnimation } from "motion/react"
import { motionTokens } from "@/lib/motion-tokens"
export function ShimmerSkeleton({ className = "" }: { className?: string }) {
const controls = useAnimation()
useEffect(() => {
const play = () =>
controls.start({
x: ["-100%", "100%"],
transition: {
repeat: Infinity,
duration: motionTokens.duration.crawl,
ease: motionTokens.easing.linear,
},
})
const handleVisibility = () => {
if (document.visibilityState === "hidden") controls.stop()
else void play()
}
void play()
document.addEventListener("visibilitychange", handleVisibility)
return () => {
controls.stop()
document.removeEventListener("visibilitychange", handleVisibility)
}
}, [controls])
return (
<div className={`relative overflow-hidden bg-gray-200 rounded ${className}`}>
<motion.div
className="absolute inset-0 bg-gradient-to-r from-transparent via-white/60 to-transparent"
initial={{ x: "-100%" }}
animate={controls}
/>
</div>
)
}
```
### Button loading state
```tsx
"use client"
import { motion, AnimatePresence } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
export function LoadingButton({
loading,
label,
onClick,
}: {
loading: boolean
label: string
onClick: () => void
}) {
return (
<motion.button
onClick={onClick}
animate={{ opacity: loading ? 0.7 : 1 }}
whileTap={loading ? {} : { scale: motionTokens.scale.press }}
transition={springs.snappy}
disabled={loading}
>
<AnimatePresence mode="wait">
{loading ? (
<motion.span
key="loading"
initial={{ opacity: 0 }} animate={{ opacity: 1 }} exit={{ opacity: 0 }}
transition={{ duration: motionTokens.duration.fast }}
>
{/* spinner or "Loading…" indicator goes here */}
</motion.span>
) : (
<motion.span
key="label"
initial={{ opacity: 0 }} animate={{ opacity: 1 }} exit={{ opacity: 0 }}
transition={{ duration: motionTokens.duration.fast }}
>
{label}
</motion.span>
)}
</AnimatePresence>
</motion.button>
)
}
```
### Infinite animation with visibility pause
```tsx
"use client"
import { useEffect } from "react"
import { motion, useAnimation } from "motion/react"
import { motionTokens } from "@/lib/motion-tokens"
export function PulseDot() {
const controls = useAnimation()
useEffect(() => {
const pulse = () =>
controls.start({
scale: [1, 1.4, 1],
opacity: [1, 0.6, 1],
transition: { repeat: Infinity, duration: motionTokens.duration.crawl },
})
// Rule 2: pause when tab is hidden
const handleVisibility = () => {
if (document.visibilityState === "hidden") controls.stop()
else void pulse()
}
void pulse()
document.addEventListener("visibilitychange", handleVisibility)
// Rule 7: stop controls and remove listeners on unmount.
return () => {
controls.stop()
document.removeEventListener("visibilitychange", handleVisibility)
}
}, [controls])
return <motion.span className="w-2 h-2 rounded-full bg-green-400" animate={controls} />
}
```
## End-to-End Example
Drag-to-dismiss sheet with shimmer content, loading state, and reduced motion
support — combining `useMotionValue`, `useTransform`, `useSafeMotion`,
`AnimatePresence`, and tokens from `motion-foundations`:
```tsx
"use client"
import { useState } from "react"
import { motion, AnimatePresence, useMotionValue, useTransform } from "motion/react"
import { springs, motionTokens } from "@/lib/motion-tokens"
import { useSafeMotion } from "@/hooks/use-reduced-motion"
import { ShimmerSkeleton } from "./shimmer-skeleton"
export function DismissibleSheet({
isOpen,
onClose,
loading,
children,
}: {
isOpen: boolean
onClose: () => void
loading: boolean
children: React.ReactNode
}) {
const safe = useSafeMotion(motionTokens.distance.xl)
const y = useMotionValue(0)
const opacity = useTransform(y, [0, 200], [1, 0])
return (
<AnimatePresence>
{isOpen && (
<>
{/* Backdrop */}
<motion.div
key="backdrop"
className="fixed inset-0 bg-black/40"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
onClick={onClose}
/>
{/* Sheet — drag-to-dismiss */}
<motion.div
key="sheet"
className="fixed bottom-0 inset-x-0 rounded-t-2xl bg-white p-6"
drag="y"
dragConstraints={{ top: 0 }}
style={{ y, opacity }}
onDragEnd={(_, info) => {
if (info.offset.y > 120 || info.velocity.y > 500) onClose()
}}
initial={safe.initial}
animate={safe.animate}
exit={safe.exit}
transition={springs.gentle}
>
{loading ? (
<div className="space-y-3">
<ShimmerSkeleton className="h-4 w-3/4" />
<ShimmerSkeleton className="h-4 w-1/2" />
<ShimmerSkeleton className="h-20 w-full" />
</div>
) : children}
</motion.div>
</>
)}
</AnimatePresence>
)
}
```
## Constraints / Non-Goals
This skill does **not** cover:
- Token and spring definitions → see `motion-foundations`
- Standard UI patterns (button, modal, stagger, page transitions) → see `motion-patterns`
- CSS-only animations or Tailwind `animate-*` without `motion/react`
- Canvas or WebGL-based animation (Three.js, Pixi, etc.)
- Full drag-and-drop systems with external state managers (dnd-kit, react-beautiful-dnd)
- Game-loop or frame-by-frame animation
## Anti-Patterns
| Anti-pattern | Rule violated | Fix |
| ---------------------------------------------- | ------- | ------------------------------------------------ |
| `drag` tested only on desktop | Rule 1 | Test on touch emulator and real device |
| `animate={{ repeat: Infinity }}` with no pause | Rule 2 | Add `visibilitychange` listener |
| `onDragEnd` checking only offset, not velocity | Rule 3 | Check both `info.offset` and `info.velocity` |
| `animate(scope, ...)` before `useEffect` | Rule 4 | Call `animate()` only after mount |
| `const x = new MotionValue(0)` in render | Rule 5 | Use `const x = useMotionValue(0)` |
| `transition={{ duration: 1.2 }}` inline | Rule 6 | Use `motionTokens.duration.crawl` |
| `useEffect` without cleanup | Rule 7 | Return `removeEventListener` / `controls.stop` |
| SVG morph between paths with different commands | Rule 8 | Normalize path commands before animating |
## Related Skills
- **`motion-foundations`** — defines all tokens, springs, `useSafeMotion`, and SSR guards imported here. Must be set up before using this skill.
- **`motion-patterns`** — handles standard UI patterns (button, modal, stagger, page transitions, scroll reveals). Use it before reaching for the advanced patterns here.

@@ -0,0 +1,299 @@
---
name: motion-foundations
description: Motion tokens, spring presets, performance rules, device adaptation, accessibility enforcement, and SSR safety for React / Next.js using motion/react. Foundation layer — all other motion skills depend on this.
version: 1.0
tags: [motion, animation, performance, accessibility]
category: frontend
author: jeff
---
# Motion Foundations
The base layer of the motion system. Defines every value, constraint, and
rule that downstream skills (`motion-patterns`, `motion-advanced`) inherit.
Load this skill before any animation work begins.
## When to Activate
- Starting any animated component from scratch
- Setting up tokens, spring presets, or easing values
- Implementing `prefers-reduced-motion` support
- Debugging hydration mismatches from animation initial states
- Evaluating whether an animation should exist at all
## Outputs
This skill produces:
- A shared `motionTokens` object (duration, easing, distance, scale)
- A shared `springs` preset map (5 named configs)
- A `shouldAnimate()` gate used by all components
- Accessibility-compliant animation defaults via `useReducedMotion`
- SSR-safe initial states with zero hydration warnings
## Principles
Motion must do at least one of the following or it must be removed:
- Guide attention
- Communicate state
- Preserve spatial continuity
Responsiveness always outranks smoothness. A 60 fps animation that causes
input delay is worse than no animation.
## Rules
These are non-negotiable. They apply to every component in the system.
1. **Use `motion/react` only.** Never import from `framer-motion`. Never mix the two in the same tree.
2. **`initial` must match server output.** If the server renders `opacity: 1`, the `initial` prop must also be `opacity: 1`. No exceptions.
3. **Reduced motion overrides everything.** When `useReducedMotion()` returns `true` or `prefersReduced` is `true`, all transforms are disabled. Opacity-only fades at ≤ 0.2s are the only permitted fallback.
4. **Never animate layout properties.** `width`, `height`, `top`, `left`, `margin`, `padding` are banned from `animate`. Use `transform` and `opacity` only.
5. **All token values come from `motionTokens`.** Hardcoded durations and easings in component files are forbidden.
6. **All spring configs come from the `springs` map.** Inline `stiffness`/`damping` values are forbidden.
7. **`"use client"` is required** on every file that imports from `motion/react`.
8. **Never read `window` or `navigator` at module level.** Always guard with `typeof window !== "undefined"`.
## Decision Guidance
### Choosing a duration
| Token | Use when |
| --------- | -------------------------------------------- |
| `instant` | Tooltip show/hide, focus ring, badge update |
| `fast` | Button feedback, icon swap, chip toggle |
| `normal` | Modal open, card expand, page element enter |
| `slow` | Hero entrance, full-page transition |
| `crawl` | Deliberate storytelling; use sparingly |
### Choosing a spring
| Preset | Use when |
| --------- | ------------------------------------------ |
| `snappy` | Default UI — buttons, chips, nav items |
| `gentle` | Cards, modals, panels landing softly |
| `bouncy` | Playful moments — empty states, onboarding |
| `instant` | Tooltips, popovers, dropdowns |
| `release` | Drag release — natural physics feel |
### When to disable animation entirely
Disable (make `shouldAnimate()` return `false`) when:
- `prefersReduced` is `true`
- `isLowEnd` is `true` and the animation is non-essential
- The element is off-screen and will never enter the viewport
- The animation is purely decorative with no UX purpose
## Core Concepts
### Token system
```ts
// lib/motion-tokens.ts
export const motionTokens = {
duration: {
instant: 0.08,
fast: 0.18,
normal: 0.35,
slow: 0.6,
crawl: 1.0,
},
easing: {
smooth: [0.22, 1, 0.36, 1],
sharp: [0.4, 0, 0.2, 1],
bounce: [0.34, 1.56, 0.64, 1],
linear: [0, 0, 1, 1],
},
distance: {
xs: 4,
sm: 8,
md: 16,
lg: 24,
xl: 48,
},
scale: {
subtle: 0.98,
press: 0.95,
pop: 1.04,
},
}
export const springs = {
snappy: { type: "spring", stiffness: 300, damping: 30 },
gentle: { type: "spring", stiffness: 120, damping: 14 },
bouncy: { type: "spring", stiffness: 400, damping: 10 },
instant: { type: "spring", stiffness: 600, damping: 35 },
release: { type: "spring", stiffness: 200, damping: 20, restDelta: 0.001 },
}
```
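Because the tokens are plain data, they can be exercised outside React. A minimal sketch, assuming a hypothetical `durationMs` helper (not part of the skill) that resolves a named duration token to milliseconds for CSS interop:

```typescript
// Inlined subset of motionTokens so the example is self-contained.
const motionTokens = {
  duration: { instant: 0.08, fast: 0.18, normal: 0.35, slow: 0.6, crawl: 1.0 },
} as const

type DurationToken = keyof typeof motionTokens.duration

// Hypothetical helper: convert a named duration token (seconds) to
// whole milliseconds, avoiding floating-point residue from the multiply.
function durationMs(token: DurationToken): number {
  return Math.round(motionTokens.duration[token] * 1000)
}

console.log(durationMs("fast"))   // 180
console.log(durationMs("normal")) // 350
```

Keeping the lookup typed on `keyof typeof motionTokens.duration` means a misspelled token name fails at compile time rather than producing `NaN` at runtime.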
### Runtime flags
```ts
// lib/motion-config.ts
export const motionConfig = {
isLowEnd() {
return (
typeof navigator !== "undefined" &&
navigator.hardwareConcurrency <= 4
)
},
prefersReduced() {
return (
typeof window !== "undefined" &&
window.matchMedia("(prefers-reduced-motion: reduce)").matches
)
},
shouldAnimate({ essential = false } = {}) {
if (this.prefersReduced()) return false
if (!essential && this.isLowEnd()) return false
return true
},
duration() {
return this.isLowEnd() || this.prefersReduced()
? motionTokens.duration.instant
: motionTokens.duration.normal
},
}
```
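The gating order matters: reduced motion wins over the device check, and `essential` only bypasses the low-end gate, never the accessibility one. A pure sketch of the same decision logic with the environment reads stubbed as injected booleans (the injected-flags signature is an assumption for testability, not the skill's API):

```typescript
// Pure variant of shouldAnimate: flags are passed in instead of read
// from window/navigator, so the decision table can be checked anywhere.
function shouldAnimate({
  prefersReduced,
  isLowEnd,
  essential = false,
}: {
  prefersReduced: boolean
  isLowEnd: boolean
  essential?: boolean
}): boolean {
  if (prefersReduced) return false         // accessibility always wins
  if (!essential && isLowEnd) return false // drop non-essential work on weak hardware
  return true
}

console.log(shouldAnimate({ prefersReduced: true, isLowEnd: false }))                  // false
console.log(shouldAnimate({ prefersReduced: false, isLowEnd: true }))                  // false
console.log(shouldAnimate({ prefersReduced: false, isLowEnd: true, essential: true })) // true
console.log(shouldAnimate({ prefersReduced: false, isLowEnd: false }))                 // true
```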
### Accessibility
**Priority order (highest to lowest):**
1. `prefers-reduced-motion: reduce` — disables all transforms, limits opacity transitions to ≤ 0.2s
2. Low-end device detection — reduces duration, removes non-essential animations
3. Design preference — everything else
Motion must degrade gracefully. It must never disappear abruptly in a way
that causes layout shift or disorients the user.
```tsx
// hooks/use-reduced-motion.tsx
"use client"
import { useReducedMotion } from "motion/react"
export function useSafeMotion(fullY: number = 16) {
const reduce = useReducedMotion()
return {
initial: { opacity: 0, y: reduce ? 0 : fullY },
animate: { opacity: 1, y: 0 },
exit: { opacity: 0, y: reduce ? 0 : -fullY },
}
}
```
```css
/* globals.css */
@media (prefers-reduced-motion: reduce) {
.motion-safe-transition { transition: opacity 0.15s; }
.motion-reduce-transform { transform: none !important; }
}
```
```html
<!-- Tailwind -->
<div class="motion-safe:animate-fade motion-reduce:opacity-100"></div>
```
### SSR / hydration safety
**Rule: `initial` must always match what the server renders.**
```tsx
// WRONG — server renders opacity:1 but initial says 0 → hydration mismatch
<motion.div initial={{ opacity: 0 }} animate={{ opacity: 1 }} />
// CORRECT — defer the animated element to client mount
"use client"
const [mounted, setMounted] = useState(false)
useEffect(() => setMounted(true), [])
// Server and first client render agree (plain div). The motion element
// mounts fresh after hydration, so `initial: 0` never conflicts with
// server output. (Note: `initial` is only read when the motion component
// mounts — a ternary inside `initial` would latch and never animate.)
if (!mounted) return <div>{children}</div>
return <motion.div initial={{ opacity: 0 }} animate={{ opacity: 1 }} />
```
## Code Examples
### End-to-end: tokens + springs + accessibility + SSR guard
```tsx
// components/fade-in-card.tsx
"use client"
import { useState, useEffect } from "react"
import { motion } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
import { useSafeMotion } from "@/hooks/use-reduced-motion"
import { motionConfig } from "@/lib/motion-config"
interface FadeInCardProps {
children: React.ReactNode
delay?: number
}
export function FadeInCard({ children, delay = 0 }: FadeInCardProps) {
// SSR guard — initial must match server output (opacity: 1)
const [mounted, setMounted] = useState(false)
useEffect(() => setMounted(true), [])
// Accessibility — disables transform when reduced motion is preferred
const safeMotion = useSafeMotion(motionTokens.distance.md)
// Device gate — skip animation on low-end hardware
if (!motionConfig.shouldAnimate() || !mounted) {
return <div>{children}</div>
}
return (
<motion.div
initial={safeMotion.initial}
animate={safeMotion.animate}
exit={safeMotion.exit}
transition={{
...springs.gentle,
delay,
}}
whileHover={{ scale: motionTokens.scale.pop }}
whileTap={{ scale: motionTokens.scale.press }}
>
{children}
</motion.div>
)
}
```
## Constraints / Non-Goals
This skill does **not** cover:
- UI component patterns (button, modal, stagger) → see `motion-patterns`
- Drag, gestures, SVG, text animations, custom hooks → see `motion-advanced`
- CSS-only animations or Tailwind `animate-*` classes without `motion/react`
- Third-party animation libraries (GSAP, anime.js, etc.)
- Motion design decisions (when to animate, what to emphasize) — that is a design concern, not a code constraint
## Anti-Patterns
| Anti-pattern | Rule violated | Fix |
| --------------------------------------- | ------- | ------------------------------- |
| `import { motion } from "framer-motion"` | Rule 1 | Use `motion/react` |
| `initial={{ opacity: 0 }}` on SSR component | Rule 2 | Add mount guard |
| Skipping `useReducedMotion` check | Rule 3 | Use `useSafeMotion` hook |
| `animate={{ width: "100%" }}` | Rule 4 | Use `scaleX` transform instead |
| `transition={{ duration: 0.4 }}` inline | Rule 5 | Use `motionTokens.duration.normal` |
| `{ stiffness: 300, damping: 30 }` inline | Rule 6 | Use `springs.snappy` |
| Missing `"use client"` directive | Rule 7 | Add to top of file |
| `navigator.hardwareConcurrency` at module level | Rule 8 | Wrap in `typeof navigator !== "undefined"` |
## Related Skills
- **`motion-patterns`** — consumes tokens and springs defined here to build button, modal, stagger, page transition, and scroll patterns. Does not redefine any values.
- **`motion-advanced`** — consumes tokens and springs defined here for drag, SVG, text, and gesture patterns. Adds `useAnimate` sequences and custom hooks on top of this foundation.

View File

@@ -0,0 +1,435 @@
---
name: motion-patterns
description: Production-ready animation patterns for React / Next.js — button, modal, toast, stagger, page transitions, exit animations, scroll, and layout — built on motion-foundations tokens and springs.
version: 1.0
tags: [motion, animation, ui-patterns]
category: frontend
author: jeff
---
# Motion Patterns
Copy-paste patterns for the most common UI animation needs.
Every pattern here is built on `motion-foundations` tokens and springs.
Do not define new duration or easing values here — import them.
## When to Activate
- Animating a button, card, modal, or toast notification
- Building list entrances with stagger
- Setting up page transitions in Next.js App Router
- Adding entrance or exit animations to conditional content
- Implementing scroll-reveal, scroll-linked progress, or sticky story sections
- Building expanding cards, accordions, or shared-element transitions
## Outputs
This skill produces:
- Accessible, SSR-safe animation for all standard UI components
- `AnimatePresence`-wrapped conditional renders with correct exit behavior
- Page transition wrapper component for Next.js App Router
- Scroll-reveal and scroll-linked patterns using `useScroll` + `useTransform`
- Layout animation patterns (`layout`, `layoutId`) for expanding and crossfading elements
## Principles
- Every pattern imports from `motion-foundations`. No raw numbers.
- Every conditional render is wrapped in `AnimatePresence` with a `key`.
- Exit animations are always defined alongside enter animations — never as an afterthought.
- `layout` is used only for small, isolated shifts. Large subtrees get explicit transforms.
## Rules
1. **Always wrap conditional renders in `AnimatePresence` with a `key`** on the direct child. Without a key, exit animations never fire.
2. **Always define `exit` when defining `initial` + `animate`.** An animation without an exit is incomplete.
3. **Use `mode="wait"` on page transitions.** Enter must not start until exit completes.
4. **Never use `layout` on subtrees with more than ~5 children or deeply nested DOM.** Use explicit `x`/`y` transforms instead.
5. **Stagger interval must stay between `0.05s` and `0.10s`.** Below that range the motion feels mechanical; above it, sluggish.
6. **Modals must always include:** focus trap, Escape-key close, scroll lock, `role="dialog"`, `aria-modal="true"`.
7. **Scroll reveals use `viewport={{ once: true }}`.** Repeating on scroll-out is distracting, not informative.
8. **All token values are imported from `motion-foundations`.** No inline numbers.
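Rule 5 can be enforced mechanically rather than by review. A hypothetical `clampStagger` guard (not part of the skill) that pins any requested interval into the permitted band:

```typescript
// Clamp a stagger interval into the 0.05s to 0.10s band required by Rule 5.
function clampStagger(interval: number): number {
  return Math.min(0.1, Math.max(0.05, interval))
}

console.log(clampStagger(0.2))  // 0.1  (too sluggish, pulled down)
console.log(clampStagger(0.03)) // 0.05 (too mechanical, pulled up)
console.log(clampStagger(0.08)) // 0.08 (already in band)
```

A guard like this could sit beside the `springs` map so `staggerChildren` values are normalized at one choke point instead of audited per component.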
## Decision Guidance
### Choosing the right pattern
| Situation | Pattern |
| ---------------------------------------- | ---------------------- |
| Element appears / disappears | `AnimatePresence` |
| List of items loading in sequence | Stagger variants |
| Navigating between routes | Page transition wrapper|
| Element changes size in place | `layout` prop |
| Same element moves across page contexts | `layoutId` |
| Element enters when scrolled into view | `whileInView` |
| Value tied to scroll position | `useScroll` + `useTransform` |
### When to use `mode="wait"` vs `mode="sync"`
| Mode | Use when |
| ------- | --------------------------------------- |
| `wait` | Page transitions, content swaps (one at a time) |
| `sync` | Stacked notifications, list items (overlap is fine) |
| `popLayout` | Items removed from a list that reflows |
## Core Concepts
### AnimatePresence contract
Three things must always be true:
1. `AnimatePresence` wraps the conditional
2. The direct child has a `key`
3. The child has an `exit` prop
Miss any one of these and the exit animation silently fails.
### layout vs layoutId
- `layout` — animates the element's own size/position change in place
- `layoutId` — links two separate elements, crossfading between them across renders
Use `layout="position"` on text inside an expanding container to prevent text reflow from animating.
## Code Examples
### Button feedback
```tsx
"use client"
import { motion } from "motion/react"
import { springs, motionTokens } from "@/lib/motion-tokens"
<motion.button
whileHover={{ scale: motionTokens.scale.pop }}
whileTap={{ scale: motionTokens.scale.press }}
transition={springs.snappy}
/>
```
### Stagger list
```tsx
"use client"
import { motion } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
const container = {
hidden: {},
visible: {
transition: {
staggerChildren: 0.08, // within the 0.05–0.10 rule
delayChildren: 0.1,
},
},
}
const item = {
hidden: { opacity: 0, y: motionTokens.distance.md },
visible: { opacity: 1, y: 0, transition: springs.gentle },
}
<motion.ul variants={container} initial="hidden" animate="visible">
{items.map((i) => (
<motion.li key={i.id} variants={item} />
))}
</motion.ul>
```
### Modal
```tsx
"use client"
import { motion, AnimatePresence } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
// Wrap at the call site:
// <AnimatePresence>{isOpen && <Modal key="modal" />}</AnimatePresence>
export function Modal({ onClose }: { onClose: () => void }) {
return (
<>
{/* Overlay */}
<motion.div
className="fixed inset-0 bg-black/50"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
onClick={onClose}
/>
{/* Panel — accessibility requirements: focus trap, Escape close,
scroll lock, role="dialog", aria-modal="true" */}
<motion.div
role="dialog"
aria-modal="true"
className="fixed inset-x-4 top-1/2 -translate-y-1/2 rounded-xl bg-white p-6"
initial={{
opacity: 0,
scale: motionTokens.scale.press,
y: motionTokens.distance.sm,
}}
animate={{ opacity: 1, scale: 1, y: 0 }}
exit={{
opacity: 0,
scale: motionTokens.scale.press,
y: motionTokens.distance.sm,
}}
transition={springs.gentle}
/>
</>
)
}
```
### Toast stack
```tsx
"use client"
import { motion, AnimatePresence } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
<AnimatePresence mode="sync">
{toasts.map((t) => (
<motion.div
key={t.id}
layout
initial={{
opacity: 0,
x: motionTokens.distance.xl,
scale: motionTokens.scale.subtle,
}}
animate={{ opacity: 1, x: 0, scale: 1 }}
exit={{
opacity: 0,
x: motionTokens.distance.xl,
scale: motionTokens.scale.subtle,
}}
transition={springs.snappy}
/>
))}
</AnimatePresence>
```
### Page transition (Next.js App Router)
```tsx
// components/page-transition.tsx
"use client"
import { motion, AnimatePresence } from "motion/react"
import { usePathname } from "next/navigation"
import { motionTokens } from "@/lib/motion-tokens"
const variants = {
initial: { opacity: 0, y: motionTokens.distance.sm },
enter: { opacity: 1, y: 0 },
exit: { opacity: 0, y: -motionTokens.distance.sm },
}
export function PageTransition({ children }: { children: React.ReactNode }) {
const pathname = usePathname()
return (
<AnimatePresence mode="wait">
<motion.div
key={pathname}
variants={variants}
initial="initial"
animate="enter"
exit="exit"
transition={{
duration: motionTokens.duration.normal,
ease: motionTokens.easing.smooth,
}}
>
{children}
</motion.div>
</AnimatePresence>
)
}
```
### Scroll reveal
```tsx
"use client"
import { motion } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
<motion.div
initial={{ opacity: 0, y: motionTokens.distance.lg }}
whileInView={{ opacity: 1, y: 0 }}
viewport={{ once: true, margin: "-80px" }} // once: true — rule 7
transition={{ duration: motionTokens.duration.slow, ease: motionTokens.easing.smooth }}
/>
```
### Scroll progress bar
```tsx
"use client"
import { motion, useScroll } from "motion/react"
export function ScrollProgress() {
const { scrollYProgress } = useScroll()
return (
<motion.div
className="fixed top-0 left-0 h-1 bg-indigo-500 origin-left w-full"
style={{ scaleX: scrollYProgress }}
/>
)
}
```
### Expanding card
```tsx
"use client"
import { useState } from "react"
import { motion, AnimatePresence } from "motion/react"
import { springs, motionTokens } from "@/lib/motion-tokens"
export function ExpandingCard({ title, body }: { title: string; body: string }) {
const [expanded, setExpanded] = useState(false)
return (
<motion.div layout onClick={() => setExpanded(!expanded)} className="cursor-pointer">
{/* layout="position" prevents text reflow from animating */}
<motion.h2 layout="position" className="font-semibold">
{title}
</motion.h2>
<AnimatePresence>
{expanded && (
<motion.p
key="body"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
transition={{ duration: motionTokens.duration.fast }}
>
{body}
</motion.p>
)}
</AnimatePresence>
</motion.div>
)
}
```
### Shared-element crossfade
```tsx
// Source context
<motion.img layoutId="hero-image" src={src} className="w-16 h-16 rounded" />
// Destination context (same layoutId — motion handles the transition)
<motion.img layoutId="hero-image" src={src} className="w-full rounded-xl" />
```
### Accordion
```tsx
<motion.div
initial={false}
animate={{ opacity: open ? 1 : 0, scaleY: open ? 1 : 0 }}
style={{ transformOrigin: "top", overflow: "hidden" }}
transition={{
duration: motionTokens.duration.normal,
ease: motionTokens.easing.smooth,
}}
>
{children}
</motion.div>
```
## End-to-End Example
A staggered list that enters on mount, handles conditional presence, and
respects reduced motion — combining tokens, springs, AnimatePresence, and
the accessibility hook from `motion-foundations`:
```tsx
"use client"
import { useState } from "react"
import { motion, AnimatePresence } from "motion/react"
import { motionTokens, springs } from "@/lib/motion-tokens"
import { useSafeMotion } from "@/hooks/use-reduced-motion"
const containerVariants = {
hidden: {},
visible: {
transition: { staggerChildren: 0.08, delayChildren: 0.1 },
},
}
function ListItem({ label, onRemove }: { label: string; onRemove: () => void }) {
const safe = useSafeMotion(motionTokens.distance.sm)
return (
<motion.li
variants={{
hidden: safe.initial,
visible: safe.animate,
}}
exit={safe.exit}
transition={springs.gentle}
className="flex items-center justify-between p-3 rounded-lg bg-white shadow-sm"
>
<span>{label}</span>
<button onClick={onRemove}>Remove</button>
</motion.li>
)
}
export function AnimatedList({ items, onRemove }: {
items: { id: string; label: string }[]
onRemove: (id: string) => void
}) {
return (
<motion.ul
variants={containerVariants}
initial="hidden"
animate="visible"
className="space-y-2"
>
<AnimatePresence mode="popLayout">
{items.map((item) => (
<ListItem
key={item.id}
label={item.label}
onRemove={() => onRemove(item.id)}
/>
))}
</AnimatePresence>
</motion.ul>
)
}
```
## Constraints / Non-Goals
This skill does **not** cover:
- Token and spring definitions → see `motion-foundations`
- Drag interactions, swipe gestures, reorderable lists → see `motion-advanced`
- Text animations (word/character reveal, counters) → see `motion-advanced`
- SVG path drawing or morphing → see `motion-advanced`
- Custom animation hooks → see `motion-advanced`
- CSS-only transitions not using `motion/react`
## Anti-Patterns
| Anti-pattern | Rule violated | Fix |
| -------------------------------------------- | ------- | ------------------------------------------ |
| `AnimatePresence` child missing `key` | Rule 1 | Add stable `key` to the direct child |
| `initial` + `animate` without `exit` | Rule 2 | Always define all three together |
| Page transition without `mode="wait"` | Rule 3 | Add `mode="wait"` to `AnimatePresence` |
| `layout` on a 50-item list | Rule 4 | Use `mode="popLayout"` or explicit transforms |
| `staggerChildren: 0.2` on a 10-item list | Rule 5 | Keep within `0.05`–`0.10` |
| Modal without focus trap | Rule 6 | Add `focus-trap-react` or Radix Dialog |
| `whileInView` without `viewport={{ once: true }}` | Rule 7 | Add `viewport={{ once: true }}` |
| `transition={{ duration: 0.3 }}` inline | Rule 8 | Use `motionTokens.duration.normal` |
## Related Skills
- **`motion-foundations`** — defines all tokens, springs, the `useSafeMotion` hook, and SSR guards that every pattern here imports. Must be set up first.
- **`motion-advanced`** — extends these patterns with drag, gestures, SVG, text, custom hooks, and imperative sequencing. Does not redefine any patterns from this skill.

View File

@@ -50,6 +50,7 @@ const expectedReleaseFiles = [
'telegram-handoff.md',
'demo-prompts.md',
'quickstart.md',
'publication-readiness.md',
];
test('release candidate directory includes the public launch pack', () => {
@@ -175,6 +176,53 @@ test('launch checklist records the ecc2 alpha version policy', () => {
assert.ok(!launchChecklist.includes('confirm whether `ecc2/Cargo.toml` moves'));
});
test('publication readiness checklist gates public release actions on evidence', () => {
const source = read('docs/releases/2.0.0-rc.1/publication-readiness.md');
for (const section of [
'## Release Identity Matrix',
'## Publication Gates',
'## Required Command Evidence',
'## Do Not Publish If',
'## Announcement Order',
]) {
assert.ok(source.includes(section), `publication readiness missing ${section}`);
}
for (const field of [
'Fresh check',
'Evidence artifact',
'Owner',
'Status',
'Blocker field',
'Recorded output',
]) {
assert.ok(source.includes(field), `publication readiness missing ${field}`);
}
for (const surface of [
'GitHub release',
'npm package',
'Claude plugin',
'Codex plugin',
'OpenCode package',
'ECC Tools billing reference',
'Announcement copy',
]) {
assert.ok(source.includes(surface), `publication readiness missing ${surface}`);
}
});
test('release checklist and roadmap link to publication readiness evidence gate', () => {
const launchChecklist = read('docs/releases/2.0.0-rc.1/launch-checklist.md');
const roadmap = read('docs/ECC-2.0-GA-ROADMAP.md');
assert.ok(launchChecklist.includes('publication-readiness.md'));
assert.ok(launchChecklist.includes('fresh evidence'));
assert.ok(roadmap.includes('docs/releases/2.0.0-rc.1/publication-readiness.md'));
assert.ok(roadmap.includes('npm dist-tag'));
});
test('localized changelogs include rc.1 and 1.10.0 release entries', () => {
for (const relativePath of ['docs/tr/CHANGELOG.md', 'docs/zh-CN/CHANGELOG.md']) {
const source = read(relativePath);

View File

@@ -0,0 +1,146 @@
'use strict';
const assert = require('assert');
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');
const {
ADAPTER_RECORDS,
extractMatrixBlock,
renderMarkdownTable,
validateAdapterRecords,
} = require('../../scripts/lib/harness-adapter-compliance');
const repoRoot = path.resolve(__dirname, '..', '..');
const scriptPath = path.join(repoRoot, 'scripts', 'harness-adapter-compliance.js');
let passed = 0;
let failed = 0;
function test(name, fn) {
try {
fn();
console.log(`✓ ${name}`);
passed++;
} catch (error) {
console.log(`✗ ${name}`);
console.log(` Error: ${error.message}`);
failed++;
}
}
function read(relativePath) {
return fs.readFileSync(path.join(repoRoot, relativePath), 'utf8');
}
console.log('\n=== Testing harness adapter compliance docs ===\n');
test('adapter compliance matrix covers the required harness surfaces', () => {
const source = read('docs/architecture/harness-adapter-compliance.md');
for (const harness of [
'Claude Code',
'Codex',
'OpenCode',
'Cursor',
'Gemini',
'Zed-adjacent',
'dmux',
'Orca',
'Superset',
'Ghast',
'Terminal-only'
]) {
assert.ok(source.includes(harness), `Expected matrix to include ${harness}`);
}
});
test('adapter compliance source data validates required evidence fields', () => {
assert.deepStrictEqual(validateAdapterRecords(), []);
for (const record of ADAPTER_RECORDS) {
assert.ok(record.install_or_onramp.length > 0, `${record.id} needs an install or onramp`);
assert.ok(record.verification_commands.length > 0, `${record.id} needs verification commands`);
assert.ok(record.risk_notes.length > 0, `${record.id} needs risk notes`);
assert.ok(record.source_docs.length > 0, `${record.id} needs source docs`);
}
});
test('adapter compliance matrix is generated from source data', () => {
const source = read('docs/architecture/harness-adapter-compliance.md');
assert.strictEqual(extractMatrixBlock(source), renderMarkdownTable());
});
test('adapter compliance matrix extraction tolerates Windows line endings', () => {
const source = read('docs/architecture/harness-adapter-compliance.md')
.replace(/\r\n/g, '\n')
.replace(/\n/g, '\r\n');
assert.strictEqual(extractMatrixBlock(source), renderMarkdownTable());
});
test('adapter compliance matrix includes the required evidence columns', () => {
const source = read('docs/architecture/harness-adapter-compliance.md');
for (const heading of [
'Supported assets',
'Unsupported or different surfaces',
'Install or onramp',
'Verification command',
'Risk notes'
]) {
assert.ok(source.includes(heading), `Expected matrix to include ${heading}`);
}
});
test('scorecard onramp names the local verification commands', () => {
const source = read('docs/architecture/harness-adapter-compliance.md');
for (const command of [
'npm run harness:adapters -- --check',
'npm run harness:audit -- --format json',
'npm run observability:ready',
'node scripts/session-inspect.js --list-adapters',
'node scripts/loop-status.js --json --write-dir .ecc/loop-status'
]) {
assert.ok(source.includes(command), `Expected onramp to include ${command}`);
}
});
test('adapter compliance CLI check passes against the committed doc', () => {
const output = execFileSync('node', [scriptPath, '--check'], {
cwd: repoRoot,
encoding: 'utf8',
});
assert.ok(output.includes('Harness Adapter Compliance: PASS'));
assert.ok(output.includes(`Adapters: ${ADAPTER_RECORDS.length}`));
});
test('adapter compliance CLI emits machine-readable scorecard data', () => {
const output = execFileSync('node', [scriptPath, '--format=json'], {
cwd: repoRoot,
encoding: 'utf8',
});
const parsed = JSON.parse(output);
assert.strictEqual(parsed.schema_version, 'ecc.harness-adapter-compliance.v1');
assert.strictEqual(parsed.valid, true);
assert.strictEqual(parsed.adapter_count, ADAPTER_RECORDS.length);
assert.ok(parsed.adapters.some(record => record.id === 'terminal-only'));
});
test('cross-harness architecture links to the adapter compliance matrix', () => {
const source = read('docs/architecture/cross-harness.md');
assert.ok(source.includes('harness-adapter-compliance.md'));
});
test('GA roadmap records the matrix and validator as current evidence', () => {
const source = read('docs/ECC-2.0-GA-ROADMAP.md');
assert.ok(source.includes('docs/architecture/harness-adapter-compliance.md'));
assert.ok(source.includes('npm run harness:adapters -- --check'));
assert.ok(source.includes('scripts/lib/harness-adapter-compliance.js'));
});
if (failed > 0) {
console.log(`\nFailed: ${failed}`);
process.exit(1);
}
console.log(`\nPassed: ${passed}`);

View File

@@ -0,0 +1,144 @@
'use strict';
const assert = require('assert');
const fs = require('fs');
const path = require('path');
const repoRoot = path.resolve(__dirname, '..', '..');
const legacyShimsDir = path.join(repoRoot, 'legacy-command-shims', 'commands');
let passed = 0;
let failed = 0;
function test(name, fn) {
try {
fn();
console.log(`✓ ${name}`);
passed++;
} catch (error) {
console.log(`✗ ${name}`);
console.log(` Error: ${error.message}`);
failed++;
}
}
function read(relativePath) {
return fs.readFileSync(path.join(repoRoot, relativePath), 'utf8');
}
function findLegacyDocumentDirs(dir) {
const results = [];
for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
if (entry.name === 'node_modules' || entry.name === '.git') {
continue;
}
const nextPath = path.join(dir, entry.name);
if (!entry.isDirectory()) {
continue;
}
if (entry.name.startsWith('_legacy-documents-')) {
results.push(path.relative(repoRoot, nextPath));
}
results.push(...findLegacyDocumentDirs(nextPath));
}
return results.sort();
}
console.log('\n=== Testing legacy artifact inventory ===\n');
test('legacy artifact inventory documents classification states', () => {
const source = read('docs/legacy-artifact-inventory.md');
for (const state of [
'Landed',
'Milestone-tracked',
'Salvage branch',
'Translator/manual review',
'Archive/no-action',
]) {
assert.ok(source.includes(state), `Missing classification state ${state}`);
}
});
test('any _legacy-documents directories are explicitly inventoried', () => {
const source = read('docs/legacy-artifact-inventory.md');
const dirs = findLegacyDocumentDirs(repoRoot);
for (const dir of dirs) {
assert.ok(source.includes(dir), `Missing legacy artifact inventory row for ${dir}`);
}
});
test('workspace-level legacy repos are inventoried without personal paths', () => {
const source = read('docs/legacy-artifact-inventory.md');
for (const dir of [
'../_legacy-documents-ecc-context-2026-04-30',
'../_legacy-documents-ecc-everything-claude-code-2026-04-30',
]) {
assert.ok(source.includes(dir), `Missing workspace legacy repo ${dir}`);
}
assert.ok(source.includes('Workspace-Level Legacy Repos'));
assert.ok(!source.includes('/Users/'), 'Inventory should avoid machine-local absolute paths');
});
test('workspace legacy import rules block raw private context', () => {
const source = read('docs/legacy-artifact-inventory.md');
for (const required of [
'Do not read, print, stage, or copy `.env` files',
'tokens',
'OAuth secrets',
'personal paths',
'private operator context',
'Do not import raw marketing drafts',
'public-safe ideas',
]) {
assert.ok(source.includes(required), `Missing import guardrail: ${required}`);
}
});
test('legacy command shims remain classified as an opt-in archive', () => {
const source = read('docs/legacy-artifact-inventory.md');
const readme = read('legacy-command-shims/README.md');
assert.ok(source.includes('legacy-command-shims/'));
assert.ok(source.includes('Archive/no-action'));
assert.ok(readme.includes('no longer loaded by the default plugin command surface'));
assert.ok(readme.includes('short-term migration compatibility'));
});
test('legacy command shim table tracks the current archive contents', () => {
const source = read('docs/legacy-artifact-inventory.md');
const shims = fs.readdirSync(legacyShimsDir)
.filter(fileName => fileName.endsWith('.md'))
.sort();
assert.strictEqual(shims.length, 12);
for (const shim of shims) {
assert.ok(source.includes(`\`${shim}\``), `Missing legacy shim ${shim}`);
}
});
test('stale salvage backlog records the remaining manual-review tail', () => {
const source = read('docs/legacy-artifact-inventory.md');
assert.ok(source.includes('#1687 zh-CN localization tail'));
assert.ok(source.includes('Translator/manual review'));
assert.ok(source.includes('#1746-#1752'));
});
if (failed > 0) {
console.log(`\nFailed: ${failed}`);
process.exit(1);
}
console.log(`\nPassed: ${passed}`);

View File

@@ -0,0 +1,97 @@
'use strict';
const assert = require('assert');
const fs = require('fs');
const path = require('path');
const repoRoot = path.resolve(__dirname, '..', '..');
let passed = 0;
let failed = 0;
function test(name, fn) {
try {
fn();
console.log(`✓ ${name}`);
passed++;
} catch (error) {
console.log(`✗ ${name}`);
console.log(` Error: ${error.message}`);
failed++;
}
}
function read(relativePath) {
return fs.readFileSync(path.join(repoRoot, relativePath), 'utf8');
}
console.log('\n=== Testing stale PR salvage ledger ===\n');
test('stale PR salvage ledger defines every disposition state', () => {
  const source = read('docs/stale-pr-salvage-ledger.md');
  for (const state of [
    'Salvaged',
    'Already present',
    'Superseded',
    'Skipped',
    'Translator/manual review',
  ]) {
    assert.ok(source.includes(state), `Missing salvage state ${state}`);
  }
});
test('stale PR salvage ledger preserves representative source attribution', () => {
  const source = read('docs/stale-pr-salvage-ledger.md');
  for (const pr of [
    '#1309',
    '#1322',
    '#1326',
    '#1413',
    '#1493',
    '#1528/#1529/#1547',
    '#1674',
    '#1687',
    '#1705/#1780',
    '#1757',
  ]) {
    assert.ok(source.includes(pr), `Missing source PR attribution for ${pr}`);
  }
});
test('stale PR salvage ledger records skipped junk and superseded work', () => {
  const source = read('docs/stale-pr-salvage-ledger.md');
  for (const pr of ['#1306', '#1337', '#1341', '#1416/#1465', '#1475']) {
    assert.ok(source.includes(pr), `Missing skipped or superseded PR ${pr}`);
  }
  assert.ok(source.includes('Accidental fork-sync PRs'));
  assert.ok(source.includes('too low-signal'));
});
test('stale PR salvage ledger keeps the zh-CN tail manual-review only', () => {
  const source = read('docs/stale-pr-salvage-ledger.md');
  assert.ok(source.includes('Only the #1687 localization tail remains'));
  assert.ok(source.includes('translator/manual review'));
  assert.ok(source.includes('Do not import stale top-level docs'));
});
test('legacy inventory and roadmap link to the durable salvage ledger', () => {
  const inventory = read('docs/legacy-artifact-inventory.md');
  const roadmap = read('docs/ECC-2.0-GA-ROADMAP.md');
  assert.ok(inventory.includes('docs/stale-pr-salvage-ledger.md'));
  assert.ok(roadmap.includes('docs/stale-pr-salvage-ledger.md'));
  assert.ok(roadmap.includes('#1687 translator/manual'));
});
if (failed > 0) {
  console.log(`\nFailed: ${failed}`);
  process.exit(1);
}
console.log(`\nPassed: ${passed}`);

View File

@@ -20,33 +20,38 @@ function cleanup(dirPath) {
   fs.rmSync(dirPath, { recursive: true, force: true });
 }
+function shellQuote(value) {
+  return `'${String(value).replace(/'/g, "'\\''")}'`;
+}
 function writeFakePython(binDir) {
   fs.mkdirSync(binDir, { recursive: true });
+  const fakePythonJs = path.join(binDir, 'fake-python.js');
+  fs.writeFileSync(fakePythonJs, [
+    "'use strict';",
+    "const fs = require('fs');",
+    "const mode = process.env.FAKE_INSAITS_MODE || 'clean';",
+    "if (mode === 'clean') {",
+    " fs.readFileSync(0, 'utf8');",
+    " process.exit(0);",
+    "}",
+    "if (mode === 'echo') {",
+    " process.stdout.write(fs.readFileSync(0, 'utf8'));",
+    " process.exit(0);",
+    "}",
+    "if (mode === 'block') {",
+    " process.stdout.write('blocked by monitor\\n');",
+    " process.stderr.write('monitor warning\\n');",
+    " process.exit(2);",
+    "}",
+    "if (mode === 'error') {",
+    " process.stderr.write('spawned but failed\\n');",
+    " process.exit(1);",
+    "}",
+  ].join('\n'), 'utf8');
   if (process.platform === 'win32') {
-    const fakePythonJs = path.join(binDir, 'fake-python.js');
     const fakePythonCmd = path.join(binDir, 'python3.cmd');
-    fs.writeFileSync(fakePythonJs, [
-      "'use strict';",
-      "const fs = require('fs');",
-      "const mode = process.env.FAKE_INSAITS_MODE || 'clean';",
-      "if (mode === 'clean') {",
-      " fs.readFileSync(0, 'utf8');",
-      " process.exit(0);",
-      "}",
-      "if (mode === 'echo') {",
-      " process.stdout.write(fs.readFileSync(0, 'utf8'));",
-      " process.exit(0);",
-      "}",
-      "if (mode === 'block') {",
-      " process.stdout.write('blocked by monitor\\n');",
-      " process.stderr.write('monitor warning\\n');",
-      " process.exit(2);",
-      "}",
-      "if (mode === 'error') {",
-      " process.stderr.write('spawned but failed\\n');",
-      " process.exit(1);",
-      "}",
-    ].join('\n'), 'utf8');
     fs.writeFileSync(fakePythonCmd, [
       '@echo off',
       `"${process.execPath}" "%~dp0fake-python.js" %*`,
@@ -57,26 +62,7 @@ function writeFakePython(binDir) {
   const fakePython = path.join(binDir, 'python3');
   fs.writeFileSync(fakePython, [
     '#!/bin/sh',
-    'mode="${FAKE_INSAITS_MODE:-clean}"',
-    'case "$mode" in',
-    ' clean)',
-    ' cat >/dev/null',
-    ' exit 0',
-    ' ;;',
-    ' echo)',
-    ' cat',
-    ' exit 0',
-    ' ;;',
-    ' block)',
-    ' printf "blocked by monitor\\n"',
-    ' printf "monitor warning\\n" >&2',
-    ' exit 2',
-    ' ;;',
-    ' error)',
-    ' printf "spawned but failed\\n" >&2',
-    ' exit 1',
-    ' ;;',
-    'esac',
+    `exec ${shellQuote(process.execPath)} ${shellQuote(fakePythonJs)} "$@"`,
   ].join('\n'), 'utf8');
   fs.chmodSync(fakePython, 0o755);
 }

View File

@@ -568,7 +568,7 @@ async function runTests() {
         CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
         ECC_MCP_CONFIG_PATH: configPath,
         ECC_MCP_HEALTH_STATE_PATH: statePath,
-        ECC_MCP_HEALTH_TIMEOUT_MS: process.platform === 'win32' ? '1000' : '100'
+        ECC_MCP_HEALTH_TIMEOUT_MS: '1000'
       }
     );

View File

@@ -56,6 +56,7 @@ function buildExpectedPublishPaths(repoRoot) {
     "scripts/observability-readiness.js",
     "scripts/skill-create-output.js",
     "scripts/repair.js",
+    "scripts/harness-adapter-compliance.js",
     "scripts/harness-audit.js",
     "scripts/session-inspect.js",
     "scripts/uninstall.js",