From 975100db55398d247af5ba1b89dcd7225e149d50 Mon Sep 17 00:00:00 2001 From: Affaan Mustafa Date: Wed, 1 Apr 2026 02:25:42 -0700 Subject: [PATCH] refactor: collapse legacy command bodies into skills --- AGENTS.md | 4 +- README.md | 6 +- README.zh-CN.md | 2 +- WORKING-CONTEXT.md | 4 +- commands/claw.md | 54 +--- commands/context-budget.md | 30 +-- commands/devfleet.md | 95 +------ commands/docs.md | 34 +-- commands/e2e.md | 123 +-------- commands/eval.md | 129 ++------- commands/orchestrate.md | 124 +-------- commands/prompt-optimize.md | 41 +-- commands/rules-distill.md | 19 +- commands/tdd.md | 123 +-------- commands/verify.md | 68 ++--- docs/zh-CN/AGENTS.md | 4 +- docs/zh-CN/README.md | 6 +- manifests/install-modules.json | 1 + skills/ui-demo/SKILL.md | 465 +++++++++++++++++++++++++++++++++ 19 files changed, 630 insertions(+), 702 deletions(-) create mode 100644 skills/ui-demo/SKILL.md diff --git a/AGENTS.md b/AGENTS.md index e2a98807..3fca216e 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -1,6 +1,6 @@ # Everything Claude Code (ECC) — Agent Instructions -This is a **production-ready AI coding plugin** providing 36 specialized agents, 142 skills, 68 commands, and automated hook workflows for software development. +This is a **production-ready AI coding plugin** providing 36 specialized agents, 143 skills, 68 commands, and automated hook workflows for software development. **Version:** 1.9.0 @@ -146,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat ``` agents/ — 36 specialized subagents -skills/ — 142 workflow skills and domain knowledge +skills/ — 143 workflow skills and domain knowledge commands/ — 68 slash commands hooks/ — Trigger-based automations rules/ — Always-follow guidelines (common + per-language) diff --git a/README.md b/README.md index 5a6aaf04..68e91c77 100644 --- a/README.md +++ b/README.md @@ -225,7 +225,7 @@ For manual install instructions see the README in the `rules/` folder. 
When copy /plugin list everything-claude-code@everything-claude-code ``` -**That's it!** You now have access to 36 agents, 142 skills, and 68 legacy command shims. +**That's it!** You now have access to 36 agents, 143 skills, and 68 legacy command shims. ### Multi-model commands require additional setup @@ -1120,7 +1120,7 @@ The configuration is automatically detected from `.opencode/opencode.json`. |---------|-------------|----------|--------| | Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** | | Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** | -| Skills | PASS: 142 skills | PASS: 37 skills | **Claude Code leads** | +| Skills | PASS: 143 skills | PASS: 37 skills | **Claude Code leads** | | Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** | | Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** | | MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** | @@ -1229,7 +1229,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. 
Here's how e |---------|------------|------------|-----------|----------| | **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 | | **Commands** | 68 | Shared | Instruction-based | 31 | -| **Skills** | 142 | Shared | 10 (native format) | 37 | +| **Skills** | 143 | Shared | 10 (native format) | 37 | | **Hook Events** | 8 types | 15 types | None yet | 11 types | | **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks | | **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions | diff --git a/README.zh-CN.md b/README.zh-CN.md index 6d7047b7..4c116bef 100644 --- a/README.zh-CN.md +++ b/README.zh-CN.md @@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/ /plugin list everything-claude-code@everything-claude-code ``` -**完成!** 你现在可以使用 36 个代理、142 个技能和 68 个命令。 +**完成!** 你现在可以使用 36 个代理、143 个技能和 68 个命令。 ### multi-* 命令需要额外配置 diff --git a/WORKING-CONTEXT.md b/WORKING-CONTEXT.md index c74f5a85..fd255ea5 100644 --- a/WORKING-CONTEXT.md +++ b/WORKING-CONTEXT.md @@ -57,9 +57,9 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa - `#1043` C# reviewer and .NET skills - Direct-port candidates landed after audit: - `#1078` hook-id dedupe for managed Claude hook reinstalls + - `#844` ui-demo skill - Port or rebuild inside ECC after full audit: - `#894` Jira integration - - `#844` ui-demo skill - `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces ## Interfaces @@ -93,3 +93,5 @@ Keep this file detailed for only the current sprint, blockers, and next actions. - 2026-04-01: Core English repo surfaces were shifted to a skills-first posture. README, AGENTS, plugin metadata, and contributor instructions now treat `skills/` as canonical and `commands/` as legacy slash-entry compatibility during migration. 
- 2026-04-01: Follow-up bundle cleanup closed `#1080` and `#1079`, which were generated `.claude/` bundle PRs duplicating command-first scaffolding instead of shipping canonical ECC source changes. - 2026-04-01: Ported the useful core of `#1078` directly into `main`, but tightened the implementation so legacy no-id hook installs deduplicate cleanly on the first reinstall instead of the second. Added stable hook ids to `hooks/hooks.json`, semantic fallback aliases in `mergeHookEntries()`, and a regression test covering upgrade from pre-id settings. +- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now hold the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification. +- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale. diff --git a/commands/claw.md b/commands/claw.md index ebc25ba6..6086abdf 100644 --- a/commands/claw.md +++ b/commands/claw.md @@ -1,51 +1,23 @@ --- -description: Start NanoClaw v2 — ECC's persistent, zero-dependency REPL with model routing, skill hot-load, branching, compaction, export, and metrics. +description: Legacy slash-entry shim for the nanoclaw-repl skill. Prefer the skill directly. --- -# Claw Command +# Claw Command (Legacy Shim) -Start an interactive AI agent session with persistent markdown history and operational controls. +Use this only if you still reach for `/claw` from muscle memory. The maintained implementation lives in `skills/nanoclaw-repl/SKILL.md`. -## Usage +## Canonical Surface -```bash -node scripts/claw.js -``` +- Prefer the `nanoclaw-repl` skill directly. +- Keep this file only as a compatibility entry point while command-first usage is retired. 
-Or via npm: +## Arguments -```bash -npm run claw -``` +`$ARGUMENTS` -## Environment Variables +## Delegation -| Variable | Default | Description | -|----------|---------|-------------| -| `CLAW_SESSION` | `default` | Session name (alphanumeric + hyphens) | -| `CLAW_SKILLS` | *(empty)* | Comma-separated skills loaded at startup | -| `CLAW_MODEL` | `sonnet` | Default model for the session | - -## REPL Commands - -```text -/help Show help -/clear Clear current session history -/history Print full conversation history -/sessions List saved sessions -/model [name] Show/set model -/load Hot-load a skill into context -/branch Branch current session -/search Search query across sessions -/compact Compact old turns, keep recent context -/export [path] Export session -/metrics Show session metrics -exit Quit -``` - -## Notes - -- NanoClaw remains zero-dependency. -- Sessions are stored at `~/.claude/claw/.md`. -- Compaction keeps the most recent turns and writes a compaction header. -- Export supports markdown, JSON turns, and plain text. +Apply the `nanoclaw-repl` skill and keep the response focused on operating or extending `scripts/claw.js`. +- If the user wants to run it, use `node scripts/claw.js` or `npm run claw`. +- If the user wants to extend it, preserve the zero-dependency and markdown-backed session model. +- If the request is really about long-running orchestration rather than NanoClaw itself, redirect to `dmux-workflows` or `autonomous-agent-harness`. diff --git a/commands/context-budget.md b/commands/context-budget.md index 30ec234b..871b7d58 100644 --- a/commands/context-budget.md +++ b/commands/context-budget.md @@ -1,29 +1,23 @@ --- -description: Analyze context window usage across agents, skills, MCP servers, and rules to find optimization opportunities. Helps reduce token overhead and avoid performance warnings. +description: Legacy slash-entry shim for the context-budget skill. Prefer the skill directly. 
--- -# Context Budget Optimizer +# Context Budget Optimizer (Legacy Shim) -Analyze your Claude Code setup's context window consumption and produce actionable recommendations to reduce token overhead. +Use this only if you still invoke `/context-budget`. The maintained workflow lives in `skills/context-budget/SKILL.md`. -## Usage +## Canonical Surface -``` -/context-budget [--verbose] -``` +- Prefer the `context-budget` skill directly. +- Keep this file only as a compatibility entry point. -- Default: summary with top recommendations -- `--verbose`: full breakdown per component +## Arguments $ARGUMENTS -## What to Do +## Delegation -Run the **context-budget** skill (`skills/context-budget/SKILL.md`) with the following inputs: - -1. Pass `--verbose` flag if present in `$ARGUMENTS` -2. Assume a 200K context window (Claude Sonnet default) unless the user specifies otherwise -3. Follow the skill's four phases: Inventory → Classify → Detect Issues → Report -4. Output the formatted Context Budget Report to the user - -The skill handles all scanning logic, token estimation, issue detection, and report formatting. +Apply the `context-budget` skill. +- Pass through `--verbose` if the user supplied it. +- Assume a 200K context window unless the user specified otherwise. +- Return the skill's inventory, issue detection, and prioritized savings report without re-implementing the scan here. diff --git a/commands/devfleet.md b/commands/devfleet.md index 7dbef64b..c9abd7b2 100644 --- a/commands/devfleet.md +++ b/commands/devfleet.md @@ -1,92 +1,23 @@ --- -description: Orchestrate parallel Claude Code agents via Claude DevFleet — plan projects from natural language, dispatch agents in isolated worktrees, monitor progress, and read structured reports. +description: Legacy slash-entry shim for the claude-devfleet skill. Prefer the skill directly. --- -# DevFleet — Multi-Agent Orchestration +# DevFleet (Legacy Shim) -Orchestrate parallel Claude Code agents via Claude DevFleet. 
Each agent runs in an isolated git worktree with full tooling. +Use this only if you still call `/devfleet`. The maintained workflow lives in `skills/claude-devfleet/SKILL.md`. -Requires the DevFleet MCP server: `claude mcp add devfleet --transport http http://localhost:18801/mcp` +## Canonical Surface -## Flow +- Prefer the `claude-devfleet` skill directly. +- Keep this file only as a compatibility entry point while command-first usage is retired. -``` -User describes project - → plan_project(prompt) → mission DAG with dependencies - → Show plan, get approval - → dispatch_mission(M1) → Agent spawns in worktree - → M1 completes → auto-merge → M2 auto-dispatches (depends_on M1) - → M2 completes → auto-merge - → get_report(M2) → files_changed, what_done, errors, next_steps - → Report summary to user -``` +## Arguments -## Workflow +`$ARGUMENTS` -1. **Plan the project** from the user's description: +## Delegation -``` -mcp__devfleet__plan_project(prompt="") -``` - -This returns a project with chained missions. Show the user: -- Project name and ID -- Each mission: title, type, dependencies -- The dependency DAG (which missions block which) - -2. **Wait for user approval** before dispatching. Show the plan clearly. - -3. **Dispatch the first mission** (the one with empty `depends_on`): - -``` -mcp__devfleet__dispatch_mission(mission_id="") -``` - -The remaining missions auto-dispatch as their dependencies complete (because `plan_project` creates them with `auto_dispatch=true`). When manually creating missions with `create_mission`, you must explicitly set `auto_dispatch=true` for this behavior. - -4. **Monitor progress** — check what's running: - -``` -mcp__devfleet__get_dashboard() -``` - -Or check a specific mission: - -``` -mcp__devfleet__get_mission_status(mission_id="") -``` - -Prefer polling with `get_mission_status` over `wait_for_mission` for long-running missions, so the user sees progress updates. - -5. 
**Read the report** for each completed mission: - -``` -mcp__devfleet__get_report(mission_id="") -``` - -Call this for every mission that reached a terminal state. Reports contain: files_changed, what_done, what_open, what_tested, what_untested, next_steps, errors_encountered. - -## All Available Tools - -| Tool | Purpose | -|------|---------| -| `plan_project(prompt)` | AI breaks description into chained missions with `auto_dispatch=true` | -| `create_project(name, path?, description?)` | Create a project manually, returns `project_id` | -| `create_mission(project_id, title, prompt, depends_on?, auto_dispatch?)` | Add a mission. `depends_on` is a list of mission ID strings. | -| `dispatch_mission(mission_id, model?, max_turns?)` | Start an agent | -| `cancel_mission(mission_id)` | Stop a running agent | -| `wait_for_mission(mission_id, timeout_seconds?)` | Block until done (prefer polling for long tasks) | -| `get_mission_status(mission_id)` | Check progress without blocking | -| `get_report(mission_id)` | Read structured report | -| `get_dashboard()` | System overview | -| `list_projects()` | Browse projects | -| `list_missions(project_id, status?)` | List missions | - -## Guidelines - -- Always confirm the plan before dispatching unless the user said "go ahead" -- Include mission titles and IDs when reporting status -- If a mission fails, read its report to understand errors before retrying -- Agent concurrency is configurable (default: 3). Excess missions queue and auto-dispatch as slots free up. Check `get_dashboard()` for slot availability. -- Dependencies form a DAG — never create circular dependencies -- Each agent auto-merges its worktree on completion. If a merge conflict occurs, the changes remain on the worktree branch for manual resolution. +Apply the `claude-devfleet` skill. +- Plan from the user's description, show the DAG, and get approval before dispatch unless the user already said to proceed. 
+- Prefer polling status over blocking waits for long missions. +- Report mission IDs, files changed, failures, and next steps from structured mission reports. diff --git a/commands/docs.md b/commands/docs.md index 398b360e..42d8a871 100644 --- a/commands/docs.md +++ b/commands/docs.md @@ -1,31 +1,23 @@ --- -description: Look up current documentation for a library or topic via Context7. +description: Legacy slash-entry shim for the documentation-lookup skill. Prefer the skill directly. --- -# /docs +# Docs Command (Legacy Shim) -## Purpose +Use this only if you still reach for `/docs`. The maintained workflow lives in `skills/documentation-lookup/SKILL.md`. -Look up up-to-date documentation for a library, framework, or API and return a summarized answer with relevant code snippets. Uses the Context7 MCP (resolve-library-id and query-docs) so answers reflect current docs, not training data. +## Canonical Surface -## Usage +- Prefer the `documentation-lookup` skill directly. +- Keep this file only as a compatibility entry point. -``` -/docs [library name] [question] -``` +## Arguments -Use quotes for multi-word arguments so they are parsed as a single token. Example: `/docs "Next.js" "How do I configure middleware?"` +`$ARGUMENTS` -If library or question is omitted, prompt the user for: -1. The library or product name (e.g. Next.js, Prisma, Supabase). -2. The specific question or task (e.g. "How do I set up middleware?", "Auth methods"). +## Delegation -## Workflow - -1. **Resolve library ID** — Call the Context7 tool `resolve-library-id` with the library name and the user's question to get a Context7-compatible library ID (e.g. `/vercel/next.js`). -2. **Query docs** — Call `query-docs` with that library ID and the user's question. -3. **Summarize** — Return a concise answer and include relevant code examples from the fetched documentation. Mention the library (and version if relevant). 
- -## Output - -The user receives a short, accurate answer backed by current docs, plus any code snippets that help. If Context7 is not available, say so and answer from training data with a note that docs may be outdated. +Apply the `documentation-lookup` skill. +- If the library or the question is missing, ask for the missing part. +- Use live documentation through Context7 instead of training data. +- Return only the current answer and the minimum code/example surface needed. diff --git a/commands/e2e.md b/commands/e2e.md index b83b1644..4fd5ad16 100644 --- a/commands/e2e.md +++ b/commands/e2e.md @@ -1,123 +1,26 @@ --- -description: Generate and run end-to-end tests with Playwright. Creates test journeys, runs tests, captures screenshots/videos/traces, and uploads artifacts. +description: Legacy slash-entry shim for the e2e-testing skill. Prefer the skill directly. --- -# E2E Command +# E2E Command (Legacy Shim) -This command invokes the **e2e-runner** agent to generate, maintain, and execute end-to-end tests using Playwright. +Use this only if you still invoke `/e2e`. The maintained workflow lives in `skills/e2e-testing/SKILL.md`. -## What This Command Does +## Canonical Surface -1. **Generate Test Journeys** - Create Playwright tests for user flows -2. **Run E2E Tests** - Execute tests across browsers -3. **Capture Artifacts** - Screenshots, videos, traces on failures -4. **Upload Results** - HTML reports and JUnit XML -5. **Identify Flaky Tests** - Quarantine unstable tests +- Prefer the `e2e-testing` skill directly. +- Keep this file only as a compatibility entry point. -## When to Use +## Arguments -Use `/e2e` when: -- Testing critical user journeys (login, trading, payments) -- Verifying multi-step flows work end-to-end -- Testing UI interactions and navigation -- Validating integration between frontend and backend -- Preparing for production deployment +`$ARGUMENTS` -## How It Works +## Delegation -The e2e-runner agent will: - -1. 
**Analyze user flow** and identify test scenarios -2. **Generate Playwright test** using Page Object Model pattern -3. **Run tests** across multiple browsers (Chrome, Firefox, Safari) -4. **Capture failures** with screenshots, videos, and traces -5. **Generate report** with results and artifacts -6. **Identify flaky tests** and recommend fixes - -## Example Usage - -``` -User: /e2e Test the market search and view flow - -Agent (e2e-runner): -# E2E Test Generation: Market Search and View Flow - -## Test Scenario Identified - -**User Journey:** Market Search → View Results → Click Market → View Details - -**Test Coverage:** -1. Navigate to markets page -2. Perform semantic search -3. Verify search results -4. Click on first result -5. Verify market details page loads -6. Verify chart renders - -## Generated Test Code - -```typescript -// tests/e2e/markets/search-and-view.spec.ts -import { test, expect } from '@playwright/test' -import { MarketsPage } from '../../pages/MarketsPage' -import { MarketDetailsPage } from '../../pages/MarketDetailsPage' - -test.describe('Market Search and View Flow', () => { - test('user can search markets and view details', async ({ page }) => { - // 1. Navigate to markets page - const marketsPage = new MarketsPage(page) - await marketsPage.goto() - - // Verify page loaded - await expect(page).toHaveTitle(/Markets/) - await expect(page.locator('h1')).toContainText('Markets') - - // 2. Perform semantic search - await marketsPage.searchMarkets('election') - - // Wait for API response - await page.waitForResponse(resp => - resp.url().includes('/api/markets/search') && resp.status() === 200 - ) - - // 3. Verify search results - const marketCards = marketsPage.marketCards - await expect(marketCards.first()).toBeVisible() - const resultCount = await marketCards.count() - expect(resultCount).toBeGreaterThan(0) - - // Take screenshot of search results - await page.screenshot({ path: 'artifacts/search-results.png' }) - - // 4. 
Click on first result - const firstMarketTitle = await marketCards.first().textContent() - await marketCards.first().click() - - // 5. Verify market details page loads - await expect(page).toHaveURL(/\/markets\/[a-z0-9-]+/) - - const detailsPage = new MarketDetailsPage(page) - await expect(detailsPage.marketName).toBeVisible() - await expect(detailsPage.marketDescription).toBeVisible() - - // 6. Verify chart renders - await expect(detailsPage.priceChart).toBeVisible() - - // Verify market name matches - const detailsTitle = await detailsPage.marketName.textContent() - expect(detailsTitle?.toLowerCase()).toContain( - firstMarketTitle?.toLowerCase().substring(0, 20) || '' - ) - - // Take screenshot of market details - await page.screenshot({ path: 'artifacts/market-details.png' }) - }) - - test('search with no results shows empty state', async ({ page }) => { - const marketsPage = new MarketsPage(page) - await marketsPage.goto() - - // Search for non-existent market +Apply the `e2e-testing` skill. +- Generate or update Playwright coverage for the requested user flow. +- Run only the relevant tests unless the user explicitly asked for the entire suite. +- Capture the usual artifacts and report failures, flake risk, and next fixes without duplicating the full skill body here. await marketsPage.searchMarkets('xyznonexistentmarket123456') // Verify empty state diff --git a/commands/eval.md b/commands/eval.md index 7ded11dd..09f96675 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -1,120 +1,23 @@ -# Eval Command +--- +description: Legacy slash-entry shim for the eval-harness skill. Prefer the skill directly. +--- -Manage eval-driven development workflow. +# Eval Command (Legacy Shim) -## Usage +Use this only if you still invoke `/eval`. The maintained workflow lives in `skills/eval-harness/SKILL.md`. -`/eval [define|check|report|list] [feature-name]` +## Canonical Surface -## Define Evals - -`/eval define feature-name` - -Create a new eval definition: - -1. 
Create `.claude/evals/feature-name.md` with template: - -```markdown -## EVAL: feature-name -Created: $(date) - -### Capability Evals -- [ ] [Description of capability 1] -- [ ] [Description of capability 2] - -### Regression Evals -- [ ] [Existing behavior 1 still works] -- [ ] [Existing behavior 2 still works] - -### Success Criteria -- pass@3 > 90% for capability evals -- pass^3 = 100% for regression evals -``` - -2. Prompt user to fill in specific criteria - -## Check Evals - -`/eval check feature-name` - -Run evals for a feature: - -1. Read eval definition from `.claude/evals/feature-name.md` -2. For each capability eval: - - Attempt to verify criterion - - Record PASS/FAIL - - Log attempt in `.claude/evals/feature-name.log` -3. For each regression eval: - - Run relevant tests - - Compare against baseline - - Record PASS/FAIL -4. Report current status: - -``` -EVAL CHECK: feature-name -======================== -Capability: X/Y passing -Regression: X/Y passing -Status: IN PROGRESS / READY -``` - -## Report Evals - -`/eval report feature-name` - -Generate comprehensive eval report: - -``` -EVAL REPORT: feature-name -========================= -Generated: $(date) - -CAPABILITY EVALS ----------------- -[eval-1]: PASS (pass@1) -[eval-2]: PASS (pass@2) - required retry -[eval-3]: FAIL - see notes - -REGRESSION EVALS ----------------- -[test-1]: PASS -[test-2]: PASS -[test-3]: PASS - -METRICS -------- -Capability pass@1: 67% -Capability pass@3: 100% -Regression pass^3: 100% - -NOTES ------ -[Any issues, edge cases, or observations] - -RECOMMENDATION --------------- -[SHIP / NEEDS WORK / BLOCKED] -``` - -## List Evals - -`/eval list` - -Show all eval definitions: - -``` -EVAL DEFINITIONS -================ -feature-auth [3/5 passing] IN PROGRESS -feature-search [5/5 passing] READY -feature-export [0/4 passing] NOT STARTED -``` +- Prefer the `eval-harness` skill directly. +- Keep this file only as a compatibility entry point. 
## Arguments -$ARGUMENTS: -- `define ` - Create new eval definition -- `check ` - Run and check evals -- `report ` - Generate full report -- `list` - Show all evals -- `clean` - Remove old eval logs (keeps last 10 runs) +`$ARGUMENTS` + +## Delegation + +Apply the `eval-harness` skill. +- Support the same user intents as before: define, check, report, list, and cleanup. +- Keep evals capability-first, regression-backed, and evidence-based. +- Use the skill as the canonical evaluator instead of maintaining a separate command-specific playbook. diff --git a/commands/orchestrate.md b/commands/orchestrate.md index 3b36da93..69f94659 100644 --- a/commands/orchestrate.md +++ b/commands/orchestrate.md @@ -1,123 +1,27 @@ --- -description: Sequential and tmux/worktree orchestration guidance for multi-agent workflows. +description: Legacy slash-entry shim for dmux-workflows and autonomous-agent-harness. Prefer the skills directly. --- -# Orchestrate Command +# Orchestrate Command (Legacy Shim) -Sequential agent workflow for complex tasks. +Use this only if you still invoke `/orchestrate`. The maintained orchestration guidance lives in `skills/dmux-workflows/SKILL.md` and `skills/autonomous-agent-harness/SKILL.md`. -## Usage +## Canonical Surface -`/orchestrate [workflow-type] [task-description]` +- Prefer `dmux-workflows` for parallel panes, worktrees, and multi-agent splits. +- Prefer `autonomous-agent-harness` for longer-running loops, governance, scheduling, and control-plane style execution. +- Keep this file only as a compatibility entry point. 
-## Workflow Types +## Arguments -### feature -Full feature implementation workflow: -``` -planner -> tdd-guide -> code-reviewer -> security-reviewer -``` +`$ARGUMENTS` -### bugfix -Bug investigation and fix workflow: -``` -planner -> tdd-guide -> code-reviewer -``` +## Delegation -### refactor -Safe refactoring workflow: -``` -architect -> code-reviewer -> tdd-guide -``` - -### security -Security-focused review: -``` -security-reviewer -> code-reviewer -> architect -``` - -## Execution Pattern - -For each agent in the workflow: - -1. **Invoke agent** with context from previous agent -2. **Collect output** as structured handoff document -3. **Pass to next agent** in chain -4. **Aggregate results** into final report - -## Handoff Document Format - -Between agents, create handoff document: - -```markdown -## HANDOFF: [previous-agent] -> [next-agent] - -### Context -[Summary of what was done] - -### Findings -[Key discoveries or decisions] - -### Files Modified -[List of files touched] - -### Open Questions -[Unresolved items for next agent] - -### Recommendations -[Suggested next steps] -``` - -## Example: Feature Workflow - -``` -/orchestrate feature "Add user authentication" -``` - -Executes: - -1. **Planner Agent** - - Analyzes requirements - - Creates implementation plan - - Identifies dependencies - - Output: `HANDOFF: planner -> tdd-guide` - -2. **TDD Guide Agent** - - Reads planner handoff - - Writes tests first - - Implements to pass tests - - Output: `HANDOFF: tdd-guide -> code-reviewer` - -3. **Code Reviewer Agent** - - Reviews implementation - - Checks for issues - - Suggests improvements - - Output: `HANDOFF: code-reviewer -> security-reviewer` - -4. 
**Security Reviewer Agent** - - Security audit - - Vulnerability check - - Final approval - - Output: Final Report - -## Final Report Format - -``` -ORCHESTRATION REPORT -==================== -Workflow: feature -Task: Add user authentication -Agents: planner -> tdd-guide -> code-reviewer -> security-reviewer - -SUMMARY -------- -[One paragraph summary] - -AGENT OUTPUTS -------------- -Planner: [summary] -TDD Guide: [summary] -Code Reviewer: [summary] +Apply the orchestration skills instead of maintaining a second workflow spec here. +- Start with `dmux-workflows` for split/parallel execution. +- Pull in `autonomous-agent-harness` when the user is really asking for persistent loops, governance, or operator-layer behavior. +- Keep handoffs structured, but let the skills define the maintained sequencing rules. Security Reviewer: [summary] FILES CHANGED diff --git a/commands/prompt-optimize.md b/commands/prompt-optimize.md index b067fe45..0eca88fb 100644 --- a/commands/prompt-optimize.md +++ b/commands/prompt-optimize.md @@ -1,38 +1,23 @@ --- -description: Analyze a draft prompt and output an optimized, ECC-enriched version ready to paste and run. Does NOT execute the task — outputs advisory analysis only. +description: Legacy slash-entry shim for the prompt-optimizer skill. Prefer the skill directly. --- -# /prompt-optimize +# Prompt Optimize (Legacy Shim) -Analyze and optimize the following prompt for maximum ECC leverage. +Use this only if you still invoke `/prompt-optimize`. The maintained workflow lives in `skills/prompt-optimizer/SKILL.md`. -## Your Task +## Canonical Surface -Apply the **prompt-optimizer** skill to the user's input below. Follow the 6-phase analysis pipeline: +- Prefer the `prompt-optimizer` skill directly. +- Keep this file only as a compatibility entry point. -0. **Project Detection** — Read CLAUDE.md, detect tech stack from project files (package.json, go.mod, pyproject.toml, etc.) -1. 
**Intent Detection** — Classify the task type (new feature, bug fix, refactor, research, testing, review, documentation, infrastructure, design) -2. **Scope Assessment** — Evaluate complexity (TRIVIAL / LOW / MEDIUM / HIGH / EPIC), using codebase size as signal if detected -3. **ECC Component Matching** — Map to specific skills, commands, agents, and model tier -4. **Missing Context Detection** — Identify gaps. If 3+ critical items missing, ask the user to clarify before generating -5. **Workflow & Model** — Determine lifecycle position, recommend model tier, and split into multiple prompts if HIGH/EPIC +## Arguments -## Output Requirements +`$ARGUMENTS` -- Present diagnosis, recommended ECC components, and an optimized prompt using the Output Format from the prompt-optimizer skill -- Provide both **Full Version** (detailed) and **Quick Version** (compact, varied by intent type) -- Respond in the same language as the user's input -- The optimized prompt must be complete and ready to copy-paste into a new session -- End with a footer offering adjustment or a clear next step for starting a separate execution request +## Delegation -## CRITICAL - -Do NOT execute the user's task. Output ONLY the analysis and optimized prompt. -If the user asks for direct execution, explain that `/prompt-optimize` only produces advisory output and tell them to start a normal task request instead. - -Note: `blueprint` is a **skill**, not a slash command. Write "Use the blueprint skill" -instead of presenting it as a `/...` command. - -## User Input - -$ARGUMENTS +Apply the `prompt-optimizer` skill. +- Keep it advisory-only: optimize the prompt, do not execute the task. +- Return the recommended ECC components plus a ready-to-run prompt. +- If the user actually wants direct execution, say so and tell them to make a normal task request instead of staying inside the shim. 
diff --git a/commands/rules-distill.md b/commands/rules-distill.md index 93886a06..0dd3ac9d 100644 --- a/commands/rules-distill.md +++ b/commands/rules-distill.md @@ -1,11 +1,20 @@ --- -description: "Scan skills to extract cross-cutting principles and distill them into rules" +description: Legacy slash-entry shim for the rules-distill skill. Prefer the skill directly. --- -# /rules-distill — Distill Principles from Skills into Rules +# Rules Distill (Legacy Shim) -Scan installed skills, extract cross-cutting principles, and distill them into rules. +Use this only if you still invoke `/rules-distill`. The maintained workflow lives in `skills/rules-distill/SKILL.md`. -## Process +## Canonical Surface -Follow the full workflow defined in the `rules-distill` skill. +- Prefer the `rules-distill` skill directly. +- Keep this file only as a compatibility entry point. + +## Arguments + +`$ARGUMENTS` + +## Delegation + +Apply the `rules-distill` skill and follow its inventory, cross-read, and verdict workflow instead of duplicating that logic here. diff --git a/commands/tdd.md b/commands/tdd.md index 4f73be34..ca7d57d3 100644 --- a/commands/tdd.md +++ b/commands/tdd.md @@ -1,123 +1,26 @@ --- -description: Enforce test-driven development workflow. Scaffold interfaces, generate tests FIRST, then implement minimal code to pass. Ensure 80%+ coverage. +description: Legacy slash-entry shim for the tdd-workflow skill. Prefer the skill directly. --- -# TDD Command +# TDD Command (Legacy Shim) -This command invokes the **tdd-guide** agent to enforce test-driven development methodology. +Use this only if you still invoke `/tdd`. The maintained workflow lives in `skills/tdd-workflow/SKILL.md`. -## What This Command Does +## Canonical Surface -1. **Scaffold Interfaces** - Define types/interfaces first -2. **Generate Tests First** - Write failing tests (RED) -3. **Implement Minimal Code** - Write just enough to pass (GREEN) -4. 
**Refactor** - Improve code while keeping tests green (REFACTOR) -5. **Verify Coverage** - Ensure 80%+ test coverage +- Prefer the `tdd-workflow` skill directly. +- Keep this file only as a compatibility entry point. -## When to Use +## Arguments -Use `/tdd` when: -- Implementing new features -- Adding new functions/components -- Fixing bugs (write test that reproduces bug first) -- Refactoring existing code -- Building critical business logic +`$ARGUMENTS` -## How It Works +## Delegation -The tdd-guide agent will: - -1. **Define interfaces** for inputs/outputs -2. **Write tests that will FAIL** (because code doesn't exist yet) -3. **Run tests** and verify they fail for the right reason -4. **Write minimal implementation** to make tests pass -5. **Run tests** and verify they pass -6. **Refactor** code while keeping tests green -7. **Check coverage** and add more tests if below 80% - -## TDD Cycle - -``` -RED → GREEN → REFACTOR → REPEAT - -RED: Write a failing test -GREEN: Write minimal code to pass -REFACTOR: Improve code, keep tests passing -REPEAT: Next feature/scenario -``` - -## Example Usage - -``` -User: /tdd I need a function to calculate market liquidity score - -Agent (tdd-guide): -# TDD Session: Market Liquidity Score Calculator - -## Step 1: Define Interface (SCAFFOLD) - -```typescript -// lib/liquidity.ts -export interface MarketData { - totalVolume: number - bidAskSpread: number - activeTraders: number - lastTradeTime: Date -} - -export function calculateLiquidityScore(market: MarketData): number { - // TODO: Implementation - throw new Error('Not implemented') -} -``` - -## Step 2: Write Failing Test (RED) - -```typescript -// lib/liquidity.test.ts -import { calculateLiquidityScore } from './liquidity' - -describe('calculateLiquidityScore', () => { - it('should return high score for liquid market', () => { - const market = { - totalVolume: 100000, - bidAskSpread: 0.01, - activeTraders: 500, - lastTradeTime: new Date() - } - - const score = 
calculateLiquidityScore(market)
-
-    expect(score).toBeGreaterThan(80)
-    expect(score).toBeLessThanOrEqual(100)
-  })
-
-  it('should return low score for illiquid market', () => {
-    const market = {
-      totalVolume: 100,
-      bidAskSpread: 0.5,
-      activeTraders: 2,
-      lastTradeTime: new Date(Date.now() - 86400000) // 1 day ago
-    }
-
-    const score = calculateLiquidityScore(market)
-
-    expect(score).toBeLessThan(30)
-    expect(score).toBeGreaterThanOrEqual(0)
-  })
-
-  it('should handle edge case: zero volume', () => {
-    const market = {
-      totalVolume: 0,
-      bidAskSpread: 0,
-      activeTraders: 0,
-      lastTradeTime: new Date()
-    }
-
-    const score = calculateLiquidityScore(market)
-
-    expect(score).toBe(0)
-  })
+Apply the `tdd-workflow` skill.
+- Stay strict on RED -> GREEN -> REFACTOR.
+- Keep tests first, coverage explicit, and checkpoint evidence clear.
+- Use the skill as the maintained TDD body instead of duplicating the playbook here.
-})
-```
diff --git a/commands/verify.md b/commands/verify.md
index 5f628b10..8a668d27 100644
--- a/commands/verify.md
+++ b/commands/verify.md
@@ -1,59 +1,23 @@
-# Verification Command
+---
+description: Legacy slash-entry shim for the verification-loop skill. Prefer the skill directly.
+---
 
-Run comprehensive verification on current codebase state.
+# Verification Command (Legacy Shim)
 
-## Instructions
+Use this only if you still invoke `/verify`. The maintained workflow lives in `skills/verification-loop/SKILL.md`.
 
-Execute verification in this exact order:
+## Canonical Surface
 
-1. **Build Check**
-   - Run the build command for this project
-   - If it fails, report errors and STOP
-
-2. **Type Check**
-   - Run TypeScript/type checker
-   - Report all errors with file:line
-
-3. **Lint Check**
-   - Run linter
-   - Report warnings and errors
-
-4. **Test Suite**
-   - Run all tests
-   - Report pass/fail count
-   - Report coverage percentage
-
-5. **Console.log Audit**
-   - Search for console.log in source files
-   - Report locations
-
-6.
**Git Status** - - Show uncommitted changes - - Show files modified since last commit - -## Output - -Produce a concise verification report: - -``` -VERIFICATION: [PASS/FAIL] - -Build: [OK/FAIL] -Types: [OK/X errors] -Lint: [OK/X issues] -Tests: [X/Y passed, Z% coverage] -Secrets: [OK/X found] -Logs: [OK/X console.logs] - -Ready for PR: [YES/NO] -``` - -If any critical issues, list them with fix suggestions. +- Prefer the `verification-loop` skill directly. +- Keep this file only as a compatibility entry point. ## Arguments -$ARGUMENTS can be: -- `quick` - Only build + types -- `full` - All checks (default) -- `pre-commit` - Checks relevant for commits -- `pre-pr` - Full checks plus security scan +`$ARGUMENTS` + +## Delegation + +Apply the `verification-loop` skill. +- Choose the right verification depth for the user's requested mode. +- Run build, types, lint, tests, security/log checks, and diff review in the right order for the current repo. +- Report only the verdicts and blockers instead of maintaining a second verification checklist here. 
diff --git a/docs/zh-CN/AGENTS.md b/docs/zh-CN/AGENTS.md index cdc2c3a1..3c4dbe25 100644 --- a/docs/zh-CN/AGENTS.md +++ b/docs/zh-CN/AGENTS.md @@ -1,6 +1,6 @@ # Everything Claude Code (ECC) — 智能体指令 -这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、142 项技能、68 条命令以及自动化钩子工作流,用于软件开发。 +这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、143 项技能、68 条命令以及自动化钩子工作流,用于软件开发。 **版本:** 1.9.0 @@ -147,7 +147,7 @@ ``` agents/ — 36 个专业子代理 -skills/ — 142 个工作流技能和领域知识 +skills/ — 143 个工作流技能和领域知识 commands/ — 68 个斜杠命令 hooks/ — 基于触发的自动化 rules/ — 始终遵循的指导方针(通用 + 每种语言) diff --git a/docs/zh-CN/README.md b/docs/zh-CN/README.md index 019352cc..c755c0e7 100644 --- a/docs/zh-CN/README.md +++ b/docs/zh-CN/README.md @@ -209,7 +209,7 @@ npx ecc-install typescript /plugin list everything-claude-code@everything-claude-code ``` -**搞定!** 你现在可以使用 36 个智能体、142 项技能和 68 个命令了。 +**搞定!** 你现在可以使用 36 个智能体、143 项技能和 68 个命令了。 *** @@ -1096,7 +1096,7 @@ opencode |---------|-------------|----------|--------| | 智能体 | PASS: 36 个 | PASS: 12 个 | **Claude Code 领先** | | 命令 | PASS: 68 个 | PASS: 31 个 | **Claude Code 领先** | -| 技能 | PASS: 142 项 | PASS: 37 项 | **Claude Code 领先** | +| 技能 | PASS: 143 项 | PASS: 37 项 | **Claude Code 领先** | | 钩子 | PASS: 8 种事件类型 | PASS: 11 种事件 | **OpenCode 更多!** | | 规则 | PASS: 29 条 | PASS: 13 条指令 | **Claude Code 领先** | | MCP 服务器 | PASS: 14 个 | PASS: 完整 | **完全对等** | @@ -1208,7 +1208,7 @@ ECC 是**第一个最大化利用每个主要 AI 编码工具的插件**。以 |---------|------------|------------|-----------|----------| | **智能体** | 36 | 共享 (AGENTS.md) | 共享 (AGENTS.md) | 12 | | **命令** | 68 | 共享 | 基于指令 | 31 | -| **技能** | 142 | 共享 | 10 (原生格式) | 37 | +| **技能** | 143 | 共享 | 10 (原生格式) | 37 | | **钩子事件** | 8 种类型 | 15 种类型 | 暂无 | 11 种类型 | | **钩子脚本** | 20+ 个脚本 | 16 个脚本 (DRY 适配器) | N/A | 插件钩子 | | **规则** | 34 (通用 + 语言) | 34 (YAML 前页) | 基于指令 | 13 条指令 | diff --git a/manifests/install-modules.json b/manifests/install-modules.json index b2fd0554..f305c3ed 100644 --- a/manifests/install-modules.json +++ b/manifests/install-modules.json @@ -333,6 +333,7 @@ "paths": [ "skills/fal-ai-media", 
"skills/remotion-video-creation", + "skills/ui-demo", "skills/video-editing", "skills/videodb" ], diff --git a/skills/ui-demo/SKILL.md b/skills/ui-demo/SKILL.md new file mode 100644 index 00000000..5780d6d3 --- /dev/null +++ b/skills/ui-demo/SKILL.md @@ -0,0 +1,465 @@ +--- +name: ui-demo +description: Record polished UI demo videos using Playwright. Use when the user asks to create a demo, walkthrough, screen recording, or tutorial video of a web application. Produces WebM videos with visible cursor, natural pacing, and professional feel. +origin: ECC +--- + +# UI Demo Video Recorder + +Record polished demo videos of web applications using Playwright's video recording with an injected cursor overlay, natural pacing, and storytelling flow. + +## When to Use + +- User asks for a "demo video", "screen recording", "walkthrough", or "tutorial" +- User wants to showcase a feature or workflow visually +- User needs a video for documentation, onboarding, or stakeholder presentation + +## Three-Phase Process + +Every demo goes through three phases: **Discover -> Rehearse -> Record**. Never skip straight to recording. + +--- + +## Phase 1: Discover + +Before writing any script, explore the target pages to understand what is actually there. + +### Why + +You cannot script what you have not seen. Fields may be `` not `