refactor: collapse legacy command bodies into skills

Affaan Mustafa
2026-04-01 02:25:42 -07:00
parent 6833454778
commit 975100db55
19 changed files with 630 additions and 702 deletions

View File

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 36 specialized agents, 142 skills, 68 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 36 specialized agents, 143 skills, 68 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -146,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
```
agents/ — 36 specialized subagents
skills/ — 142 workflow skills and domain knowledge
skills/ — 143 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)

View File

@@ -225,7 +225,7 @@ For manual install instructions see the README in the `rules/` folder. When copy
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 36 agents, 142 skills, and 68 legacy command shims.
**That's it!** You now have access to 36 agents, 143 skills, and 68 legacy command shims.
### Multi-model commands require additional setup
@@ -1120,7 +1120,7 @@ The configuration is automatically detected from `.opencode/opencode.json`.
|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 142 skills | PASS: 37 skills | **Claude Code leads** |
| Skills | PASS: 143 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1229,7 +1229,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
|---------|------------|------------|-----------|----------|
| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 142 | Shared | 10 (native format) | 37 |
| **Skills** | 143 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

View File

@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**Done!** You can now use 36 agents, 142 skills, and 68 commands.
**Done!** You can now use 36 agents, 143 skills, and 68 commands.
### multi-* commands require additional setup

View File

@@ -57,9 +57,9 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- `#1043` C# reviewer and .NET skills
- Direct-port candidates landed after audit:
- `#1078` hook-id dedupe for managed Claude hook reinstalls
- `#844` ui-demo skill
- Port or rebuild inside ECC after full audit:
- `#894` Jira integration
- `#844` ui-demo skill
- `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces
## Interfaces
@@ -93,3 +93,5 @@ Keep this file detailed for only the current sprint, blockers, and next actions.
- 2026-04-01: Core English repo surfaces were shifted to a skills-first posture. README, AGENTS, plugin metadata, and contributor instructions now treat `skills/` as canonical and `commands/` as legacy slash-entry compatibility during migration.
- 2026-04-01: Follow-up bundle cleanup closed `#1080` and `#1079`, which were generated `.claude/` bundle PRs duplicating command-first scaffolding instead of shipping canonical ECC source changes.
- 2026-04-01: Ported the useful core of `#1078` directly into `main`, but tightened the implementation so legacy no-id hook installs deduplicate cleanly on the first reinstall instead of the second. Added stable hook ids to `hooks/hooks.json`, semantic fallback aliases in `mergeHookEntries()`, and a regression test covering upgrade from pre-id settings.
- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now hold the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification.
- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale.

View File

@@ -1,51 +1,23 @@
---
description: Start NanoClaw v2 — ECC's persistent, zero-dependency REPL with model routing, skill hot-load, branching, compaction, export, and metrics.
description: Legacy slash-entry shim for the nanoclaw-repl skill. Prefer the skill directly.
---
# Claw Command
# Claw Command (Legacy Shim)
Start an interactive AI agent session with persistent markdown history and operational controls.
Use this only if you still reach for `/claw` from muscle memory. The maintained implementation lives in `skills/nanoclaw-repl/SKILL.md`.
## Usage
## Canonical Surface
```bash
node scripts/claw.js
```
- Prefer the `nanoclaw-repl` skill directly.
- Keep this file only as a compatibility entry point while command-first usage is retired.
Or via npm:
## Arguments
```bash
npm run claw
```
`$ARGUMENTS`
## Environment Variables
## Delegation
| Variable | Default | Description |
|----------|---------|-------------|
| `CLAW_SESSION` | `default` | Session name (alphanumeric + hyphens) |
| `CLAW_SKILLS` | *(empty)* | Comma-separated skills loaded at startup |
| `CLAW_MODEL` | `sonnet` | Default model for the session |
## REPL Commands
```text
/help Show help
/clear Clear current session history
/history Print full conversation history
/sessions List saved sessions
/model [name] Show/set model
/load <skill-name> Hot-load a skill into context
/branch <session-name> Branch current session
/search <query> Search all sessions for a query
/compact Compact old turns, keep recent context
/export <md|json|txt> [path] Export session
/metrics Show session metrics
exit Quit
```
## Notes
- NanoClaw remains zero-dependency.
- Sessions are stored at `~/.claude/claw/<session>.md`.
- Compaction keeps the most recent turns and writes a compaction header.
- Export supports markdown, JSON turns, and plain text.
Apply the `nanoclaw-repl` skill and keep the response focused on operating or extending `scripts/claw.js`.
- If the user wants to run it, use `node scripts/claw.js` or `npm run claw`.
- If the user wants to extend it, preserve the zero-dependency and markdown-backed session model.
- If the request is really about long-running orchestration rather than NanoClaw itself, redirect to `dmux-workflows` or `autonomous-agent-harness`.
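The markdown-backed, zero-dependency session model can be sketched as follows. This is a minimal illustration only: the helper names and file layout are assumptions, not the actual internals of `scripts/claw.js`; only the `~/.claude/claw/<session>.md` path and the alphanumeric-plus-hyphens session rule come from the command body.

```typescript
import * as fs from "node:fs"
import * as os from "node:os"
import * as path from "node:path"

// Hypothetical sketch: each turn appends a heading plus body to
// ~/.claude/claw/<session>.md, using only Node built-ins (zero-dependency).
function sessionPath(session: string): string {
  // Session names are alphanumeric + hyphens, mirroring CLAW_SESSION.
  if (!/^[a-zA-Z0-9-]+$/.test(session)) throw new Error("invalid session name")
  return path.join(os.homedir(), ".claude", "claw", `${session}.md`)
}

function appendTurn(session: string, role: "user" | "assistant", text: string): void {
  const file = sessionPath(session)
  fs.mkdirSync(path.dirname(file), { recursive: true })
  fs.appendFileSync(file, `\n## ${role} (${new Date().toISOString()})\n\n${text}\n`)
}
```

Compaction and export would then operate on the same markdown file, keeping the most recent turns and writing a compaction header above them.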

View File

@@ -1,29 +1,23 @@
---
description: Analyze context window usage across agents, skills, MCP servers, and rules to find optimization opportunities. Helps reduce token overhead and avoid performance warnings.
description: Legacy slash-entry shim for the context-budget skill. Prefer the skill directly.
---
# Context Budget Optimizer
# Context Budget Optimizer (Legacy Shim)
Analyze your Claude Code setup's context window consumption and produce actionable recommendations to reduce token overhead.
Use this only if you still invoke `/context-budget`. The maintained workflow lives in `skills/context-budget/SKILL.md`.
## Usage
## Canonical Surface
```
/context-budget [--verbose]
```
- Prefer the `context-budget` skill directly.
- Keep this file only as a compatibility entry point.
- Default: summary with top recommendations
- `--verbose`: full breakdown per component
## Arguments
$ARGUMENTS
## What to Do
## Delegation
Run the **context-budget** skill (`skills/context-budget/SKILL.md`) with the following inputs:
1. Pass `--verbose` flag if present in `$ARGUMENTS`
2. Assume a 200K context window (Claude Sonnet default) unless the user specifies otherwise
3. Follow the skill's four phases: Inventory → Classify → Detect Issues → Report
4. Output the formatted Context Budget Report to the user
The skill handles all scanning logic, token estimation, issue detection, and report formatting.
Apply the `context-budget` skill.
- Pass through `--verbose` if the user supplied it.
- Assume a 200K context window unless the user specified otherwise.
- Return the skill's inventory, issue detection, and prioritized savings report without re-implementing the scan here.
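The overhead report the shim asks for can be approximated like this. The 4-characters-per-token heuristic and the report shape are illustrative assumptions; the skill's actual scanning and estimation logic lives in `skills/context-budget/SKILL.md`. Only the 200K default window comes from the text above.

```typescript
// Rough token estimate for a context-budget style summary.
// Assumption: ~4 characters per token, 200K-token window (Claude Sonnet default).
const CONTEXT_WINDOW = 200_000

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

function budgetReport(components: Record<string, string>): string {
  const rows = Object.entries(components)
    .map(([name, body]) => ({ name, tokens: estimateTokens(body) }))
    .sort((a, b) => b.tokens - a.tokens)
  const total = rows.reduce((sum, r) => sum + r.tokens, 0)
  const pct = ((100 * total) / CONTEXT_WINDOW).toFixed(1)
  return [
    `Total overhead: ~${total} tokens (${pct}% of ${CONTEXT_WINDOW})`,
    ...rows.map(r => `  ${r.name}: ~${r.tokens} tokens`),
  ].join("\n")
}
```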

View File

@@ -1,92 +1,23 @@
---
description: Orchestrate parallel Claude Code agents via Claude DevFleet — plan projects from natural language, dispatch agents in isolated worktrees, monitor progress, and read structured reports.
description: Legacy slash-entry shim for the claude-devfleet skill. Prefer the skill directly.
---
# DevFleet — Multi-Agent Orchestration
# DevFleet (Legacy Shim)
Orchestrate parallel Claude Code agents via Claude DevFleet. Each agent runs in an isolated git worktree with full tooling.
Use this only if you still call `/devfleet`. The maintained workflow lives in `skills/claude-devfleet/SKILL.md`.
Requires the DevFleet MCP server: `claude mcp add devfleet --transport http http://localhost:18801/mcp`
## Canonical Surface
## Flow
- Prefer the `claude-devfleet` skill directly.
- Keep this file only as a compatibility entry point while command-first usage is retired.
```
User describes project
→ plan_project(prompt) → mission DAG with dependencies
→ Show plan, get approval
→ dispatch_mission(M1) → Agent spawns in worktree
→ M1 completes → auto-merge → M2 auto-dispatches (depends_on M1)
→ M2 completes → auto-merge
→ get_report(M2) → files_changed, what_done, errors, next_steps
→ Report summary to user
```
## Arguments
## Workflow
`$ARGUMENTS`
1. **Plan the project** from the user's description:
## Delegation
```
mcp__devfleet__plan_project(prompt="<user's description>")
```
This returns a project with chained missions. Show the user:
- Project name and ID
- Each mission: title, type, dependencies
- The dependency DAG (which missions block which)
2. **Wait for user approval** before dispatching. Show the plan clearly.
3. **Dispatch the first mission** (the one with empty `depends_on`):
```
mcp__devfleet__dispatch_mission(mission_id="<first_mission_id>")
```
The remaining missions auto-dispatch as their dependencies complete (because `plan_project` creates them with `auto_dispatch=true`). When manually creating missions with `create_mission`, you must explicitly set `auto_dispatch=true` for this behavior.
4. **Monitor progress** — check what's running:
```
mcp__devfleet__get_dashboard()
```
Or check a specific mission:
```
mcp__devfleet__get_mission_status(mission_id="<id>")
```
Prefer polling with `get_mission_status` over `wait_for_mission` for long-running missions, so the user sees progress updates.
5. **Read the report** for each completed mission:
```
mcp__devfleet__get_report(mission_id="<mission_id>")
```
Call this for every mission that reached a terminal state. Reports contain: files_changed, what_done, what_open, what_tested, what_untested, next_steps, errors_encountered.
## All Available Tools
| Tool | Purpose |
|------|---------|
| `plan_project(prompt)` | AI breaks description into chained missions with `auto_dispatch=true` |
| `create_project(name, path?, description?)` | Create a project manually, returns `project_id` |
| `create_mission(project_id, title, prompt, depends_on?, auto_dispatch?)` | Add a mission. `depends_on` is a list of mission ID strings. |
| `dispatch_mission(mission_id, model?, max_turns?)` | Start an agent |
| `cancel_mission(mission_id)` | Stop a running agent |
| `wait_for_mission(mission_id, timeout_seconds?)` | Block until done (prefer polling for long tasks) |
| `get_mission_status(mission_id)` | Check progress without blocking |
| `get_report(mission_id)` | Read structured report |
| `get_dashboard()` | System overview |
| `list_projects()` | Browse projects |
| `list_missions(project_id, status?)` | List missions |
## Guidelines
- Always confirm the plan before dispatching unless the user said "go ahead"
- Include mission titles and IDs when reporting status
- If a mission fails, read its report to understand errors before retrying
- Agent concurrency is configurable (default: 3). Excess missions queue and auto-dispatch as slots free up. Check `get_dashboard()` for slot availability.
- Dependencies form a DAG — never create circular dependencies
- Each agent auto-merges its worktree on completion. If a merge conflict occurs, the changes remain on the worktree branch for manual resolution.
Apply the `claude-devfleet` skill.
- Plan from the user's description, show the DAG, and get approval before dispatch unless the user already said to proceed.
- Prefer polling status over blocking waits for long missions.
- Report mission IDs, files changed, failures, and next steps from structured mission reports.
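The prefer-polling guidance can be sketched as a loop like the one below. `callMcp` is a stand-in for however the harness actually invokes MCP tools, and the status shape is an assumption; only the `mcp__devfleet__get_mission_status` tool name and its `mission_id` argument come from the removed command body.

```typescript
// Illustrative polling loop for a long-running DevFleet mission, so the
// user sees progress updates instead of a blocking wait_for_mission call.
type MissionStatus = { state: "queued" | "running" | "completed" | "failed"; progress?: string }

async function pollMission(
  callMcp: (tool: string, args: object) => Promise<MissionStatus>,
  missionId: string,
  intervalMs = 15_000,
): Promise<MissionStatus> {
  for (;;) {
    const status = await callMcp("mcp__devfleet__get_mission_status", { mission_id: missionId })
    if (status.state === "completed" || status.state === "failed") return status
    console.log(`mission ${missionId}: ${status.state} ${status.progress ?? ""}`)
    await new Promise(res => setTimeout(res, intervalMs))
  }
}
```

On a terminal state, the next step is reading the structured report with `get_report` before retrying or summarizing.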

View File

@@ -1,31 +1,23 @@
---
description: Look up current documentation for a library or topic via Context7.
description: Legacy slash-entry shim for the documentation-lookup skill. Prefer the skill directly.
---
# /docs
# Docs Command (Legacy Shim)
## Purpose
Use this only if you still reach for `/docs`. The maintained workflow lives in `skills/documentation-lookup/SKILL.md`.
Look up up-to-date documentation for a library, framework, or API and return a summarized answer with relevant code snippets. Uses the Context7 MCP (resolve-library-id and query-docs) so answers reflect current docs, not training data.
## Canonical Surface
## Usage
- Prefer the `documentation-lookup` skill directly.
- Keep this file only as a compatibility entry point.
```
/docs [library name] [question]
```
## Arguments
Use quotes for multi-word arguments so they are parsed as a single token. Example: `/docs "Next.js" "How do I configure middleware?"`
`$ARGUMENTS`
If library or question is omitted, prompt the user for:
1. The library or product name (e.g. Next.js, Prisma, Supabase).
2. The specific question or task (e.g. "How do I set up middleware?", "Auth methods").
## Delegation
## Workflow
1. **Resolve library ID** — Call the Context7 tool `resolve-library-id` with the library name and the user's question to get a Context7-compatible library ID (e.g. `/vercel/next.js`).
2. **Query docs** — Call `query-docs` with that library ID and the user's question.
3. **Summarize** — Return a concise answer and include relevant code examples from the fetched documentation. Mention the library (and version if relevant).
## Output
The user receives a short, accurate answer backed by current docs, plus any code snippets that help. If Context7 is not available, say so and answer from training data with a note that docs may be outdated.
Apply the `documentation-lookup` skill.
- If the library or the question is missing, ask for the missing part.
- Use live documentation through Context7 instead of training data.
- Return only the current answer and the minimum code/example surface needed.
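The two-step Context7 flow can be sketched as below. `callMcp` and the argument/result field names are assumptions for illustration; only the `resolve-library-id` and `query-docs` tool names and the `/vercel/next.js`-style library ID come from the removed command body.

```typescript
// Illustrative Context7 lookup: resolve a library ID first, then query live docs.
async function lookupDocs(
  callMcp: (tool: string, args: object) => Promise<any>,
  library: string,
  question: string,
): Promise<string> {
  // 1. Resolve a Context7-compatible library ID (e.g. /vercel/next.js).
  const { libraryId } = await callMcp("resolve-library-id", { libraryName: library, query: question })
  // 2. Query current documentation with that ID, not training data.
  const { answer } = await callMcp("query-docs", { libraryId, query: question })
  return answer
}
```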

View File

@@ -1,123 +1,26 @@
---
description: Generate and run end-to-end tests with Playwright. Creates test journeys, runs tests, captures screenshots/videos/traces, and uploads artifacts.
description: Legacy slash-entry shim for the e2e-testing skill. Prefer the skill directly.
---
# E2E Command
# E2E Command (Legacy Shim)
This command invokes the **e2e-runner** agent to generate, maintain, and execute end-to-end tests using Playwright.
Use this only if you still invoke `/e2e`. The maintained workflow lives in `skills/e2e-testing/SKILL.md`.
## What This Command Does
## Canonical Surface
1. **Generate Test Journeys** - Create Playwright tests for user flows
2. **Run E2E Tests** - Execute tests across browsers
3. **Capture Artifacts** - Screenshots, videos, traces on failures
4. **Upload Results** - HTML reports and JUnit XML
5. **Identify Flaky Tests** - Quarantine unstable tests
- Prefer the `e2e-testing` skill directly.
- Keep this file only as a compatibility entry point.
## When to Use
## Arguments
Use `/e2e` when:
- Testing critical user journeys (login, trading, payments)
- Verifying multi-step flows work end-to-end
- Testing UI interactions and navigation
- Validating integration between frontend and backend
- Preparing for production deployment
`$ARGUMENTS`
## How It Works
## Delegation
The e2e-runner agent will:
1. **Analyze user flow** and identify test scenarios
2. **Generate Playwright test** using Page Object Model pattern
3. **Run tests** across multiple browsers (Chrome, Firefox, Safari)
4. **Capture failures** with screenshots, videos, and traces
5. **Generate report** with results and artifacts
6. **Identify flaky tests** and recommend fixes
## Example Usage
```
User: /e2e Test the market search and view flow
Agent (e2e-runner):
# E2E Test Generation: Market Search and View Flow
## Test Scenario Identified
**User Journey:** Market Search → View Results → Click Market → View Details
**Test Coverage:**
1. Navigate to markets page
2. Perform semantic search
3. Verify search results
4. Click on first result
5. Verify market details page loads
6. Verify chart renders
## Generated Test Code
```typescript
// tests/e2e/markets/search-and-view.spec.ts
import { test, expect } from '@playwright/test'
import { MarketsPage } from '../../pages/MarketsPage'
import { MarketDetailsPage } from '../../pages/MarketDetailsPage'
test.describe('Market Search and View Flow', () => {
test('user can search markets and view details', async ({ page }) => {
// 1. Navigate to markets page
const marketsPage = new MarketsPage(page)
await marketsPage.goto()
// Verify page loaded
await expect(page).toHaveTitle(/Markets/)
await expect(page.locator('h1')).toContainText('Markets')
// 2. Perform semantic search
await marketsPage.searchMarkets('election')
// Wait for API response
await page.waitForResponse(resp =>
resp.url().includes('/api/markets/search') && resp.status() === 200
)
// 3. Verify search results
const marketCards = marketsPage.marketCards
await expect(marketCards.first()).toBeVisible()
const resultCount = await marketCards.count()
expect(resultCount).toBeGreaterThan(0)
// Take screenshot of search results
await page.screenshot({ path: 'artifacts/search-results.png' })
// 4. Click on first result
const firstMarketTitle = await marketCards.first().textContent()
await marketCards.first().click()
// 5. Verify market details page loads
await expect(page).toHaveURL(/\/markets\/[a-z0-9-]+/)
const detailsPage = new MarketDetailsPage(page)
await expect(detailsPage.marketName).toBeVisible()
await expect(detailsPage.marketDescription).toBeVisible()
// 6. Verify chart renders
await expect(detailsPage.priceChart).toBeVisible()
// Verify market name matches
const detailsTitle = await detailsPage.marketName.textContent()
expect(detailsTitle?.toLowerCase()).toContain(
firstMarketTitle?.toLowerCase().substring(0, 20) || ''
)
// Take screenshot of market details
await page.screenshot({ path: 'artifacts/market-details.png' })
})
test('search with no results shows empty state', async ({ page }) => {
const marketsPage = new MarketsPage(page)
await marketsPage.goto()
// Search for non-existent market
Apply the `e2e-testing` skill.
- Generate or update Playwright coverage for the requested user flow.
- Run only the relevant tests unless the user explicitly asked for the entire suite.
- Capture the usual artifacts and report failures, flake risk, and next fixes without duplicating the full skill body here.
await marketsPage.searchMarkets('xyznonexistentmarket123456')
// Verify empty state

View File

@@ -1,120 +1,23 @@
# Eval Command
---
description: Legacy slash-entry shim for the eval-harness skill. Prefer the skill directly.
---
Manage eval-driven development workflow.
# Eval Command (Legacy Shim)
## Usage
Use this only if you still invoke `/eval`. The maintained workflow lives in `skills/eval-harness/SKILL.md`.
`/eval [define|check|report|list] [feature-name]`
## Canonical Surface
## Define Evals
`/eval define feature-name`
Create a new eval definition:
1. Create `.claude/evals/feature-name.md` with template:
```markdown
## EVAL: feature-name
Created: $(date)
### Capability Evals
- [ ] [Description of capability 1]
- [ ] [Description of capability 2]
### Regression Evals
- [ ] [Existing behavior 1 still works]
- [ ] [Existing behavior 2 still works]
### Success Criteria
- pass@3 > 90% for capability evals
- pass^3 = 100% for regression evals
```
2. Prompt user to fill in specific criteria
## Check Evals
`/eval check feature-name`
Run evals for a feature:
1. Read eval definition from `.claude/evals/feature-name.md`
2. For each capability eval:
- Attempt to verify criterion
- Record PASS/FAIL
- Log attempt in `.claude/evals/feature-name.log`
3. For each regression eval:
- Run relevant tests
- Compare against baseline
- Record PASS/FAIL
4. Report current status:
```
EVAL CHECK: feature-name
========================
Capability: X/Y passing
Regression: X/Y passing
Status: IN PROGRESS / READY
```
## Report Evals
`/eval report feature-name`
Generate comprehensive eval report:
```
EVAL REPORT: feature-name
=========================
Generated: $(date)
CAPABILITY EVALS
----------------
[eval-1]: PASS (pass@1)
[eval-2]: PASS (pass@2) - required retry
[eval-3]: FAIL - see notes
REGRESSION EVALS
----------------
[test-1]: PASS
[test-2]: PASS
[test-3]: PASS
METRICS
-------
Capability pass@1: 67%
Capability pass@3: 100%
Regression pass^3: 100%
NOTES
-----
[Any issues, edge cases, or observations]
RECOMMENDATION
--------------
[SHIP / NEEDS WORK / BLOCKED]
```
## List Evals
`/eval list`
Show all eval definitions:
```
EVAL DEFINITIONS
================
feature-auth [3/5 passing] IN PROGRESS
feature-search [5/5 passing] READY
feature-export [0/4 passing] NOT STARTED
```
- Prefer the `eval-harness` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
$ARGUMENTS:
- `define <name>` - Create new eval definition
- `check <name>` - Run and check evals
- `report <name>` - Generate full report
- `list` - Show all evals
- `clean` - Remove old eval logs (keeps last 10 runs)
`$ARGUMENTS`
## Delegation
Apply the `eval-harness` skill.
- Support the same user intents as before: define, check, report, list, and cleanup.
- Keep evals capability-first, regression-backed, and evidence-based.
- Use the skill as the canonical evaluator instead of maintaining a separate command-specific playbook.
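The pass@3 and pass^3 success criteria from the removed eval template can be computed as follows. The data shape (one ordered pass/fail record per eval) is an illustrative assumption; the metric definitions match the criteria named above.

```typescript
// attempts[i] is the ordered pass/fail record for one eval across repeated runs.
function passAtK(attempts: boolean[][], k: number): number {
  // pass@k: fraction of evals that passed at least once within k attempts.
  const passed = attempts.filter(runs => runs.slice(0, k).some(Boolean)).length
  return passed / attempts.length
}

function passCaretK(attempts: boolean[][], k: number): number {
  // pass^k: fraction of evals that passed on every one of k attempts.
  const passed = attempts.filter(runs => runs.slice(0, k).every(Boolean)).length
  return passed / attempts.length
}
```

Under the template's criteria, a feature is ready when capability `passAtK(..., 3) > 0.9` and regression `passCaretK(..., 3) === 1`.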

View File

@@ -1,123 +1,27 @@
---
description: Sequential and tmux/worktree orchestration guidance for multi-agent workflows.
description: Legacy slash-entry shim for dmux-workflows and autonomous-agent-harness. Prefer the skills directly.
---
# Orchestrate Command
# Orchestrate Command (Legacy Shim)
Sequential agent workflow for complex tasks.
Use this only if you still invoke `/orchestrate`. The maintained orchestration guidance lives in `skills/dmux-workflows/SKILL.md` and `skills/autonomous-agent-harness/SKILL.md`.
## Usage
## Canonical Surface
`/orchestrate [workflow-type] [task-description]`
- Prefer `dmux-workflows` for parallel panes, worktrees, and multi-agent splits.
- Prefer `autonomous-agent-harness` for longer-running loops, governance, scheduling, and control-plane style execution.
- Keep this file only as a compatibility entry point.
## Workflow Types
## Arguments
### feature
Full feature implementation workflow:
```
planner -> tdd-guide -> code-reviewer -> security-reviewer
```
`$ARGUMENTS`
### bugfix
Bug investigation and fix workflow:
```
planner -> tdd-guide -> code-reviewer
```
## Delegation
### refactor
Safe refactoring workflow:
```
architect -> code-reviewer -> tdd-guide
```
### security
Security-focused review:
```
security-reviewer -> code-reviewer -> architect
```
## Execution Pattern
For each agent in the workflow:
1. **Invoke agent** with context from previous agent
2. **Collect output** as structured handoff document
3. **Pass to next agent** in chain
4. **Aggregate results** into final report
## Handoff Document Format
Between agents, create handoff document:
```markdown
## HANDOFF: [previous-agent] -> [next-agent]
### Context
[Summary of what was done]
### Findings
[Key discoveries or decisions]
### Files Modified
[List of files touched]
### Open Questions
[Unresolved items for next agent]
### Recommendations
[Suggested next steps]
```
## Example: Feature Workflow
```
/orchestrate feature "Add user authentication"
```
Executes:
1. **Planner Agent**
- Analyzes requirements
- Creates implementation plan
- Identifies dependencies
- Output: `HANDOFF: planner -> tdd-guide`
2. **TDD Guide Agent**
- Reads planner handoff
- Writes tests first
- Implements to pass tests
- Output: `HANDOFF: tdd-guide -> code-reviewer`
3. **Code Reviewer Agent**
- Reviews implementation
- Checks for issues
- Suggests improvements
- Output: `HANDOFF: code-reviewer -> security-reviewer`
4. **Security Reviewer Agent**
- Security audit
- Vulnerability check
- Final approval
- Output: Final Report
## Final Report Format
```
ORCHESTRATION REPORT
====================
Workflow: feature
Task: Add user authentication
Agents: planner -> tdd-guide -> code-reviewer -> security-reviewer
SUMMARY
-------
[One paragraph summary]
AGENT OUTPUTS
-------------
Planner: [summary]
TDD Guide: [summary]
Code Reviewer: [summary]
Apply the orchestration skills instead of maintaining a second workflow spec here.
- Start with `dmux-workflows` for split/parallel execution.
- Pull in `autonomous-agent-harness` when the user is really asking for persistent loops, governance, or operator-layer behavior.
- Keep handoffs structured, but let the skills define the maintained sequencing rules.
Security Reviewer: [summary]
FILES CHANGED
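The structured handoff format from the removed command body can be rendered mechanically. The interface and function names here are illustrative assumptions; the section headings mirror the handoff template above.

```typescript
// Minimal renderer for the HANDOFF document passed between agents.
interface Handoff {
  from: string
  to: string
  context: string
  findings: string[]
  filesModified: string[]
  openQuestions: string[]
  recommendations: string[]
}

function renderHandoff(h: Handoff): string {
  const list = (items: string[]) => items.map(i => `- ${i}`).join("\n")
  return [
    `## HANDOFF: ${h.from} -> ${h.to}`,
    `### Context\n${h.context}`,
    `### Findings\n${list(h.findings)}`,
    `### Files Modified\n${list(h.filesModified)}`,
    `### Open Questions\n${list(h.openQuestions)}`,
    `### Recommendations\n${list(h.recommendations)}`,
  ].join("\n\n")
}
```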

View File

@@ -1,38 +1,23 @@
---
description: Analyze a draft prompt and output an optimized, ECC-enriched version ready to paste and run. Does NOT execute the task — outputs advisory analysis only.
description: Legacy slash-entry shim for the prompt-optimizer skill. Prefer the skill directly.
---
# /prompt-optimize
# Prompt Optimize (Legacy Shim)
Analyze and optimize the following prompt for maximum ECC leverage.
Use this only if you still invoke `/prompt-optimize`. The maintained workflow lives in `skills/prompt-optimizer/SKILL.md`.
## Your Task
## Canonical Surface
Apply the **prompt-optimizer** skill to the user's input below. Follow the 6-phase analysis pipeline:
- Prefer the `prompt-optimizer` skill directly.
- Keep this file only as a compatibility entry point.
0. **Project Detection** — Read CLAUDE.md, detect tech stack from project files (package.json, go.mod, pyproject.toml, etc.)
1. **Intent Detection** — Classify the task type (new feature, bug fix, refactor, research, testing, review, documentation, infrastructure, design)
2. **Scope Assessment** — Evaluate complexity (TRIVIAL / LOW / MEDIUM / HIGH / EPIC), using codebase size as signal if detected
3. **ECC Component Matching** — Map to specific skills, commands, agents, and model tier
4. **Missing Context Detection** — Identify gaps. If 3+ critical items missing, ask the user to clarify before generating
5. **Workflow & Model** — Determine lifecycle position, recommend model tier, and split into multiple prompts if HIGH/EPIC
## Arguments
## Output Requirements
`$ARGUMENTS`
- Present diagnosis, recommended ECC components, and an optimized prompt using the Output Format from the prompt-optimizer skill
- Provide both **Full Version** (detailed) and **Quick Version** (compact, varied by intent type)
- Respond in the same language as the user's input
- The optimized prompt must be complete and ready to copy-paste into a new session
- End with a footer offering adjustment or a clear next step for starting a separate execution request
## Delegation
## CRITICAL
Do NOT execute the user's task. Output ONLY the analysis and optimized prompt.
If the user asks for direct execution, explain that `/prompt-optimize` only produces advisory output and tell them to start a normal task request instead.
Note: `blueprint` is a **skill**, not a slash command. Write "Use the blueprint skill"
instead of presenting it as a `/...` command.
## User Input
$ARGUMENTS
Apply the `prompt-optimizer` skill.
- Keep it advisory-only: optimize the prompt, do not execute the task.
- Return the recommended ECC components plus a ready-to-run prompt.
- If the user actually wants direct execution, say so and tell them to make a normal task request instead of staying inside the shim.

View File

@@ -1,11 +1,20 @@
---
description: "Scan skills to extract cross-cutting principles and distill them into rules"
description: Legacy slash-entry shim for the rules-distill skill. Prefer the skill directly.
---
# /rules-distill — Distill Principles from Skills into Rules
# Rules Distill (Legacy Shim)
Scan installed skills, extract cross-cutting principles, and distill them into rules.
Use this only if you still invoke `/rules-distill`. The maintained workflow lives in `skills/rules-distill/SKILL.md`.
## Process
## Canonical Surface
Follow the full workflow defined in the `rules-distill` skill.
- Prefer the `rules-distill` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
`$ARGUMENTS`
## Delegation
Apply the `rules-distill` skill and follow its inventory, cross-read, and verdict workflow instead of duplicating that logic here.

View File

@@ -1,123 +1,26 @@
---
description: Enforce test-driven development workflow. Scaffold interfaces, generate tests FIRST, then implement minimal code to pass. Ensure 80%+ coverage.
description: Legacy slash-entry shim for the tdd-workflow skill. Prefer the skill directly.
---
# TDD Command
# TDD Command (Legacy Shim)
This command invokes the **tdd-guide** agent to enforce test-driven development methodology.
Use this only if you still invoke `/tdd`. The maintained workflow lives in `skills/tdd-workflow/SKILL.md`.
## What This Command Does
## Canonical Surface
1. **Scaffold Interfaces** - Define types/interfaces first
2. **Generate Tests First** - Write failing tests (RED)
3. **Implement Minimal Code** - Write just enough to pass (GREEN)
4. **Refactor** - Improve code while keeping tests green (REFACTOR)
5. **Verify Coverage** - Ensure 80%+ test coverage
- Prefer the `tdd-workflow` skill directly.
- Keep this file only as a compatibility entry point.
## When to Use
## Arguments
Use `/tdd` when:
- Implementing new features
- Adding new functions/components
- Fixing bugs (write test that reproduces bug first)
- Refactoring existing code
- Building critical business logic
`$ARGUMENTS`
## How It Works
## Delegation
The tdd-guide agent will:
1. **Define interfaces** for inputs/outputs
2. **Write tests that will FAIL** (because code doesn't exist yet)
3. **Run tests** and verify they fail for the right reason
4. **Write minimal implementation** to make tests pass
5. **Run tests** and verify they pass
6. **Refactor** code while keeping tests green
7. **Check coverage** and add more tests if below 80%
## TDD Cycle
```
RED → GREEN → REFACTOR → REPEAT
RED: Write a failing test
GREEN: Write minimal code to pass
REFACTOR: Improve code, keep tests passing
REPEAT: Next feature/scenario
```
## Example Usage
```
User: /tdd I need a function to calculate market liquidity score
Agent (tdd-guide):
# TDD Session: Market Liquidity Score Calculator
## Step 1: Define Interface (SCAFFOLD)
```typescript
// lib/liquidity.ts
export interface MarketData {
totalVolume: number
bidAskSpread: number
activeTraders: number
lastTradeTime: Date
}
export function calculateLiquidityScore(market: MarketData): number {
// TODO: Implementation
throw new Error('Not implemented')
}
```
## Step 2: Write Failing Test (RED)
```typescript
// lib/liquidity.test.ts
import { calculateLiquidityScore } from './liquidity'
describe('calculateLiquidityScore', () => {
it('should return high score for liquid market', () => {
const market = {
totalVolume: 100000,
bidAskSpread: 0.01,
activeTraders: 500,
lastTradeTime: new Date()
}
const score = calculateLiquidityScore(market)
expect(score).toBeGreaterThan(80)
expect(score).toBeLessThanOrEqual(100)
})
it('should return low score for illiquid market', () => {
const market = {
totalVolume: 100,
bidAskSpread: 0.5,
activeTraders: 2,
lastTradeTime: new Date(Date.now() - 86400000) // 1 day ago
}
const score = calculateLiquidityScore(market)
expect(score).toBeLessThan(30)
expect(score).toBeGreaterThanOrEqual(0)
})
it('should handle edge case: zero volume', () => {
const market = {
totalVolume: 0,
bidAskSpread: 0,
activeTraders: 0,
lastTradeTime: new Date()
}
const score = calculateLiquidityScore(market)
expect(score).toBe(0)
})
Apply the `tdd-workflow` skill.
- Stay strict on RED -> GREEN -> REFACTOR.
- Keep tests first, coverage explicit, and checkpoint evidence clear.
- Use the skill as the maintained TDD body instead of duplicating the playbook here.
})
```


@@ -1,59 +1,23 @@
# Verification Command
---
description: Legacy slash-entry shim for the verification-loop skill. Prefer the skill directly.
---
Run comprehensive verification on current codebase state.
# Verification Command (Legacy Shim)
## Instructions
Use this only if you still invoke `/verify`. The maintained workflow lives in `skills/verification-loop/SKILL.md`.
Execute verification in this exact order:
## Canonical Surface
1. **Build Check**
- Run the build command for this project
- If it fails, report errors and STOP
2. **Type Check**
- Run TypeScript/type checker
- Report all errors with file:line
3. **Lint Check**
- Run linter
- Report warnings and errors
4. **Test Suite**
- Run all tests
- Report pass/fail count
- Report coverage percentage
5. **Console.log Audit**
- Search for console.log in source files
- Report locations
6. **Git Status**
- Show uncommitted changes
- Show files modified since last commit
## Output
Produce a concise verification report:
```
VERIFICATION: [PASS/FAIL]
Build: [OK/FAIL]
Types: [OK/X errors]
Lint: [OK/X issues]
Tests: [X/Y passed, Z% coverage]
Secrets: [OK/X found]
Logs: [OK/X console.logs]
Ready for PR: [YES/NO]
```
If any critical issues, list them with fix suggestions.
- Prefer the `verification-loop` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
$ARGUMENTS can be:
- `quick` - Only build + types
- `full` - All checks (default)
- `pre-commit` - Checks relevant for commits
- `pre-pr` - Full checks plus security scan
`$ARGUMENTS`
## Delegation
Apply the `verification-loop` skill.
- Choose the right verification depth for the user's requested mode.
- Run build, types, lint, tests, security/log checks, and diff review in the right order for the current repo.
- Report only the verdicts and blockers instead of maintaining a second verification checklist here.


@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 36 specialized agents, 142 skills, 68 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 36 specialized agents, 143 skills, 68 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -147,7 +147,7 @@
```
agents/ — 36 specialized subagents
skills/ — 142 workflow skills and domain knowledge
skills/ — 143 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)


@@ -209,7 +209,7 @@ npx ecc-install typescript
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 36 agents, 142 skills, and 68 commands.
**That's it!** You now have access to 36 agents, 143 skills, and 68 commands.
***
@@ -1096,7 +1096,7 @@ opencode
|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 142 skills | PASS: 37 skills | **Claude Code leads** |
| Skills | PASS: 143 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1208,7 +1208,7 @@ ECC is the **first plugin to maximize every major AI coding tool**.
|---------|------------|------------|-----------|----------|
| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 142 | Shared | 10 (native format) | 37 |
| **Skills** | 143 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapters) | N/A | Plugin hooks |
| **Rules** | 34 (common + language) | 34 (YAML front matter) | Instruction-based | 13 instructions |


@@ -333,6 +333,7 @@
"paths": [
"skills/fal-ai-media",
"skills/remotion-video-creation",
"skills/ui-demo",
"skills/video-editing",
"skills/videodb"
],

skills/ui-demo/SKILL.md Normal file

@@ -0,0 +1,465 @@
---
name: ui-demo
description: Record polished UI demo videos using Playwright. Use when the user asks to create a demo, walkthrough, screen recording, or tutorial video of a web application. Produces WebM videos with visible cursor, natural pacing, and professional feel.
origin: ECC
---
# UI Demo Video Recorder
Record polished demo videos of web applications using Playwright's video recording with an injected cursor overlay, natural pacing, and storytelling flow.
## When to Use
- User asks for a "demo video", "screen recording", "walkthrough", or "tutorial"
- User wants to showcase a feature or workflow visually
- User needs a video for documentation, onboarding, or stakeholder presentation
## Three-Phase Process
Every demo goes through three phases: **Discover -> Rehearse -> Record**. Never skip straight to recording.
---
## Phase 1: Discover
Before writing any script, explore the target pages to understand what is actually there.
### Why
You cannot script what you have not seen. Fields may be `<input>` not `<textarea>`, dropdowns may be custom components not `<select>`, and comment boxes may support `@mentions` or `#tags`. Assumptions break recordings silently.
### How
Navigate to each page in the flow and dump its interactive elements:
```javascript
// Run this for each page in the flow BEFORE writing the demo script
const fields = await page.evaluate(() => {
const els = [];
document.querySelectorAll('input, select, textarea, button, [contenteditable]').forEach(el => {
if (el.offsetParent !== null) {
els.push({
tag: el.tagName,
type: el.type || '',
name: el.name || '',
placeholder: el.placeholder || '',
text: el.textContent?.trim().substring(0, 40) || '',
contentEditable: el.contentEditable === 'true',
role: el.getAttribute('role') || '',
});
}
});
return els;
});
console.log(JSON.stringify(fields, null, 2));
```
### What to look for
- **Form fields**: Are they `<select>`, `<input>`, custom dropdowns, or comboboxes?
- **Select options**: Dump option values AND text. Placeholder options often carry `value="0"`, which passes a naive non-empty check. Use `Array.from(el.options).map(o => ({ value: o.value, text: o.text }))`, then skip options whose text includes "Select" or whose value is `"0"` or empty.
- **Rich text**: Does the comment box support `@mentions`, `#tags`, markdown, or emoji? Check placeholder text.
- **Required fields**: Which fields block form submission? Check `required`, `*` in labels, and try submitting empty to see validation errors.
- **Dynamic content**: Do fields appear after other fields are filled?
- **Button labels**: Exact text such as `"Submit"`, `"Submit Request"`, or `"Send"`.
- **Table column headers**: For table-driven modals, map each `input[type="number"]` to its column header instead of assuming all numeric inputs mean the same thing.
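The option-filtering rule above can be expressed as a pure function and reused across discovery scripts. A minimal sketch; the function name is illustrative:

```javascript
// Given [{ value, text }] pairs dumped from a <select>, keep only real
// choices: drop placeholder options whose value is "" or "0", or whose
// text includes "Select".
function realOptions(options) {
  return options.filter(o =>
    o.value !== '' &&
    o.value !== '0' &&
    !/select/i.test(o.text)
  );
}
```

For example, `realOptions([{ value: '0', text: 'Select a budget code' }, { value: '12', text: 'R&D' }])` keeps only the `R&D` entry.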
### Output
A field map for each page, used to write correct selectors in the script. Example:
```text
/purchase-requests/new:
- Budget Code: <select> (first select on page, 4 options)
- Desired Delivery: <input type="date">
- Context: <textarea> (not input)
- BOM table: inline-editable cells with span.cursor-pointer -> input pattern
- Submit: <button> text="Submit"
/purchase-requests/N (detail):
- Comment: <input placeholder="Type a message..."> supports @user and #PR tags
- Send: <button> text="Send" (disabled until input has content)
```
---
## Phase 2: Rehearse
Run through all steps without recording. Verify every selector resolves.
### Why
Silent selector failures are the main reason demo recordings break. Rehearsal catches them before you waste a recording.
### How
Use `ensureVisible`, a wrapper that logs and fails loudly:
```javascript
async function ensureVisible(page, locator, label) {
const el = typeof locator === 'string' ? page.locator(locator).first() : locator;
const visible = await el.isVisible().catch(() => false);
if (!visible) {
const msg = `REHEARSAL FAIL: "${label}" not found - selector: ${typeof locator === 'string' ? locator : '(locator object)'}`;
console.error(msg);
const found = await page.evaluate(() => {
return Array.from(document.querySelectorAll('button, input, select, textarea, a'))
.filter(el => el.offsetParent !== null)
.map(el => `${el.tagName}[${el.type || ''}] "${el.textContent?.trim().substring(0, 30)}"`)
.join('\n ');
});
console.error(' Visible elements:\n ' + found);
return false;
}
console.log(`REHEARSAL OK: "${label}"`);
return true;
}
```
### Rehearsal script structure
```javascript
const steps = [
{ label: 'Login email field', selector: '#email' },
{ label: 'Login submit', selector: 'button[type="submit"]' },
{ label: 'New Request button', selector: 'button:has-text("New Request")' },
{ label: 'Budget Code select', selector: 'select' },
{ label: 'Delivery date', selector: 'input[type="date"]:visible' },
{ label: 'Description field', selector: 'textarea:visible' },
{ label: 'Add Item button', selector: 'button:has-text("Add Item")' },
{ label: 'Submit button', selector: 'button:has-text("Submit")' },
];
let allOk = true;
for (const step of steps) {
if (!await ensureVisible(page, step.selector, step.label)) {
allOk = false;
}
}
if (!allOk) {
console.error('REHEARSAL FAILED - fix selectors before recording');
process.exit(1);
}
console.log('REHEARSAL PASSED - all selectors verified');
```
### When rehearsal fails
1. Read the visible-element dump.
2. Find the correct selector.
3. Update the script.
4. Re-run rehearsal.
5. Only proceed when every selector passes.
---
## Phase 3: Record
Only after discovery and rehearsal pass should you create the recording.
### Recording Principles
#### 1. Storytelling Flow
Plan the video as a story. Follow user-specified order, or use this default:
- **Entry**: Login or navigate to the starting point
- **Context**: Pan the surroundings so viewers orient themselves
- **Action**: Perform the main workflow steps
- **Variation**: Show a secondary feature such as settings, theme, or localization
- **Result**: Show the outcome, confirmation, or new state
#### 2. Pacing
- After login: `4s`
- After navigation: `3s`
- After clicking a button: `2s`
- Between major steps: `1.5-2s`
- After the final action: `3s`
- Typing delay: `25-40ms` per character
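These delays can live in one shared map so every demo script paces identically. A sketch; the constant and key names are illustrative, not part of any recording API:

```javascript
// Pacing constants from the guidelines above (values in ms).
const PACING = {
  afterLogin: 4000,
  afterNavigation: 3000,
  afterClick: 2000,
  betweenSteps: 1750, // midpoint of the 1.5-2s range
  afterFinal: 3000,
  typingDelay: 35,    // per character, within the 25-40ms range
};

// Hypothetical helper: pause(page, 'afterClick') instead of magic numbers.
async function pause(page, key) {
  await page.waitForTimeout(PACING[key]);
}
```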
#### 3. Cursor Overlay
Inject an SVG arrow cursor that follows mouse movements:
```javascript
async function injectCursor(page) {
await page.evaluate(() => {
if (document.getElementById('demo-cursor')) return;
const cursor = document.createElement('div');
cursor.id = 'demo-cursor';
cursor.innerHTML = `<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5 3L19 12L12 13L9 20L5 3Z" fill="white" stroke="black" stroke-width="1.5" stroke-linejoin="round"/>
</svg>`;
cursor.style.cssText = `
position: fixed; z-index: 999999; pointer-events: none;
width: 24px; height: 24px;
transition: left 0.1s, top 0.1s;
filter: drop-shadow(1px 1px 2px rgba(0,0,0,0.3));
`;
cursor.style.left = '0px';
cursor.style.top = '0px';
document.body.appendChild(cursor);
document.addEventListener('mousemove', (e) => {
cursor.style.left = e.clientX + 'px';
cursor.style.top = e.clientY + 'px';
});
});
}
```
Call `injectCursor(page)` after every page navigation because the overlay is destroyed on navigate.
#### 4. Mouse Movement
Never teleport the cursor. Move to the target before clicking:
```javascript
async function moveAndClick(page, locator, label, opts = {}) {
const { postClickDelay = 800, ...clickOpts } = opts;
const el = typeof locator === 'string' ? page.locator(locator).first() : locator;
const visible = await el.isVisible().catch(() => false);
if (!visible) {
console.error(`WARNING: moveAndClick skipped - "${label}" not visible`);
return false;
}
try {
await el.scrollIntoViewIfNeeded();
await page.waitForTimeout(300);
const box = await el.boundingBox();
if (box) {
await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2, { steps: 10 });
await page.waitForTimeout(400);
}
await el.click(clickOpts);
} catch (e) {
console.error(`WARNING: moveAndClick failed on "${label}": ${e.message}`);
return false;
}
await page.waitForTimeout(postClickDelay);
return true;
}
```
Every call should include a descriptive `label` for debugging.
#### 5. Typing
Type visibly, not instant-fill:
```javascript
async function typeSlowly(page, locator, text, label, charDelay = 35) {
const el = typeof locator === 'string' ? page.locator(locator).first() : locator;
const visible = await el.isVisible().catch(() => false);
if (!visible) {
console.error(`WARNING: typeSlowly skipped - "${label}" not visible`);
return false;
}
await moveAndClick(page, el, label);
await el.fill('');
await el.pressSequentially(text, { delay: charDelay });
await page.waitForTimeout(500);
return true;
}
```
#### 6. Scrolling
Use smooth scroll instead of jumps:
```javascript
await page.evaluate(() => window.scrollTo({ top: 400, behavior: 'smooth' }));
await page.waitForTimeout(1500);
```
#### 7. Dashboard Panning
When showing a dashboard or overview page, move the cursor across key elements:
```javascript
async function panElements(page, selector, maxCount = 6) {
const elements = await page.locator(selector).all();
for (let i = 0; i < Math.min(elements.length, maxCount); i++) {
try {
const box = await elements[i].boundingBox();
if (box && box.y < 700) {
await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2, { steps: 8 });
await page.waitForTimeout(600);
}
} catch (e) {
console.warn(`WARNING: panElements skipped element ${i} (selector: "${selector}"): ${e.message}`);
}
}
}
```
#### 8. Subtitles
Inject a subtitle bar at the bottom of the viewport:
```javascript
async function injectSubtitleBar(page) {
await page.evaluate(() => {
if (document.getElementById('demo-subtitle')) return;
const bar = document.createElement('div');
bar.id = 'demo-subtitle';
bar.style.cssText = `
position: fixed; bottom: 0; left: 0; right: 0; z-index: 999998;
text-align: center; padding: 12px 24px;
background: rgba(0, 0, 0, 0.75);
color: white; font-family: -apple-system, "Segoe UI", sans-serif;
font-size: 16px; font-weight: 500; letter-spacing: 0.3px;
transition: opacity 0.3s;
pointer-events: none;
`;
bar.textContent = '';
bar.style.opacity = '0';
document.body.appendChild(bar);
});
}
async function showSubtitle(page, text) {
await page.evaluate((t) => {
const bar = document.getElementById('demo-subtitle');
if (!bar) return;
if (t) {
bar.textContent = t;
bar.style.opacity = '1';
} else {
bar.style.opacity = '0';
}
}, text);
if (text) await page.waitForTimeout(800);
}
```
Call `injectSubtitleBar(page)` alongside `injectCursor(page)` after every navigation.
Usage pattern:
```javascript
await showSubtitle(page, 'Step 1 - Logging in');
await showSubtitle(page, 'Step 2 - Dashboard overview');
await showSubtitle(page, '');
```
Guidelines:
- Keep subtitle text short, ideally under 60 characters.
- Use `Step N - Action` format for consistency.
- Clear the subtitle during long pauses where the UI can speak for itself.
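A small formatter keeps subtitles within these guidelines. A sketch; the helper name is an assumption:

```javascript
// Build "Step N - Action" subtitles, truncated to the 60-character guideline.
function stepSubtitle(n, action, maxLen = 60) {
  const text = `Step ${n} - ${action}`;
  return text.length <= maxLen ? text : text.slice(0, maxLen - 3) + '...';
}
```

Pass the result straight to the subtitle overlay, e.g. `showSubtitle(page, stepSubtitle(1, 'Logging in'))`.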
## Script Template
```javascript
'use strict';
const { chromium } = require('playwright');
const path = require('path');
const fs = require('fs');
const BASE_URL = process.env.QA_BASE_URL || 'http://localhost:3000';
const VIDEO_DIR = path.join(__dirname, 'screenshots');
const OUTPUT_NAME = 'demo-FEATURE.webm';
const REHEARSAL = process.argv.includes('--rehearse');
// Paste injectCursor, injectSubtitleBar, showSubtitle, moveAndClick,
// typeSlowly, ensureVisible, and panElements here.
(async () => {
const browser = await chromium.launch({ headless: true });
if (REHEARSAL) {
const context = await browser.newContext({ viewport: { width: 1280, height: 720 } });
const page = await context.newPage();
// Navigate through the flow and run ensureVisible for each selector.
await browser.close();
return;
}
const context = await browser.newContext({
recordVideo: { dir: VIDEO_DIR, size: { width: 1280, height: 720 } },
viewport: { width: 1280, height: 720 }
});
const page = await context.newPage();
try {
await injectCursor(page);
await injectSubtitleBar(page);
await showSubtitle(page, 'Step 1 - Logging in');
// login actions
await page.goto(`${BASE_URL}/dashboard`);
await injectCursor(page);
await injectSubtitleBar(page);
await showSubtitle(page, 'Step 2 - Dashboard overview');
// pan dashboard
await showSubtitle(page, 'Step 3 - Main workflow');
// action sequence
await showSubtitle(page, 'Step 4 - Result');
// final reveal
await showSubtitle(page, '');
} catch (err) {
console.error('DEMO ERROR:', err.message);
} finally {
await context.close();
const video = page.video();
if (video) {
const src = await video.path();
const dest = path.join(VIDEO_DIR, OUTPUT_NAME);
try {
fs.copyFileSync(src, dest);
console.log('Video saved:', dest);
} catch (e) {
console.error('ERROR: Failed to copy video:', e.message);
console.error(' Source:', src);
console.error(' Destination:', dest);
}
}
await browser.close();
}
})();
```
Usage:
```bash
# Phase 2: Rehearse
node demo-script.cjs --rehearse
# Phase 3: Record
node demo-script.cjs
```
## Checklist Before Recording
- [ ] Discovery phase completed
- [ ] Rehearsal passes with all selectors OK
- [ ] Headless mode enabled
- [ ] Resolution set to `1280x720`
- [ ] Cursor and subtitle overlays re-injected after every navigation
- [ ] `showSubtitle(page, 'Step N - ...')` used at major transitions
- [ ] `moveAndClick` used for all clicks with descriptive labels
- [ ] `typeSlowly` used for visible input
- [ ] No silent catches; helpers log warnings
- [ ] Smooth scrolling used for content reveal
- [ ] Key pauses are visible to a human viewer
- [ ] Flow matches the requested story order
- [ ] Script reflects the actual UI discovered in phase 1
## Common Pitfalls
1. Cursor disappears after navigation - re-inject it.
2. Video is too fast - add pauses.
3. Cursor is a dot instead of an arrow - use the SVG overlay.
4. Cursor teleports - move before clicking.
5. Select dropdowns look wrong - show the move, then pick the option.
6. Modals feel abrupt - add a read pause before confirming.
7. Video file path is random - copy it to a stable output name.
8. Selector failures are swallowed - never use silent catch blocks.
9. Field types were assumed - discover them first.
10. Features were assumed - inspect the actual UI before scripting.
11. Placeholder select values look real - watch for `"0"` and `"Select..."`.
12. Popups create separate videos - capture popup pages explicitly and merge later if needed.