8 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Affaan Mustafa | 1abeff9be7 | feat: add connected operator workflow skills | 2026-04-01 02:37:42 -07:00 |
| Affaan Mustafa | 975100db55 | refactor: collapse legacy command bodies into skills | 2026-04-01 02:25:42 -07:00 |
| Affaan Mustafa | 6833454778 | fix: dedupe managed hooks by semantic identity | 2026-04-01 02:16:32 -07:00 |
| Affaan Mustafa | e134e492cb | docs: close bundle drift and sync plugin guidance | 2026-04-01 02:13:09 -07:00 |
| Affaan Mustafa | f3db349984 | docs: shift repo guidance to skills-first workflows | 2026-04-01 02:11:24 -07:00 |
| Affaan Mustafa | 5194d2000a | docs: tighten voice-driven content skills | 2026-04-01 01:31:40 -07:00 |
| Affaan Mustafa | 43ac81f1ac | fix: harden reusable release tag validation | 2026-03-31 23:00:58 -07:00 |
| Affaan Mustafa | e1bc08fa6e | fix: harden install planning and sync tracked catalogs | 2026-03-31 22:57:48 -07:00 |
68 changed files with 2870 additions and 1383 deletions

View File

@@ -6,7 +6,7 @@ origin: ECC
# Article Writing
-Write long-form content that sounds like a real person or brand, not generic AI output.
+Write long-form content that sounds like an actual person with a point of view, not an LLM smoothing itself into paste.
## When to Activate
@@ -17,69 +17,90 @@ Write long-form content that sounds like a real person or brand, not generic AI
## Core Rules
-1. Lead with the concrete thing: example, output, anecdote, number, screenshot description, or code block.
+1. Lead with the concrete thing: artifact, example, output, anecdote, number, screenshot, or code.
2. Explain after the example, not before.
-3. Prefer short, direct sentences over padded ones.
+3. Keep sentences tight unless the source voice is intentionally expansive.
-4. Use specific numbers when available and sourced.
+4. Use proof instead of adjectives.
-5. Never invent biographical facts, company metrics, or customer evidence.
+5. Never invent facts, credibility, or customer evidence.
## Voice Capture Workflow
If the user wants a specific voice, collect one or more of:
- published articles
- newsletters
-- X / LinkedIn posts
+- X posts or threads
- docs or memos
-- a short style guide
+- launch notes
+- a style guide
Then extract:
- sentence length and rhythm
-- whether the voice is formal, conversational, or sharp
+- whether the writing is compressed, explanatory, sharp, or formal
-- favored rhetorical devices such as parentheses, lists, fragments, or questions
+- how parentheses are used
-- tolerance for humor, opinion, and contrarian framing
+- how often the writer asks questions
-- formatting habits such as headers, bullets, code blocks, and pull quotes
+- whether the writer uses fragments, lists, or hard pivots
+- formatting habits such as headers, bullets, code blocks, pull quotes
+- what the writer clearly avoids
-If no voice references are given, default to a direct, operator-style voice: concrete, practical, and low on hype.
+If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.
+## Affaan / ECC Voice Reference
+When matching Affaan / ECC voice, bias toward:
+- direct claims over scene-setting
+- high specificity
+- parentheticals used for qualification or over-clarification, not comedy
+- capitalization chosen situationally, not as a gimmick
+- very low tolerance for fake thought-leadership cadence
+- almost no bait questions
## Banned Patterns
Delete and rewrite any of these:
-- generic openings like "In today's rapidly evolving landscape"
+- "In today's rapidly evolving landscape"
-- filler transitions such as "Moreover" and "Furthermore"
+- "game-changer", "cutting-edge", "revolutionary"
-- hype phrases like "game-changer", "cutting-edge", or "revolutionary"
+- "no fluff"
-- vague claims without evidence
+- "not X, just Y"
-- biography or credibility claims not backed by provided context
+- "here's why this matters" as a standalone bridge
+- fake vulnerability arcs
+- a closing question added only to juice engagement
+- forced lowercase
+- corny parenthetical asides
+- biography padding that does not move the argument
## Writing Process
1. Clarify the audience and purpose.
-2. Build a skeletal outline with one purpose per section.
+2. Build a hard outline with one job per section.
-3. Start each section with evidence, example, or scene.
+3. Start sections with proof, artifact, conflict, or example.
-4. Expand only where the next sentence earns its place.
+4. Expand only where the next sentence earns space.
-5. Remove anything that sounds templated or self-congratulatory.
+5. Cut anything that sounds templated, overexplained, or self-congratulatory.
## Structure Guidance
### Technical Guides
- open with what the reader gets
-- use code or terminal examples in every major section
-- end with concrete takeaways, not a soft summary
-### Essays / Opinion Pieces
-- start with tension, contradiction, or a sharp observation
+- use code, commands, screenshots, or concrete output in major sections
+- end with actionable takeaways, not a soft recap
+### Essays / Opinion
+- start with tension, contradiction, or a specific observation
- keep one argument thread per section
-- use examples that earn the opinion
+- make opinions answer to evidence
### Newsletters
-- keep the first screen strong
-- mix insight with updates, not diary filler
-- use clear section labels and easy skim structure
+- keep the first screen doing real work
+- do not front-load diary filler
+- use section labels only when they improve scanability
## Quality Gate
Before delivering:
-- verify factual claims against provided sources
+- factual claims are backed by provided sources
-- remove filler and corporate language
+- generic AI transitions are gone
-- confirm the voice matches the supplied examples
+- the voice matches the supplied examples
-- ensure every section adds new information
+- every section adds something new
-- check formatting for the intended platform
+- formatting matches the intended medium

View File

@@ -6,83 +6,145 @@ origin: ECC
# Content Engine
-Turn one idea into strong, platform-native content instead of posting the same thing everywhere.
+Build platform-native content without flattening the author's real voice into platform slop.
## When to Activate
- writing X posts or threads
- drafting LinkedIn posts or launch updates
- scripting short-form video or YouTube explainers
-- repurposing articles, podcasts, demos, or docs into social content
+- repurposing articles, podcasts, demos, docs, or internal notes into public content
-- building a lightweight content plan around a launch, milestone, or theme
+- building a launch sequence or ongoing content system around a product, insight, or narrative
-## First Questions
+## Non-Negotiables
-Clarify:
+1. Start from source material, not generic post formulas.
-- source asset: what are we adapting from
+2. Adapt the format for the platform, not the persona.
-- audience: builders, investors, customers, operators, or general audience
+3. One post should carry one actual claim.
-- platform: X, LinkedIn, TikTok, YouTube, newsletter, or multi-platform
+4. Specificity beats adjectives.
-- goal: awareness, conversion, recruiting, authority, launch support, or engagement
+5. No engagement bait unless the user explicitly asks for it.
-## Core Rules
+## Source-First Workflow
-1. Adapt for the platform. Do not cross-post the same copy.
+Before drafting, identify the source set:
-2. Hooks matter more than summaries.
+- published articles
-3. Every post should carry one clear idea.
+- notes or internal memos
-4. Use specifics over slogans.
+- product demos
-5. Keep the ask small and clear.
+- docs or changelogs
+- transcripts
+- screenshots
+- prior posts from the same author
-## Platform Guidance
+If the user wants a specific voice, build a voice profile from real examples before writing.
+## Voice Capture Workflow
+Collect 5 to 20 examples when available. Good sources:
+- articles or essays
+- X posts or threads
+- docs or release notes
+- newsletters
+- previous launch posts
+If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.
+Extract and write down:
+- sentence length and rhythm
+- how compressed or explanatory the writing is
+- whether capitalization is conventional, mixed, or situational
+- how parentheses are used
+- whether the writer uses fragments, lists, or abrupt pivots
+- how often the writer asks questions
+- how sharp, formal, opinionated, or dry the voice is
+- what the writer never does
+Do not start drafting until the voice profile is clear enough to enforce.
+## Affaan / ECC Voice Reference
+When the user wants Affaan / ECC voice specifically, default to this unless newer examples clearly override it:
+- direct, compressed, concrete
+- strong preference for specific claims, numbers, mechanisms, and receipts
+- parentheticals used to qualify, narrow, or over-clarify, not to do corny bits
+- lowercase is optional, not mandatory
+- questions are rare and should not be added as bait
+- transitions should feel earned, not polished
+- tone can be sharp or blunt, but should not sound like a content marketer
+## Hard Bans
+Delete and rewrite any of these:
+- "In today's rapidly evolving landscape"
+- "game-changer", "revolutionary", "cutting-edge"
+- "no fluff"
+- "not X, just Y"
+- "here's why this matters" unless it is followed immediately by something concrete
+- "Excited to share"
+- fake curiosity gaps
+- ending with a LinkedIn-style question just to farm replies
+- forced lowercase when the source voice does not call for it
+- forced casualness on LinkedIn
+- parenthetical jokes that were not present in the source voice
+## Platform Adaptation Rules
### X
-- open fast
-- one idea per post or per tweet in a thread
+- open with the strongest claim, artifact, or tension
-- keep links out of the main body unless necessary
+- keep the compression if the source voice is compressed
-- avoid hashtag spam
+- if writing a thread, each post must advance the argument
+- do not pad with context the audience does not need
### LinkedIn
-- strong first line
-- short paragraphs
-- more explicit framing around lessons, results, and takeaways
-### TikTok / Short Video
+- expand only enough for people outside the immediate niche to follow
-- first 3 seconds must interrupt attention
+- do not turn it into a fake lesson post unless the source material actually is reflective
-- script around visuals, not just narration
+- no corporate inspiration cadence
-- one demo, one claim, one CTA
+- no praise-stacking, no "journey" filler
+### Short Video
+- script around the visual sequence and proof points
+- first seconds should show the result, problem, or punch
+- do not write narration that sounds better on paper than on screen
### YouTube
-- show the result early
-- structure by chapter
+- show the result or tension early
-- refresh the visual every 20-30 seconds
+- organize by argument or progression, not filler sections
+- use chaptering only when it helps clarity
### Newsletter
-- deliver one clear lens, not a bundle of unrelated items
-- make section titles skimmable
+- open with the point, conflict, or artifact
-- keep the opening paragraph doing real work
+- do not spend the first paragraph warming up
+- every section needs to add something new
## Repurposing Flow
-Default cascade:
+1. Pick the anchor asset.
-1. anchor asset: article, video, demo, memo, or launch doc
+2. Extract 3 to 7 atomic claims or scenes.
-2. extract 3-7 atomic ideas
+3. Rank them by sharpness, novelty, and proof.
-3. write platform-native variants
+4. Assign one strong idea per output.
-4. trim repetition across outputs
+5. Adapt structure for each platform.
-5. align CTAs with platform intent
+6. Strip platform-shaped filler.
+7. Run the quality gate.
## Deliverables
When asked for a campaign, return:
+- a short voice profile if voice matching matters
- the core angle
-- platform-specific drafts
+- platform-native drafts
-- optional posting order
+- posting order only if it helps execution
-- optional CTA variants
+- gaps that must be filled before publishing
-- any missing inputs needed before publishing
## Quality Gate
Before delivering:
-- each draft reads natively for its platform
+- every draft sounds like the intended author, not the platform stereotype
-- hooks are strong and specific
+- every draft contains a real claim, proof point, or concrete observation
-- no generic hype language
+- no generic hype language remains
+- no fake engagement bait remains
- no duplicated copy across platforms unless requested
-- the CTA matches the content and audience
+- any CTA is earned and user-approved

View File

@@ -6,183 +6,109 @@ origin: ECC
# Crosspost
-Distribute content across multiple social platforms with platform-native adaptation.
+Distribute content across platforms without turning it into the same fake post in four costumes.
## When to Activate
-- User wants to post content to multiple platforms
+- the user wants to publish the same underlying idea across multiple platforms
-- Publishing announcements, launches, or updates across social media
+- a launch, update, release, or essay needs platform-specific versions
-- Repurposing a post from one platform to others
+- the user says "crosspost", "post this everywhere", or "adapt this for X and LinkedIn"
-- User says "crosspost", "post everywhere", "share on all platforms", or "distribute this"
## Core Rules
-1. **Never post identical content cross-platform.** Each platform gets a native adaptation.
+1. Do not publish identical copy across platforms.
-2. **Primary platform first.** Post to the main platform, then adapt for others.
+2. Preserve the author's voice across platforms.
-3. **Respect platform conventions.** Length limits, formatting, link handling all differ.
+3. Adapt for constraints, not stereotypes.
-4. **One idea per post.** If the source content has multiple ideas, split across posts.
+4. One post should still be about one thing.
-5. **Attribution matters.** If crossposting someone else's content, credit the source.
+5. Do not invent a CTA, question, or moral if the source did not earn one.
-## Platform Specifications
-| Platform | Max Length | Link Handling | Hashtags | Media |
-|----------|-----------|---------------|----------|-------|
-| X | 280 chars (4000 for Premium) | Counted in length | Minimal (1-2 max) | Images, video, GIFs |
-| LinkedIn | 3000 chars | Not counted in length | 3-5 relevant | Images, video, docs, carousels |
-| Threads | 500 chars | Separate link attachment | None typical | Images, video |
-| Bluesky | 300 chars | Via facets (rich text) | None (use feeds) | Images |
## Workflow
-### Step 1: Create Source Content
+### Step 1: Start with the Primary Version
-Start with the core idea. Use `content-engine` skill for high-quality drafts:
+Pick the strongest source version first:
-- Identify the single core message
+- the original X post
-- Determine the primary platform (where the audience is biggest)
+- the original article
-- Draft the primary platform version first
+- the launch note
+- the thread
+- the memo or changelog
-### Step 2: Identify Target Platforms
+Use `content-engine` first if the source still needs voice shaping.
-Ask the user or determine from context:
+### Step 2: Capture the Voice Fingerprint
-- Which platforms to target
-- Priority order (primary gets the best version)
-- Any platform-specific requirements (e.g., LinkedIn needs professional tone)
-### Step 3: Adapt Per Platform
+Before adapting, note:
+- how blunt or explanatory the source is
+- whether the source uses fragments, lists, or longer transitions
+- whether the source uses parentheses
+- whether the source avoids questions, hashtags, or CTA language
-For each target platform, transform the content:
+The adaptation should preserve that fingerprint.
-**X adaptation:**
+### Step 3: Adapt by Platform Constraint
-- Open with a hook, not a summary
-- Cut to the core insight fast
-- Keep links out of main body when possible
-- Use thread format for longer content
-**LinkedIn adaptation:**
+### X
-- Strong first line (visible before "see more")
-- Short paragraphs with line breaks
-- Frame around lessons, results, or professional takeaways
-- More explicit context than X (LinkedIn audience needs framing)
-**Threads adaptation:**
+- keep it compressed
-- Conversational, casual tone
+- lead with the sharpest claim or artifact
-- Shorter than LinkedIn, less compressed than X
+- use a thread only when a single post would collapse the argument
-- Visual-first if possible
+- avoid hashtags and generic filler
-**Bluesky adaptation:**
+### LinkedIn
-- Direct and concise (300 char limit)
-- Community-oriented tone
-- Use feeds/lists for topic targeting instead of hashtags
-### Step 4: Post Primary Platform
+- add only the context needed for people outside the niche
+- do not turn it into a fake founder-reflection post
+- do not add a closing question just because it is LinkedIn
+- do not force a polished "professional tone" if the author is naturally sharper
-Post to the primary platform first:
+### Threads
-- Use `x-api` skill for X
-- Use platform-specific APIs or tools for others
-- Capture the post URL for cross-referencing
-### Step 5: Post to Secondary Platforms
+- keep it readable and direct
+- do not write fake hyper-casual creator copy
+- do not paste the LinkedIn version and shorten it
-Post adapted versions to remaining platforms:
+### Bluesky
-- Stagger timing (not all at once — 30-60 min gaps)
-- Include cross-platform references where appropriate ("longer thread on X" etc.)
-## Content Adaptation Examples
+- keep it concise
+- preserve the author's cadence
+- do not rely on hashtags or feed-gaming language
-### Source: Product Launch
+## Posting Order
-**X version:**
+Default:
-```
+1. post the strongest native version first
-We just shipped [feature].
+2. adapt for the secondary platforms
+3. stagger timing only if the user wants sequencing help
-[One specific thing it does that's impressive]
+Do not add cross-platform references unless useful. Most of the time, the post should stand on its own.
-[Link]
+## Banned Patterns
-```
-**LinkedIn version:**
+Delete and rewrite any of these:
-```
+- "Excited to share"
-Excited to share: we just launched [feature] at [Company].
+- "Here's what I learned"
+- "What do you think?"
+- "link in bio" unless that is literally true
+- "not X, just Y"
+- generic "professional takeaway" paragraphs that were not in the source
-Here's why it matters:
+## Output Format
-[2-3 short paragraphs with context]
+Return:
+- the primary platform version
-[Takeaway for the audience]
+- adapted variants for each requested platform
+- a short note on what changed and why
-[Link]
+- any publishing constraint the user still needs to resolve
-```
-**Threads version:**
-```
-just shipped something cool — [feature]
-[casual explanation of what it does]
-link in bio
-```
-### Source: Technical Insight
-**X version:**
-```
-TIL: [specific technical insight]
-[Why it matters in one sentence]
-```
-**LinkedIn version:**
-```
-A pattern I've been using that's made a real difference:
-[Technical insight with professional framing]
-[How it applies to teams/orgs]
-#relevantHashtag
-```
-## API Integration
-### Batch Crossposting Service (Example Pattern)
-If using a crossposting service (e.g., Postbridge, Buffer, or a custom API), the pattern looks like:
-```python
-import os
-import requests
-resp = requests.post(
-    "https://api.postbridge.io/v1/posts",
-    headers={"Authorization": f"Bearer {os.environ['POSTBRIDGE_API_KEY']}"},
-    json={
-        "platforms": ["twitter", "linkedin", "threads"],
-        "content": {
-            "twitter": {"text": x_version},
-            "linkedin": {"text": linkedin_version},
-            "threads": {"text": threads_version}
-        }
-    }
-)
-```
-### Manual Posting
-Without Postbridge, post to each platform using its native API:
-- X: Use `x-api` skill patterns
-- LinkedIn: LinkedIn API v2 with OAuth 2.0
-- Threads: Threads API (Meta)
-- Bluesky: AT Protocol API
## Quality Gate
-Before posting:
+Before delivering:
-- [ ] Each platform version reads naturally for that platform
+- each version reads like the same author under different constraints
-- [ ] No identical content across platforms
+- no platform version feels padded or sanitized
-- [ ] Length limits respected
+- no copy is duplicated verbatim across platforms
-- [ ] Links work and are placed appropriately
+- any extra context added for LinkedIn or newsletter use is actually necessary
-- [ ] Tone matches platform conventions
-- [ ] Media is sized correctly for each platform
## Related Skills
-- `content-engine` — Generate platform-native content
+- `content-engine` for voice capture and source shaping
-- `x-api` — X/Twitter API integration
+- `x-api` for X publishing workflows

View File

@@ -304,24 +304,24 @@ Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```
-### Add New Command
+### Add New Workflow Surface
-Adds a new command to the system, often paired with a backing skill.
+Adds or updates a workflow entrypoint. Default to skills-first; only add a command shim when legacy slash compatibility is still required.
**Frequency**: ~1 times per month
**Steps**:
-1. Create a new markdown file under commands/{command-name}.md
+1. Create or update the canonical workflow under skills/{skill-name}/SKILL.md
-2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
+2. Only if needed, add or update commands/{command-name}.md as a compatibility shim
**Files typically involved**:
-- `commands/*.md`
- `skills/*/SKILL.md`
+- `commands/*.md` (only when a legacy shim is intentionally retained)
**Example commit sequence**:
```
-Create a new markdown file under commands/{command-name}.md
+Create or update the canonical skill under skills/{skill-name}/SKILL.md
-Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
+Only if needed, add or update commands/{command-name}.md as a compatibility shim
```
### Sync Catalog Counts

View File

@@ -6,7 +6,7 @@ origin: ECC
# Investor Outreach
-Write investor communication that is short, personalized, and easy to act on.
+Write investor communication that is short, concrete, and easy to act on.
## When to Activate
@@ -20,17 +20,28 @@ Write investor communication that is short, personalized, and easy to act on.
1. Personalize every outbound message.
2. Keep the ask low-friction.
-3. Use proof, not adjectives.
+3. Use proof instead of adjectives.
4. Stay concise.
-5. Never send generic copy that could go to any investor.
+5. Never send copy that could go to any investor.
+## Hard Bans
+Delete and rewrite any of these:
+- "I'd love to connect"
+- "excited to share"
+- generic thesis praise without a real tie-in
+- vague founder adjectives
+- "no fluff"
+- begging language
+- soft closing questions when a direct ask is clearer
## Cold Email Structure
1. subject line: short and specific
2. opener: why this investor specifically
-3. pitch: what the company does, why now, what proof matters
+3. pitch: what the company does, why now, and what proof matters
4. ask: one concrete next step
-5. sign-off: name, role, one credibility anchor if needed
+5. sign-off: name, role, and one credibility anchor if needed
## Personalization Sources
@@ -40,14 +51,14 @@ Reference one or more of:
- a mutual connection
- a clear market or product fit with the investor's focus
-If that context is missing, ask for it or state that the draft is a template awaiting personalization.
+If that context is missing, state that the draft still needs personalization instead of pretending it is finished.
## Follow-Up Cadence
Default:
- day 0: initial outbound
-- day 4-5: short follow-up with one new data point
+- day 4 or 5: short follow-up with one new data point
-- day 10-12: final follow-up with a clean close
+- day 10 to 12: final follow-up with a clean close
Do not keep nudging after that unless the user wants a longer sequence.
@@ -69,8 +80,8 @@ Include:
## Quality Gate
Before delivering:
-- message is personalized
+- the message is genuinely personalized
- the ask is explicit
-- there is no fluff or begging language
- the proof point is concrete
+- filler praise and softener language are gone
- word count stays tight

View File

@@ -6,7 +6,7 @@ These constraints are not obvious from public examples and have caused repeated
### Custom Endpoints and Gateways
-ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, commands, and skills execute locally after the CLI starts successfully.
+ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, skills, and any retained legacy command shims execute locally after the CLI starts successfully.
Use Claude Code's own environment/configuration for transport selection, for example:

View File

@@ -1,7 +1,7 @@
{ {
"$schema": "https://anthropic.com/claude-code/marketplace.schema.json", "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
"name": "everything-claude-code", "name": "everything-claude-code",
"description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use", "description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner — agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use",
"owner": { "owner": {
"name": "Affaan Mustafa", "name": "Affaan Mustafa",
"email": "me@affaanmustafa.com" "email": "me@affaanmustafa.com"
@@ -13,7 +13,7 @@
{ {
"name": "everything-claude-code", "name": "everything-claude-code",
"source": "./", "source": "./",
"description": "The most comprehensive Claude Code plugin — 14+ agents, 56+ skills, 33+ commands, and production-ready hooks for TDD, security scanning, code review, and continuous learning", "description": "The most comprehensive Claude Code plugin — 36 agents, 142 skills, 68 legacy command shims, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
"version": "1.9.0", "version": "1.9.0",
"author": { "author": {
"name": "Affaan Mustafa", "name": "Affaan Mustafa",

View File

@@ -1,7 +1,7 @@
{ {
"name": "everything-claude-code", "name": "everything-claude-code",
"version": "1.9.0", "version": "1.9.0",
"description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use", "description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use",
"author": { "author": {
"name": "Affaan Mustafa", "name": "Affaan Mustafa",
"url": "https://x.com/affaanmustafa" "url": "https://x.com/affaanmustafa"

View File

@@ -12,7 +12,7 @@ This directory contains the **Codex plugin manifest** for Everything Claude Code
## What This Provides
-- **125 skills** from `./skills/` — reusable Codex workflows for TDD, security,
+- **142 skills** from `./skills/` — reusable Codex workflows for TDD, security,
code review, architecture, and more
- **6 MCP servers** — GitHub, Context7, Exa, Memory, Playwright, Sequential Thinking
@@ -45,5 +45,7 @@ Run this from the repository root so `./` points to the repo root and `.mcp.json
- The `skills/` directory at the repo root is shared between Claude Code (`.claude-plugin/`)
and Codex (`.codex-plugin/`) — same source of truth, no duplication
+- ECC is moving to a skills-first workflow surface. Legacy `commands/` remain for
+compatibility on harnesses that still expect slash-entry shims.
- MCP server credentials are inherited from the launching environment (env vars)
- This manifest does **not** override `~/.codex/config.toml` settings

View File

@@ -28,8 +28,10 @@ jobs:
fetch-depth: 0
- name: Validate version tag
+env:
+INPUT_TAG: ${{ inputs.tag }}
run: |
-if ! [[ "${{ inputs.tag }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
+if ! [[ "$INPUT_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Invalid version tag format. Expected vX.Y.Z"
exit 1
fi
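The hardening here is that the workflow input now reaches the shell through an `env:` entry (`INPUT_TAG`) rather than being interpolated into the script body with `${{ }}`, so untrusted tag text never becomes part of the script itself. A standalone sketch of the same check, runnable outside CI (the fallback value is hypothetical and only for local testing):

```bash
#!/usr/bin/env bash
set -euo pipefail

# In the workflow, INPUT_TAG is supplied from inputs.tag via the env: block.
# The default below is a stand-in so the check can be exercised locally.
INPUT_TAG="${INPUT_TAG:-v1.9.0}"

if ! [[ "$INPUT_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  echo "Invalid version tag format. Expected vX.Y.Z"
  exit 1
fi

echo "Version tag $INPUT_TAG is valid"
```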

View File

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
-This is a **production-ready AI coding plugin** providing 36 specialized agents, 142 skills, 68 commands, and automated hook workflows for software development.
+This is a **production-ready AI coding plugin** providing 36 specialized agents, 147 skills, 68 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -116,6 +116,12 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
- If there is no obvious project doc location, ask before creating a new top-level file
5. **Commit** — Conventional commits format, comprehensive PR summaries
+## Workflow Surface Policy
+- `skills/` is the canonical workflow surface.
+- New workflow contributions should land in `skills/` first.
+- `commands/` is a legacy slash-entry compatibility surface and should only be added or updated when a shim is still required for migration or cross-harness parity.
## Git Workflow
**Commit format:** `<type>: <description>` — Types: feat, fix, refactor, docs, test, chore, perf, ci
@@ -140,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
```
agents/ — 36 specialized subagents
-skills/ — 142 workflow skills and domain knowledge
+skills/ — 147 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
@@ -149,6 +155,8 @@ mcp-configs/ — 14 MCP server configurations
tests/ — Test suite
```
+`commands/` remains in the repo for compatibility, but the long-term direction is skills-first.
## Success Metrics
- All tests pass with 80%+ coverage

View File

@@ -34,7 +34,7 @@
**The performance optimization system for AI agent harnesses. From an Anthropic hackathon winner.**
-Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, hooks, commands, rules, and MCP configurations evolved over 10+ months of intensive daily use building real products.
+Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, skills, hooks, rules, MCP configurations, and legacy command shims evolved over 10+ months of intensive daily use building real products.
Works across **Claude Code**, **Codex**, **Cowork**, and other AI agent harnesses.
@@ -212,17 +212,20 @@ For manual install instructions see the README in the `rules/` folder. When copy
### Step 3: Start Using
```bash
-# Try a command (plugin install uses namespaced form)
+# Skills are the primary workflow surface.
+# Existing slash-style command names still work while ECC migrates off commands/.
+# Plugin install uses the namespaced form
/everything-claude-code:plan "Add user authentication"
-# Manual install (Option 2) uses the shorter form:
+# Manual install keeps the shorter slash form:
# /plan "Add user authentication"
# Check available commands
/plugin list everything-claude-code@everything-claude-code
```
-**That's it!** You now have access to 36 agents, 142 skills, and 68 commands.
+**That's it!** You now have access to 36 agents, 147 skills, and 68 legacy command shims.
### Multi-model commands require additional setup
@@ -392,7 +395,7 @@ everything-claude-code/
| |-- autonomous-loops/ # Autonomous loop patterns: sequential pipelines, PR loops, DAG orchestration (NEW)
| |-- plankton-code-quality/ # Write-time code quality enforcement with Plankton hooks (NEW)
| |
-|-- commands/ # Slash commands for quick execution
+|-- commands/ # Legacy slash-entry shims; prefer skills/
| |-- tdd.md # /tdd - Test-driven development
| |-- plan.md # /plan - Implementation planning
| |-- e2e.md # /e2e - E2E test generation
@@ -671,10 +674,7 @@ cp -r everything-claude-code/rules/python ~/.claude/rules/
cp -r everything-claude-code/rules/golang ~/.claude/rules/
cp -r everything-claude-code/rules/php ~/.claude/rules/
-# Copy commands
+# Copy skills first (primary workflow surface)
-cp everything-claude-code/commands/*.md ~/.claude/commands/
-# Copy skills (core vs niche)
# Recommended (new users): core/general skills only
cp -r everything-claude-code/.agents/skills/* ~/.claude/skills/
cp -r everything-claude-code/skills/search-first ~/.claude/skills/
@@ -683,6 +683,10 @@ cp -r everything-claude-code/skills/search-first ~/.claude/skills/
# for s in django-patterns django-tdd laravel-patterns springboot-patterns; do
#   cp -r everything-claude-code/skills/$s ~/.claude/skills/
# done
+# Optional: keep legacy slash-command compatibility during migration
+mkdir -p ~/.claude/commands
+cp everything-claude-code/commands/*.md ~/.claude/commands/
```
#### Add hooks to settings.json
@@ -691,7 +695,7 @@ Copy the hooks from `hooks/hooks.json` to your `~/.claude/settings.json`.
#### Configure MCPs
-Copy desired MCP servers from `mcp-configs/mcp-servers.json` to your `~/.claude.json`.
+Copy desired MCP server definitions from `mcp-configs/mcp-servers.json` into your official Claude Code config in `~/.claude/settings.json`, or into a project-scoped `.mcp.json` if you want repo-local MCP access.
**Important:** Replace `YOUR_*_HERE` placeholders with your actual API keys.
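For the project-scoped option, one low-risk starting point is to seed `.mcp.json` from the shipped catalog and trim it by hand; the path below assumes the repo was cloned into `everything-claude-code/`:

```bash
# Seed a project-scoped MCP config from the repo's catalog, then edit it:
# delete the servers you do not want and replace the YOUR_*_HERE placeholders with real keys.
cp everything-claude-code/mcp-configs/mcp-servers.json .mcp.json
"${EDITOR:-vi}" .mcp.json
```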
@@ -716,7 +720,7 @@ You are a senior code reviewer...
### Skills
-Skills are workflow definitions invoked by commands or agents:
+Skills are the primary workflow surface. They can be invoked directly, suggested automatically, and reused by agents. ECC still ships `commands/` during migration, but new workflow development should land in `skills/` first.
```markdown
# TDD Workflow
@@ -762,7 +766,7 @@ See [`rules/README.md`](rules/README.md) for installation and structure details.
## Which Agent Should I Use?
-Not sure where to start? Use this quick reference:
+Not sure where to start? Use this quick reference. Skills are the canonical workflow surface; slash entries below are the compatibility form most users already know.
| I want to... | Use this command | Agent used |
|--------------|-----------------|------------|
@@ -782,6 +786,8 @@ Not sure where to start? Use this quick reference:
### Common Workflows
+Slash forms below are shown because they are still the fastest familiar entrypoint. Under the hood, ECC is shifting these workflows toward skills-first definitions.
**Starting a new feature:**
```
/everything-claude-code:plan "Add user authentication with OAuth"
@@ -1114,7 +1120,7 @@ The configuration is automatically detected from `.opencode/opencode.json`.
|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
-| Skills | PASS: 142 skills | PASS: 37 skills | **Claude Code leads** |
+| Skills | PASS: 147 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1134,7 +1140,7 @@ OpenCode's plugin system is MORE sophisticated than Claude Code with 20+ event t
**Additional OpenCode events**: `file.edited`, `file.watcher.updated`, `message.updated`, `lsp.client.diagnostics`, `tui.toast.show`, and more.
-### Available Commands (31+)
+### Available Slash Entry Shims (31+)
| Command | Description |
|---------|-------------|
@@ -1221,9 +1227,9 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
-| **Agents** | 21 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
+| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
-| **Commands** | 52 | Shared | Instruction-based | 31 |
+| **Commands** | 68 | Shared | Instruction-based | 31 |
-| **Skills** | 102 | Shared | 10 (native format) | 37 |
+| **Skills** | 147 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

View File

@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
-**完成!** 你现在可以使用 13 个代理、43 个技能和 31 个命令。
+**完成!** 你现在可以使用 36 个代理、147 个技能和 68 个命令。
### multi-* 命令需要额外配置

View File

@@ -30,6 +30,12 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- control plane primitives
- operator surface
- self-improving skills
+- Skill quality:
+- rewrite content-facing skills to use source-backed voice modeling
+- remove generic LLM rhetoric, canned CTA patterns, and forced platform stereotypes
+- continue one-by-one audit of overlapping or low-signal skill content
+- move repo guidance and contribution flow to skills-first, leaving commands only as explicit compatibility shims
+- add operator skills that wrap connected surfaces instead of exposing only raw APIs or disconnected primitives
- Security:
- keep dependency posture clean
- preserve self-contained hook and MCP behavior
@@ -39,6 +45,8 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- Closed on 2026-04-01 under backlog hygiene / merge policy:
- `#1069` `feat: add everything-claude-code ECC bundle`
- `#1068` `feat: add everything-claude-code-conventions ECC bundle`
+- `#1080` `feat: add everything-claude-code ECC bundle`
+- `#1079` `feat: add everything-claude-code-conventions ECC bundle`
- `#1064` `chore(deps-dev): bump @eslint/js from 9.39.2 to 10.0.1`
- `#1063` `chore(deps-dev): bump eslint from 9.39.2 to 10.1.0`
- Closed on 2026-04-01 because the content is sourced from external ecosystems and should only land via manual ECC-native re-port:
@@ -48,10 +56,11 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- Native-support candidates to fully diff-audit next:
- `#1055` Dart / Flutter support
- `#1043` C# reviewer and .NET skills
-- `#834` localized catalog sync and antigravity target filtering
+- Direct-port candidates landed after audit:
+- `#1078` hook-id dedupe for managed Claude hook reinstalls
+- `#844` ui-demo skill
- Port or rebuild inside ECC after full audit:
- `#894` Jira integration
-- `#844` ui-demo skill
- `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces
## Interfaces
@@ -62,6 +71,7 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- `ECC-206` ecosystem CI baseline
- `ECC-207` PR backlog audit and merge-policy enforcement
- `ECC-208` context hygiene
+- `ECC-210` skills-first workflow migration and command compatibility retirement
## Update Rule
@@ -75,3 +85,15 @@ Keep this file detailed for only the current sprint, blockers, and next actions.
- 2026-04-01: Notification PRs `#808` and `#814` were identified as overlapping and should be rebuilt as one unified feature instead of landing as parallel branches.
- 2026-04-01: External-source skill PRs `#640`, `#851`, and `#852` were closed under the new ingestion policy; copy ideas from audited source later rather than merging branded/source-import PRs directly.
- 2026-04-01: The remaining low GitHub advisory on `ecc2/Cargo.lock` was addressed by moving `ratatui` to `0.30` with `crossterm_0_28`, which updated transitive `lru` from `0.12.5` to `0.16.3`. `cargo build --manifest-path ecc2/Cargo.toml` still passes.
+- 2026-04-01: Safe core of `#834` was ported directly into `main` instead of merging the PR wholesale. This included stricter install-plan validation, antigravity target filtering that skips unsupported module trees, tracked catalog sync for English plus zh-CN docs, and a dedicated `catalog:sync` write mode.
+- 2026-04-01: Repo catalog truth is now synced at `36` agents, `68` commands, and `142` skills across the tracked English and zh-CN docs.
+- 2026-04-01: Legacy emoji and non-essential symbol usage in docs, scripts, and tests was normalized to keep the unicode-safety lane green without weakening the check itself.
+- 2026-04-01: The remaining self-contained piece of `#834`, `docs/zh-CN/skills/browser-qa/SKILL.md`, was ported directly into the repo. After commit, `#834` should be closed as superseded-by-direct-port.
+- 2026-04-01: Content skill cleanup started with `content-engine`, `crosspost`, `article-writing`, and `investor-outreach`. The new direction is source-first voice capture, explicit anti-trope bans, and no forced platform persona shifts.
+- 2026-04-01: `node scripts/ci/check-unicode-safety.js --write` sanitized the remaining emoji-bearing Markdown files, including several `remotion-video-creation` rule docs and an old local plan note.
+- 2026-04-01: Core English repo surfaces were shifted to a skills-first posture. README, AGENTS, plugin metadata, and contributor instructions now treat `skills/` as canonical and `commands/` as legacy slash-entry compatibility during migration.
+- 2026-04-01: Follow-up bundle cleanup closed `#1080` and `#1079`, which were generated `.claude/` bundle PRs duplicating command-first scaffolding instead of shipping canonical ECC source changes.
+- 2026-04-01: Ported the useful core of `#1078` directly into `main`, but tightened the implementation so legacy no-id hook installs deduplicate cleanly on the first reinstall instead of the second. Added stable hook ids to `hooks/hooks.json`, semantic fallback aliases in `mergeHookEntries()`, and a regression test covering upgrade from pre-id settings.
+- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now hold the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification.
+- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale.
+- 2026-04-01: Added the first connected-workflow operator lane as ECC-native skills instead of leaving the surface as raw plugins or APIs: `workspace-surface-audit`, `customer-billing-ops`, `project-flow-ops`, and `google-workspace-ops`. These are tracked under the new `operator-workflows` install module.

View File

@@ -98,21 +98,21 @@ Write to `gan-harness/generator-state.md` after each iteration:
The Evaluator will specifically penalize these patterns. **Avoid them:**
-- ❌ Generic gradient backgrounds (#667eea #764ba2 is an instant tell)
+- Avoid generic gradient backgrounds (#667eea -> #764ba2 is an instant tell)
-- ❌ Excessive rounded corners on everything
+- Avoid excessive rounded corners on everything
-- ❌ Stock hero sections with "Welcome to [App Name]"
+- Avoid stock hero sections with "Welcome to [App Name]"
-- ❌ Default Material UI / Shadcn themes without customization
+- Avoid default Material UI / Shadcn themes without customization
-- ❌ Placeholder images from unsplash/placeholder services
+- Avoid placeholder images from unsplash/placeholder services
-- ❌ Generic card grids with identical layouts
+- Avoid generic card grids with identical layouts
-- "AI-generated" decorative SVG patterns
+- Avoid "AI-generated" decorative SVG patterns
**Instead, aim for:**
-- ✅ A specific, opinionated color palette (follow the spec)
+- Use a specific, opinionated color palette (follow the spec)
-- ✅ Thoughtful typography hierarchy (different weights, sizes for different content)
+- Use thoughtful typography hierarchy (different weights, sizes for different content)
-- ✅ Custom layouts that match the content (not generic grids)
+- Use custom layouts that match the content (not generic grids)
-- ✅ Meaningful animations tied to user actions (not decoration)
+- Use meaningful animations tied to user actions (not decoration)
-- ✅ Real empty states with personality
+- Use real empty states with personality
-- ✅ Error states that help the user (not just "Something went wrong")
+- Use error states that help the user (not just "Something went wrong")
## Interaction with Evaluator

View File

@@ -1,51 +1,23 @@
--- ---
description: Start NanoClaw v2 — ECC's persistent, zero-dependency REPL with model routing, skill hot-load, branching, compaction, export, and metrics. description: Legacy slash-entry shim for the nanoclaw-repl skill. Prefer the skill directly.
--- ---
# Claw Command # Claw Command (Legacy Shim)
Start an interactive AI agent session with persistent markdown history and operational controls. Use this only if you still reach for `/claw` from muscle memory. The maintained implementation lives in `skills/nanoclaw-repl/SKILL.md`.
## Usage ## Canonical Surface
```bash - Prefer the `nanoclaw-repl` skill directly.
node scripts/claw.js - Keep this file only as a compatibility entry point while command-first usage is retired.
```
Or via npm: ## Arguments
```bash `$ARGUMENTS`
npm run claw
```
## Environment Variables ## Delegation
| Variable | Default | Description | Apply the `nanoclaw-repl` skill and keep the response focused on operating or extending `scripts/claw.js`.
|----------|---------|-------------| - If the user wants to run it, use `node scripts/claw.js` or `npm run claw`.
| `CLAW_SESSION` | `default` | Session name (alphanumeric + hyphens) | - If the user wants to extend it, preserve the zero-dependency and markdown-backed session model.
| `CLAW_SKILLS` | *(empty)* | Comma-separated skills loaded at startup | - If the request is really about long-running orchestration rather than NanoClaw itself, redirect to `dmux-workflows` or `autonomous-agent-harness`.
| `CLAW_MODEL` | `sonnet` | Default model for the session |
## REPL Commands
```text
/help Show help
/clear Clear current session history
/history Print full conversation history
/sessions List saved sessions
/model [name] Show/set model
/load <skill-name> Hot-load a skill into context
/branch <session-name> Branch current session
/search <query> Search query across sessions
/compact Compact old turns, keep recent context
/export <md|json|txt> [path] Export session
/metrics Show session metrics
exit Quit
```
## Notes
- NanoClaw remains zero-dependency.
- Sessions are stored at `~/.claude/claw/<session>.md`.
- Compaction keeps the most recent turns and writes a compaction header.
- Export supports markdown, JSON turns, and plain text.
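For anyone still launching the REPL directly, a minimal sketch assembled from the environment variables documented in the removed command body (the session and skill names below are placeholders, not required values):

```bash
# Start a named NanoClaw session with two skills preloaded and an explicit model,
# then drop into the interactive REPL.
CLAW_SESSION=docs-refactor \
CLAW_SKILLS=article-writing,content-engine \
CLAW_MODEL=sonnet \
node scripts/claw.js   # or: npm run claw
```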

View File

@@ -219,10 +219,10 @@ Create review artifact at `.claude/PRPs/reviews/pr-<NUMBER>-review.md`:
| Check | Result |
|---|---|
-| Type check | Pass / Fail / ⏭️ Skipped |
+| Type check | Pass / Fail / Skipped |
-| Lint | ✅ / ❌ / ⏭️ |
+| Lint | Pass / Fail / Skipped |
-| Tests | ✅ / ❌ / ⏭️ |
+| Tests | Pass / Fail / Skipped |
-| Build | ✅ / ❌ / ⏭️ |
+| Build | Pass / Fail / Skipped |
## Files Reviewed
<list of files with change type: Added/Modified/Deleted>

View File

@@ -1,29 +1,23 @@
--- ---
description: Analyze context window usage across agents, skills, MCP servers, and rules to find optimization opportunities. Helps reduce token overhead and avoid performance warnings. description: Legacy slash-entry shim for the context-budget skill. Prefer the skill directly.
--- ---
# Context Budget Optimizer # Context Budget Optimizer (Legacy Shim)
Analyze your Claude Code setup's context window consumption and produce actionable recommendations to reduce token overhead. Use this only if you still invoke `/context-budget`. The maintained workflow lives in `skills/context-budget/SKILL.md`.
## Usage ## Canonical Surface
``` - Prefer the `context-budget` skill directly.
/context-budget [--verbose] - Keep this file only as a compatibility entry point.
```
- Default: summary with top recommendations ## Arguments
- `--verbose`: full breakdown per component
$ARGUMENTS $ARGUMENTS
## What to Do ## Delegation
Run the **context-budget** skill (`skills/context-budget/SKILL.md`) with the following inputs: Apply the `context-budget` skill.
- Pass through `--verbose` if the user supplied it.
1. Pass `--verbose` flag if present in `$ARGUMENTS` - Assume a 200K context window unless the user specified otherwise.
2. Assume a 200K context window (Claude Sonnet default) unless the user specifies otherwise - Return the skill's inventory, issue detection, and prioritized savings report without re-implementing the scan here.
3. Follow the skill's four phases: Inventory → Classify → Detect Issues → Report
4. Output the formatted Context Budget Report to the user
The skill handles all scanning logic, token estimation, issue detection, and report formatting.

View File

@@ -1,92 +1,23 @@
--- ---
description: Orchestrate parallel Claude Code agents via Claude DevFleet — plan projects from natural language, dispatch agents in isolated worktrees, monitor progress, and read structured reports. description: Legacy slash-entry shim for the claude-devfleet skill. Prefer the skill directly.
--- ---
# DevFleet — Multi-Agent Orchestration # DevFleet (Legacy Shim)
Orchestrate parallel Claude Code agents via Claude DevFleet. Each agent runs in an isolated git worktree with full tooling. Use this only if you still call `/devfleet`. The maintained workflow lives in `skills/claude-devfleet/SKILL.md`.
Requires the DevFleet MCP server: `claude mcp add devfleet --transport http http://localhost:18801/mcp` ## Canonical Surface
## Flow - Prefer the `claude-devfleet` skill directly.
- Keep this file only as a compatibility entry point while command-first usage is retired.
``` ## Arguments
User describes project
→ plan_project(prompt) → mission DAG with dependencies
→ Show plan, get approval
→ dispatch_mission(M1) → Agent spawns in worktree
→ M1 completes → auto-merge → M2 auto-dispatches (depends_on M1)
→ M2 completes → auto-merge
→ get_report(M2) → files_changed, what_done, errors, next_steps
→ Report summary to user
```
## Workflow `$ARGUMENTS`
1. **Plan the project** from the user's description: ## Delegation
``` Apply the `claude-devfleet` skill.
mcp__devfleet__plan_project(prompt="<user's description>") - Plan from the user's description, show the DAG, and get approval before dispatch unless the user already said to proceed.
``` - Prefer polling status over blocking waits for long missions.
- Report mission IDs, files changed, failures, and next steps from structured mission reports.
This returns a project with chained missions. Show the user:
- Project name and ID
- Each mission: title, type, dependencies
- The dependency DAG (which missions block which)
2. **Wait for user approval** before dispatching. Show the plan clearly.
3. **Dispatch the first mission** (the one with empty `depends_on`):
```
mcp__devfleet__dispatch_mission(mission_id="<first_mission_id>")
```
The remaining missions auto-dispatch as their dependencies complete (because `plan_project` creates them with `auto_dispatch=true`). When manually creating missions with `create_mission`, you must explicitly set `auto_dispatch=true` for this behavior.
4. **Monitor progress** — check what's running:
```
mcp__devfleet__get_dashboard()
```
Or check a specific mission:
```
mcp__devfleet__get_mission_status(mission_id="<id>")
```
Prefer polling with `get_mission_status` over `wait_for_mission` for long-running missions, so the user sees progress updates.
5. **Read the report** for each completed mission:
```
mcp__devfleet__get_report(mission_id="<mission_id>")
```
Call this for every mission that reached a terminal state. Reports contain: files_changed, what_done, what_open, what_tested, what_untested, next_steps, errors_encountered.
## All Available Tools
| Tool | Purpose |
|------|---------|
| `plan_project(prompt)` | AI breaks description into chained missions with `auto_dispatch=true` |
| `create_project(name, path?, description?)` | Create a project manually, returns `project_id` |
| `create_mission(project_id, title, prompt, depends_on?, auto_dispatch?)` | Add a mission. `depends_on` is a list of mission ID strings. |
| `dispatch_mission(mission_id, model?, max_turns?)` | Start an agent |
| `cancel_mission(mission_id)` | Stop a running agent |
| `wait_for_mission(mission_id, timeout_seconds?)` | Block until done (prefer polling for long tasks) |
| `get_mission_status(mission_id)` | Check progress without blocking |
| `get_report(mission_id)` | Read structured report |
| `get_dashboard()` | System overview |
| `list_projects()` | Browse projects |
| `list_missions(project_id, status?)` | List missions |
## Guidelines
- Always confirm the plan before dispatching unless the user said "go ahead"
- Include mission titles and IDs when reporting status
- If a mission fails, read its report to understand errors before retrying
- Agent concurrency is configurable (default: 3). Excess missions queue and auto-dispatch as slots free up. Check `get_dashboard()` for slot availability.
- Dependencies form a DAG — never create circular dependencies
- Each agent auto-merges its worktree on completion. If a merge conflict occurs, the changes remain on the worktree branch for manual resolution.
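
The tool names in the table come straight from the server; everything else in the sketch below is illustrative. `callTool` stands in for whatever MCP client invokes the devfleet server, and the mission and status field shapes are assumptions based on the report fields listed above.

```typescript
// Stand-in for a generic MCP client call; not a real DevFleet SDK.
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

async function runProject(prompt: string) {
  const project = await callTool("plan_project", { prompt });

  // Dispatch the root mission only; plan_project marks the rest auto_dispatch=true.
  const root = project.missions.find((m: any) => m.depends_on.length === 0);
  await callTool("dispatch_mission", { mission_id: root.id });

  for (const mission of project.missions) {
    let status;
    do {
      await new Promise(resolve => setTimeout(resolve, 30_000)); // poll, don't block
      status = await callTool("get_mission_status", { mission_id: mission.id });
    } while (!["completed", "failed", "cancelled"].includes(status.state)); // assumed state names

    const report = await callTool("get_report", { mission_id: mission.id });
    console.log(mission.title, report.files_changed, report.errors_encountered);
  }
}
```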

View File

@@ -1,31 +1,23 @@
--- ---
description: Look up current documentation for a library or topic via Context7. description: Legacy slash-entry shim for the documentation-lookup skill. Prefer the skill directly.
--- ---
# /docs # Docs Command (Legacy Shim)
## Purpose Use this only if you still reach for `/docs`. The maintained workflow lives in `skills/documentation-lookup/SKILL.md`.
Look up up-to-date documentation for a library, framework, or API and return a summarized answer with relevant code snippets. Uses the Context7 MCP (resolve-library-id and query-docs) so answers reflect current docs, not training data. ## Canonical Surface
## Usage - Prefer the `documentation-lookup` skill directly.
- Keep this file only as a compatibility entry point.
``` ## Arguments
/docs [library name] [question]
```
Use quotes for multi-word arguments so they are parsed as a single token. Example: `/docs "Next.js" "How do I configure middleware?"` `$ARGUMENTS`
If library or question is omitted, prompt the user for: ## Delegation
1. The library or product name (e.g. Next.js, Prisma, Supabase).
2. The specific question or task (e.g. "How do I set up middleware?", "Auth methods").
## Workflow Apply the `documentation-lookup` skill.
- If the library or the question is missing, ask for the missing part.
1. **Resolve library ID** — Call the Context7 tool `resolve-library-id` with the library name and the user's question to get a Context7-compatible library ID (e.g. `/vercel/next.js`). - Use live documentation through Context7 instead of training data.
2. **Query docs** — Call `query-docs` with that library ID and the user's question. - Return only the current answer and the minimum code/example surface needed.
3. **Summarize** — Return a concise answer and include relevant code examples from the fetched documentation. Mention the library (and version if relevant).
## Output
The user receives a short, accurate answer backed by current docs, plus any code snippets that help. If Context7 is not available, say so and answer from training data with a note that docs may be outdated.
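
The two Context7 calls chain together in one short flow. The sketch below assumes a generic `callTool` wrapper and guesses at argument key names, so treat it as shape only.

```typescript
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// Example: /docs "Next.js" "How do I configure middleware?"
async function lookupDocs(library: string, question: string) {
  // resolve-library-id returns a Context7-compatible ID such as "/vercel/next.js"
  const resolved = await callTool("resolve-library-id", { libraryName: library, query: question });
  const docs = await callTool("query-docs", { libraryId: resolved.libraryId, query: question });
  return docs; // summarize, cite the library (and version if relevant), include key snippets
}
```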

View File

@@ -1,123 +1,26 @@
--- ---
description: Generate and run end-to-end tests with Playwright. Creates test journeys, runs tests, captures screenshots/videos/traces, and uploads artifacts. description: Legacy slash-entry shim for the e2e-testing skill. Prefer the skill directly.
--- ---
# E2E Command # E2E Command (Legacy Shim)
This command invokes the **e2e-runner** agent to generate, maintain, and execute end-to-end tests using Playwright. Use this only if you still invoke `/e2e`. The maintained workflow lives in `skills/e2e-testing/SKILL.md`.
## What This Command Does ## Canonical Surface
1. **Generate Test Journeys** - Create Playwright tests for user flows - Prefer the `e2e-testing` skill directly.
2. **Run E2E Tests** - Execute tests across browsers - Keep this file only as a compatibility entry point.
3. **Capture Artifacts** - Screenshots, videos, traces on failures
4. **Upload Results** - HTML reports and JUnit XML
5. **Identify Flaky Tests** - Quarantine unstable tests
## When to Use ## Arguments
Use `/e2e` when: `$ARGUMENTS`
- Testing critical user journeys (login, trading, payments)
- Verifying multi-step flows work end-to-end
- Testing UI interactions and navigation
- Validating integration between frontend and backend
- Preparing for production deployment
## How It Works ## Delegation
The e2e-runner agent will: Apply the `e2e-testing` skill.
- Generate or update Playwright coverage for the requested user flow.
1. **Analyze user flow** and identify test scenarios - Run only the relevant tests unless the user explicitly asked for the entire suite.
2. **Generate Playwright test** using Page Object Model pattern - Capture the usual artifacts and report failures, flake risk, and next fixes without duplicating the full skill body here.
3. **Run tests** across multiple browsers (Chrome, Firefox, Safari)
4. **Capture failures** with screenshots, videos, and traces
5. **Generate report** with results and artifacts
6. **Identify flaky tests** and recommend fixes
## Example Usage
```
User: /e2e Test the market search and view flow
Agent (e2e-runner):
# E2E Test Generation: Market Search and View Flow
## Test Scenario Identified
**User Journey:** Market Search → View Results → Click Market → View Details
**Test Coverage:**
1. Navigate to markets page
2. Perform semantic search
3. Verify search results
4. Click on first result
5. Verify market details page loads
6. Verify chart renders
## Generated Test Code
```typescript
// tests/e2e/markets/search-and-view.spec.ts
import { test, expect } from '@playwright/test'
import { MarketsPage } from '../../pages/MarketsPage'
import { MarketDetailsPage } from '../../pages/MarketDetailsPage'
test.describe('Market Search and View Flow', () => {
test('user can search markets and view details', async ({ page }) => {
// 1. Navigate to markets page
const marketsPage = new MarketsPage(page)
await marketsPage.goto()
// Verify page loaded
await expect(page).toHaveTitle(/Markets/)
await expect(page.locator('h1')).toContainText('Markets')
// 2. Perform semantic search
await marketsPage.searchMarkets('election')
// Wait for API response
await page.waitForResponse(resp =>
resp.url().includes('/api/markets/search') && resp.status() === 200
)
// 3. Verify search results
const marketCards = marketsPage.marketCards
await expect(marketCards.first()).toBeVisible()
const resultCount = await marketCards.count()
expect(resultCount).toBeGreaterThan(0)
// Take screenshot of search results
await page.screenshot({ path: 'artifacts/search-results.png' })
// 4. Click on first result
const firstMarketTitle = await marketCards.first().textContent()
await marketCards.first().click()
// 5. Verify market details page loads
await expect(page).toHaveURL(/\/markets\/[a-z0-9-]+/)
const detailsPage = new MarketDetailsPage(page)
await expect(detailsPage.marketName).toBeVisible()
await expect(detailsPage.marketDescription).toBeVisible()
// 6. Verify chart renders
await expect(detailsPage.priceChart).toBeVisible()
// Verify market name matches
const detailsTitle = await detailsPage.marketName.textContent()
expect(detailsTitle?.toLowerCase()).toContain(
firstMarketTitle?.toLowerCase().substring(0, 20) || ''
)
// Take screenshot of market details
await page.screenshot({ path: 'artifacts/market-details.png' })
})
test('search with no results shows empty state', async ({ page }) => {
const marketsPage = new MarketsPage(page)
await marketsPage.goto()
// Search for non-existent market
await marketsPage.searchMarkets('xyznonexistentmarket123456') await marketsPage.searchMarkets('xyznonexistentmarket123456')
// Verify empty state // Verify empty state

View File

@@ -1,120 +1,23 @@
# Eval Command ---
description: Legacy slash-entry shim for the eval-harness skill. Prefer the skill directly.
---
Manage eval-driven development workflow. # Eval Command (Legacy Shim)
## Usage Use this only if you still invoke `/eval`. The maintained workflow lives in `skills/eval-harness/SKILL.md`.
`/eval [define|check|report|list] [feature-name]` ## Canonical Surface
## Define Evals - Prefer the `eval-harness` skill directly.
- Keep this file only as a compatibility entry point.
`/eval define feature-name`
Create a new eval definition:
1. Create `.claude/evals/feature-name.md` with template:
```markdown
## EVAL: feature-name
Created: $(date)
### Capability Evals
- [ ] [Description of capability 1]
- [ ] [Description of capability 2]
### Regression Evals
- [ ] [Existing behavior 1 still works]
- [ ] [Existing behavior 2 still works]
### Success Criteria
- pass@3 > 90% for capability evals
- pass^3 = 100% for regression evals
```
2. Prompt user to fill in specific criteria
## Check Evals
`/eval check feature-name`
Run evals for a feature:
1. Read eval definition from `.claude/evals/feature-name.md`
2. For each capability eval:
- Attempt to verify criterion
- Record PASS/FAIL
- Log attempt in `.claude/evals/feature-name.log`
3. For each regression eval:
- Run relevant tests
- Compare against baseline
- Record PASS/FAIL
4. Report current status:
```
EVAL CHECK: feature-name
========================
Capability: X/Y passing
Regression: X/Y passing
Status: IN PROGRESS / READY
```
## Report Evals
`/eval report feature-name`
Generate comprehensive eval report:
```
EVAL REPORT: feature-name
=========================
Generated: $(date)
CAPABILITY EVALS
----------------
[eval-1]: PASS (pass@1)
[eval-2]: PASS (pass@2) - required retry
[eval-3]: FAIL - see notes
REGRESSION EVALS
----------------
[test-1]: PASS
[test-2]: PASS
[test-3]: PASS
METRICS
-------
Capability pass@1: 67%
Capability pass@3: 100%
Regression pass^3: 100%
NOTES
-----
[Any issues, edge cases, or observations]
RECOMMENDATION
--------------
[SHIP / NEEDS WORK / BLOCKED]
```
## List Evals
`/eval list`
Show all eval definitions:
```
EVAL DEFINITIONS
================
feature-auth [3/5 passing] IN PROGRESS
feature-search [5/5 passing] READY
feature-export [0/4 passing] NOT STARTED
```
## Arguments ## Arguments
$ARGUMENTS: `$ARGUMENTS`
- `define <name>` - Create new eval definition
- `check <name>` - Run and check evals ## Delegation
- `report <name>` - Generate full report
- `list` - Show all evals Apply the `eval-harness` skill.
- `clean` - Remove old eval logs (keeps last 10 runs) - Support the same user intents as before: define, check, report, list, and cleanup.
- Keep evals capability-first, regression-backed, and evidence-based.
- Use the skill as the canonical evaluator instead of maintaining a separate command-specific playbook.
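
The pass@3 and pass^3 thresholds the original command used reduce to a small calculation over attempt logs. A sketch, assuming each eval records an ordered list of boolean attempts:

```typescript
interface EvalAttempts { name: string; attempts: boolean[] } // true = that attempt passed

// pass@k: at least one of the first k attempts passed (capability evals)
const passAtK = (e: EvalAttempts, k: number) => e.attempts.slice(0, k).some(Boolean);

// pass^k: every one of the first k attempts passed (regression evals must not flake)
const passPowK = (e: EvalAttempts, k: number) =>
  e.attempts.length >= k && e.attempts.slice(0, k).every(Boolean);

function evalMetrics(capability: EvalAttempts[], regression: EvalAttempts[]) {
  const rate = (flags: boolean[]) => (100 * flags.filter(Boolean).length) / flags.length;
  return {
    capabilityPassAt3: rate(capability.map(e => passAtK(e, 3))),   // target: > 90%
    regressionPassPow3: rate(regression.map(e => passPowK(e, 3))), // target: 100%
  };
}
```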

View File

@@ -1,123 +1,27 @@
--- ---
description: Sequential and tmux/worktree orchestration guidance for multi-agent workflows. description: Legacy slash-entry shim for dmux-workflows and autonomous-agent-harness. Prefer the skills directly.
--- ---
# Orchestrate Command # Orchestrate Command (Legacy Shim)
Sequential agent workflow for complex tasks. Use this only if you still invoke `/orchestrate`. The maintained orchestration guidance lives in `skills/dmux-workflows/SKILL.md` and `skills/autonomous-agent-harness/SKILL.md`.
## Usage ## Canonical Surface
`/orchestrate [workflow-type] [task-description]` - Prefer `dmux-workflows` for parallel panes, worktrees, and multi-agent splits.
- Prefer `autonomous-agent-harness` for longer-running loops, governance, scheduling, and control-plane style execution.
- Keep this file only as a compatibility entry point.
## Workflow Types ## Arguments
### feature `$ARGUMENTS`
Full feature implementation workflow:
```
planner -> tdd-guide -> code-reviewer -> security-reviewer
```
### bugfix ## Delegation
Bug investigation and fix workflow:
```
planner -> tdd-guide -> code-reviewer
```
### refactor Apply the orchestration skills instead of maintaining a second workflow spec here.
Safe refactoring workflow: - Start with `dmux-workflows` for split/parallel execution.
``` - Pull in `autonomous-agent-harness` when the user is really asking for persistent loops, governance, or operator-layer behavior.
architect -> code-reviewer -> tdd-guide - Keep handoffs structured, but let the skills define the maintained sequencing rules.
```
### security
Security-focused review:
```
security-reviewer -> code-reviewer -> architect
```
## Execution Pattern
For each agent in the workflow:
1. **Invoke agent** with context from previous agent
2. **Collect output** as structured handoff document
3. **Pass to next agent** in chain
4. **Aggregate results** into final report
## Handoff Document Format
Between agents, create handoff document:
```markdown
## HANDOFF: [previous-agent] -> [next-agent]
### Context
[Summary of what was done]
### Findings
[Key discoveries or decisions]
### Files Modified
[List of files touched]
### Open Questions
[Unresolved items for next agent]
### Recommendations
[Suggested next steps]
```
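
The workflow types and handoff format above map onto a small data structure. A sketch for orientation only, since the maintained sequencing rules now live in the referenced skills:

```typescript
// Agent chains from the legacy workflow types (feature, bugfix, refactor, security).
const WORKFLOWS: Record<string, string[]> = {
  feature:  ["planner", "tdd-guide", "code-reviewer", "security-reviewer"],
  bugfix:   ["planner", "tdd-guide", "code-reviewer"],
  refactor: ["architect", "code-reviewer", "tdd-guide"],
  security: ["security-reviewer", "code-reviewer", "architect"],
};

// Handoff document skeleton passed between agents in a chain.
function handoff(from: string, to: string, context: string, files: string[]): string {
  return [
    `## HANDOFF: ${from} -> ${to}`,
    `### Context\n${context}`,
    `### Files Modified\n${files.map(f => `- ${f}`).join("\n")}`,
    "### Open Questions\n(none yet)",
  ].join("\n\n");
}
```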
## Example: Feature Workflow
```
/orchestrate feature "Add user authentication"
```
Executes:
1. **Planner Agent**
- Analyzes requirements
- Creates implementation plan
- Identifies dependencies
- Output: `HANDOFF: planner -> tdd-guide`
2. **TDD Guide Agent**
- Reads planner handoff
- Writes tests first
- Implements to pass tests
- Output: `HANDOFF: tdd-guide -> code-reviewer`
3. **Code Reviewer Agent**
- Reviews implementation
- Checks for issues
- Suggests improvements
- Output: `HANDOFF: code-reviewer -> security-reviewer`
4. **Security Reviewer Agent**
- Security audit
- Vulnerability check
- Final approval
- Output: Final Report
## Final Report Format
```
ORCHESTRATION REPORT
====================
Workflow: feature
Task: Add user authentication
Agents: planner -> tdd-guide -> code-reviewer -> security-reviewer
SUMMARY
-------
[One paragraph summary]
AGENT OUTPUTS
-------------
Planner: [summary]
TDD Guide: [summary]
Code Reviewer: [summary]
Security Reviewer: [summary] Security Reviewer: [summary]
FILES CHANGED FILES CHANGED

View File

@@ -1,38 +1,23 @@
--- ---
description: Analyze a draft prompt and output an optimized, ECC-enriched version ready to paste and run. Does NOT execute the task — outputs advisory analysis only. description: Legacy slash-entry shim for the prompt-optimizer skill. Prefer the skill directly.
--- ---
# /prompt-optimize # Prompt Optimize (Legacy Shim)
Analyze and optimize the following prompt for maximum ECC leverage. Use this only if you still invoke `/prompt-optimize`. The maintained workflow lives in `skills/prompt-optimizer/SKILL.md`.
## Your Task ## Canonical Surface
Apply the **prompt-optimizer** skill to the user's input below. Follow the 6-phase analysis pipeline: - Prefer the `prompt-optimizer` skill directly.
- Keep this file only as a compatibility entry point.
0. **Project Detection** — Read CLAUDE.md, detect tech stack from project files (package.json, go.mod, pyproject.toml, etc.) ## Arguments
1. **Intent Detection** — Classify the task type (new feature, bug fix, refactor, research, testing, review, documentation, infrastructure, design)
2. **Scope Assessment** — Evaluate complexity (TRIVIAL / LOW / MEDIUM / HIGH / EPIC), using codebase size as signal if detected
3. **ECC Component Matching** — Map to specific skills, commands, agents, and model tier
4. **Missing Context Detection** — Identify gaps. If 3+ critical items missing, ask the user to clarify before generating
5. **Workflow & Model** — Determine lifecycle position, recommend model tier, and split into multiple prompts if HIGH/EPIC
## Output Requirements `$ARGUMENTS`
- Present diagnosis, recommended ECC components, and an optimized prompt using the Output Format from the prompt-optimizer skill ## Delegation
- Provide both **Full Version** (detailed) and **Quick Version** (compact, varied by intent type)
- Respond in the same language as the user's input
- The optimized prompt must be complete and ready to copy-paste into a new session
- End with a footer offering adjustment or a clear next step for starting a separate execution request
## CRITICAL Apply the `prompt-optimizer` skill.
- Keep it advisory-only: optimize the prompt, do not execute the task.
Do NOT execute the user's task. Output ONLY the analysis and optimized prompt. - Return the recommended ECC components plus a ready-to-run prompt.
If the user asks for direct execution, explain that `/prompt-optimize` only produces advisory output and tell them to start a normal task request instead. - If the user actually wants direct execution, say so and tell them to make a normal task request instead of staying inside the shim.
Note: `blueprint` is a **skill**, not a slash command. Write "Use the blueprint skill"
instead of presenting it as a `/...` command.
## User Input
$ARGUMENTS

View File

@@ -115,7 +115,7 @@ For each task in **Step-by-Step Tasks**:
``` ```
If type-check fails → fix the error before moving to the next file. If type-check fails → fix the error before moving to the next file.
4. **Track progress** — Log: ` Task N: [task name] — complete` 4. **Track progress** — Log: `[done] Task N: [task name] — complete`
### Handling Deviations ### Handling Deviations
@@ -234,18 +234,18 @@ Write report to `.claude/PRPs/reports/{plan-name}-report.md`:
| # | Task | Status | Notes | | # | Task | Status | Notes |
|---|---|---|---| |---|---|---|---|
| 1 | [task name] | Complete | | | 1 | [task name] | [done] Complete | |
| 2 | [task name] | Complete | Deviated — [reason] | | 2 | [task name] | [done] Complete | Deviated — [reason] |
## Validation Results ## Validation Results
| Level | Status | Notes | | Level | Status | Notes |
|---|---|---| |---|---|---|
| Static Analysis | Pass | | | Static Analysis | [done] Pass | |
| Unit Tests | Pass | N tests written | | Unit Tests | [done] Pass | N tests written |
| Build | Pass | | | Build | [done] Pass | |
| Integration | Pass | or N/A | | Integration | [done] Pass | or N/A |
| Edge Cases | Pass | | | Edge Cases | [done] Pass | |
## Files Changed ## Files Changed
@@ -297,17 +297,17 @@ Report to user:
- **Plan**: [plan file path] → archived to completed/ - **Plan**: [plan file path] → archived to completed/
- **Branch**: [current branch name] - **Branch**: [current branch name]
- **Status**: All tasks complete - **Status**: [done] All tasks complete
### Validation Summary ### Validation Summary
| Check | Status | | Check | Status |
|---|---| |---|---|
| Type Check | | | Type Check | [done] |
| Lint | | | Lint | [done] |
| Tests | (N written) | | Tests | [done] (N written) |
| Build | | | Build | [done] |
| Integration | or N/A | | Integration | [done] or N/A |
### Files Changed ### Files Changed
- [N] files created, [M] files updated - [N] files created, [M] files updated
@@ -322,8 +322,8 @@ Report to user:
### PRD Progress (if applicable) ### PRD Progress (if applicable)
| Phase | Status | | Phase | Status |
|---|---| |---|---|
| Phase 1 | Complete | | Phase 1 | [done] Complete |
| Phase 2 | ⏳ Next | | Phase 2 | [next] |
| ... | ... | | ... | ... |
> Next step: Run `/prp-pr` to create a pull request, or `/code-review` to review changes first. > Next step: Run `/prp-pr` to create a pull request, or `/code-review` to review changes first.

View File

@@ -1,11 +1,20 @@
--- ---
description: "Scan skills to extract cross-cutting principles and distill them into rules" description: Legacy slash-entry shim for the rules-distill skill. Prefer the skill directly.
--- ---
# /rules-distill — Distill Principles from Skills into Rules # Rules Distill (Legacy Shim)
Scan installed skills, extract cross-cutting principles, and distill them into rules. Use this only if you still invoke `/rules-distill`. The maintained workflow lives in `skills/rules-distill/SKILL.md`.
## Process ## Canonical Surface
Follow the full workflow defined in the `rules-distill` skill. - Prefer the `rules-distill` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
`$ARGUMENTS`
## Delegation
Apply the `rules-distill` skill and follow its inventory, cross-read, and verdict workflow instead of duplicating that logic here.

View File

@@ -1,123 +1,26 @@
--- ---
description: Enforce test-driven development workflow. Scaffold interfaces, generate tests FIRST, then implement minimal code to pass. Ensure 80%+ coverage. description: Legacy slash-entry shim for the tdd-workflow skill. Prefer the skill directly.
--- ---
# TDD Command # TDD Command (Legacy Shim)
This command invokes the **tdd-guide** agent to enforce test-driven development methodology. Use this only if you still invoke `/tdd`. The maintained workflow lives in `skills/tdd-workflow/SKILL.md`.
## What This Command Does ## Canonical Surface
1. **Scaffold Interfaces** - Define types/interfaces first - Prefer the `tdd-workflow` skill directly.
2. **Generate Tests First** - Write failing tests (RED) - Keep this file only as a compatibility entry point.
3. **Implement Minimal Code** - Write just enough to pass (GREEN)
4. **Refactor** - Improve code while keeping tests green (REFACTOR)
5. **Verify Coverage** - Ensure 80%+ test coverage
## When to Use ## Arguments
Use `/tdd` when: `$ARGUMENTS`
- Implementing new features
- Adding new functions/components
- Fixing bugs (write test that reproduces bug first)
- Refactoring existing code
- Building critical business logic
## How It Works ## Delegation
The tdd-guide agent will: Apply the `tdd-workflow` skill.
- Stay strict on RED -> GREEN -> REFACTOR.
1. **Define interfaces** for inputs/outputs - Keep tests first, coverage explicit, and checkpoint evidence clear.
2. **Write tests that will FAIL** (because code doesn't exist yet) - Use the skill as the maintained TDD body instead of duplicating the playbook here.
3. **Run tests** and verify they fail for the right reason
4. **Write minimal implementation** to make tests pass
5. **Run tests** and verify they pass
6. **Refactor** code while keeping tests green
7. **Check coverage** and add more tests if below 80%
## TDD Cycle
```
RED → GREEN → REFACTOR → REPEAT
RED: Write a failing test
GREEN: Write minimal code to pass
REFACTOR: Improve code, keep tests passing
REPEAT: Next feature/scenario
```
## Example Usage
```
User: /tdd I need a function to calculate market liquidity score
Agent (tdd-guide):
# TDD Session: Market Liquidity Score Calculator
## Step 1: Define Interface (SCAFFOLD)
```typescript
// lib/liquidity.ts
export interface MarketData {
totalVolume: number
bidAskSpread: number
activeTraders: number
lastTradeTime: Date
}
export function calculateLiquidityScore(market: MarketData): number {
// TODO: Implementation
throw new Error('Not implemented')
}
```
## Step 2: Write Failing Test (RED)
```typescript
// lib/liquidity.test.ts
import { calculateLiquidityScore } from './liquidity'
describe('calculateLiquidityScore', () => {
it('should return high score for liquid market', () => {
const market = {
totalVolume: 100000,
bidAskSpread: 0.01,
activeTraders: 500,
lastTradeTime: new Date()
}
const score = calculateLiquidityScore(market)
expect(score).toBeGreaterThan(80)
expect(score).toBeLessThanOrEqual(100)
})
it('should return low score for illiquid market', () => {
const market = {
totalVolume: 100,
bidAskSpread: 0.5,
activeTraders: 2,
lastTradeTime: new Date(Date.now() - 86400000) // 1 day ago
}
const score = calculateLiquidityScore(market)
expect(score).toBeLessThan(30)
expect(score).toBeGreaterThanOrEqual(0)
})
it('should handle edge case: zero volume', () => {
const market = {
totalVolume: 0,
bidAskSpread: 0,
activeTraders: 0,
lastTradeTime: new Date()
}
const score = calculateLiquidityScore(market)
expect(score).toBe(0)
})
}) })
``` ```
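
The shim example ends at the RED step. One minimal GREEN-phase implementation that would satisfy those three tests is sketched below; the weights and normalization are invented for illustration and are not a real liquidity model.

```typescript
// lib/liquidity.ts (GREEN phase, illustrative scoring only)
export interface MarketData {
  totalVolume: number
  bidAskSpread: number
  activeTraders: number
  lastTradeTime: Date
}

export function calculateLiquidityScore(market: MarketData): number {
  if (market.totalVolume <= 0) return 0
  // Normalize each signal to [0, 1]; weights are assumptions chosen to satisfy the tests above.
  const volume = Math.min(Math.log10(market.totalVolume) / 5, 1)   // ~100k volume saturates
  const spread = Math.max(0, 1 - market.bidAskSpread / 0.1)        // 0.01 spread scores 0.9
  const traders = Math.min(market.activeTraders / 500, 1)
  const hoursSinceTrade = (Date.now() - market.lastTradeTime.getTime()) / 3_600_000
  const recency = Math.max(0, 1 - hoursSinceTrade / 24)            // stale after 24h
  const score = 100 * (0.35 * volume + 0.25 * spread + 0.2 * traders + 0.2 * recency)
  return Math.max(0, Math.min(100, Math.round(score)))
}
```

Against the tests above this sketch scores roughly 98 for the liquid market and 14 for the illiquid one, which is the point of GREEN: just enough to pass, then refactor.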

View File

@@ -1,59 +1,23 @@
# Verification Command ---
description: Legacy slash-entry shim for the verification-loop skill. Prefer the skill directly.
---
Run comprehensive verification on current codebase state. # Verification Command (Legacy Shim)
## Instructions Use this only if you still invoke `/verify`. The maintained workflow lives in `skills/verification-loop/SKILL.md`.
Execute verification in this exact order: ## Canonical Surface
1. **Build Check** - Prefer the `verification-loop` skill directly.
- Run the build command for this project - Keep this file only as a compatibility entry point.
- If it fails, report errors and STOP
2. **Type Check**
- Run TypeScript/type checker
- Report all errors with file:line
3. **Lint Check**
- Run linter
- Report warnings and errors
4. **Test Suite**
- Run all tests
- Report pass/fail count
- Report coverage percentage
5. **Console.log Audit**
- Search for console.log in source files
- Report locations
6. **Git Status**
- Show uncommitted changes
- Show files modified since last commit
## Output
Produce a concise verification report:
```
VERIFICATION: [PASS/FAIL]
Build: [OK/FAIL]
Types: [OK/X errors]
Lint: [OK/X issues]
Tests: [X/Y passed, Z% coverage]
Secrets: [OK/X found]
Logs: [OK/X console.logs]
Ready for PR: [YES/NO]
```
If any critical issues, list them with fix suggestions.
## Arguments ## Arguments
$ARGUMENTS can be: `$ARGUMENTS`
- `quick` - Only build + types
- `full` - All checks (default) ## Delegation
- `pre-commit` - Checks relevant for commits
- `pre-pr` - Full checks plus security scan Apply the `verification-loop` skill.
- Choose the right verification depth for the user's requested mode.
- Run build, types, lint, tests, security/log checks, and diff review in the right order for the current repo.
- Report only the verdicts and blockers instead of maintaining a second verification checklist here.
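
For orientation, the shape of a verification pass is "run each check, aggregate verdicts". The commands below are placeholders; the skill detects the repo's real build, type, lint, and test tooling.

```typescript
import { spawnSync } from "node:child_process";

// Placeholder commands for illustration; swap in the project's actual tooling.
const CHECKS: Array<{ label: string; cmd: string; args: string[] }> = [
  { label: "Build", cmd: "npm", args: ["run", "build"] },
  { label: "Types", cmd: "npx", args: ["tsc", "--noEmit"] },
  { label: "Lint",  cmd: "npm", args: ["run", "lint"] },
  { label: "Tests", cmd: "npm", args: ["test"] },
];

const results = CHECKS.map(({ label, cmd, args }) => {
  const run = spawnSync(cmd, args, { encoding: "utf8", shell: process.platform === "win32" });
  return { label, ok: run.status === 0 };
});

const pass = results.every(r => r.ok);
console.log(`VERIFICATION: ${pass ? "PASS" : "FAIL"}`);
for (const r of results) console.log(`${r.label}: ${r.ok ? "OK" : "FAIL"}`);
console.log(`Ready for PR: ${pass ? "YES" : "NO"}`);
```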

View File

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions # Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** that provides 28 specialized agents, 116 skills, 59 commands, and automated hook workflows for software development. This is a **production-ready AI coding plugin** that provides 36 specialized agents, 147 skills, 68 commands, and automated hook workflows for software development.
**Version:** 1.9.0 **Version:** 1.9.0
@@ -146,9 +146,9 @@
## Project Structure ## Project Structure
``` ```
agents/ — 28 specialized subagents agents/ — 36 specialized subagents
skills/ — 115 workflow skills and domain knowledge skills/ — 147 workflow skills and domain knowledge
commands/ — 59 slash commands commands/ — 68 slash commands
hooks/ — trigger-based automation hooks/ — trigger-based automation
rules/ — always-apply guidelines (universal + per-language) rules/ — always-apply guidelines (universal + per-language)
scripts/ — cross-platform Node.js utilities scripts/ — cross-platform Node.js utilities

View File

@@ -209,7 +209,7 @@ npx ecc-install typescript
/plugin list everything-claude-code@everything-claude-code /plugin list everything-claude-code@everything-claude-code
``` ```
**Done!** You can now use 28 agents, 116 skills, and 59 commands. **Done!** You can now use 36 agents, 147 skills, and 68 commands.
*** ***
@@ -1094,9 +1094,9 @@ opencode
| Feature | Claude Code | OpenCode | Status | | Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------| |---------|-------------|----------|--------|
| Agents | PASS: 28 | PASS: 12 | **Claude Code leads** | | Agents | PASS: 36 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 59 | PASS: 31 | **Claude Code leads** | | Commands | PASS: 68 | PASS: 31 | **Claude Code leads** |
| Skills | PASS: 116 | PASS: 37 | **Claude Code leads** | | Skills | PASS: 147 | PASS: 37 | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 event types | **OpenCode has more!** | | Hooks | PASS: 8 event types | PASS: 11 event types | **OpenCode has more!** |
| Rules | PASS: 29 | PASS: 13 instructions | **Claude Code leads** | | Rules | PASS: 29 | PASS: 13 instructions | **Claude Code leads** |
| MCP servers | PASS: 14 | PASS: full | **Full parity** | | MCP servers | PASS: 14 | PASS: full | **Full parity** |
@@ -1206,9 +1206,9 @@ ECC is **the first plugin to get maximum leverage out of every major AI coding tool**. ECC is **the first plugin to get maximum leverage out of every major AI coding tool**.
| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode | | Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------| |---------|------------|------------|-----------|----------|
| **Agents** | 21 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 | | **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 52 | Shared | Instruction-based | 31 | | **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 102 | Shared | 10 (native format) | 37 | | **Skills** | 147 | Shared | 10 (native format) | 37 |
| **Hook events** | 8 types | 15 types | None yet | 11 types | | **Hook events** | 8 types | 15 types | None yet | 11 types |
| **Hook scripts** | 20+ scripts | 16 scripts (DRY adapters) | N/A | Plugin hooks | | **Hook scripts** | 20+ scripts | 16 scripts (DRY adapters) | N/A | Plugin hooks |
| **Rules** | 34 (universal + language) | 34 (YAML front matter) | Instruction-based | 13 instructions | | **Rules** | 34 (universal + language) | 34 (YAML front matter) | Instruction-based | 13 instructions |

View File

@@ -0,0 +1,81 @@
# Browser QA — Automated Visual Testing and Interaction Verification
## When to use
- After a feature deploys to staging / preview
- When UI behavior needs to be verified across pages
- Before release, to confirm layout, forms, and interactions actually work
- When reviewing PRs that touch the frontend
- When running accessibility audits and responsive testing
## How it works
Use a browser automation MCP (claude-in-chrome, Playwright, or Puppeteer) to interact with live pages the way a real user would.
### Phase 1: Smoke test
```
1. Open the target URL
2. Check for console errors (filter out noise: analytics scripts, third-party libraries)
3. Verify there are no 4xx / 5xx network requests
4. Screenshot above-the-fold content in desktop and mobile viewports
5. Check Core Web Vitals: LCP < 2.5s, CLS < 0.1, INP < 200ms
```
### Phase 2: Interaction tests
```
1. Click every navigation link and verify there are no dead links
2. Submit forms with valid data and verify the success state
3. Submit forms with invalid data and verify the error state
4. Test the auth flow: log in → protected page → log out
5. Test critical user journeys (checkout, onboarding, search)
```
### Phase 3: Visual regression
```
1. Screenshot key pages at 3 breakpoints (375px, 768px, 1440px)
2. Compare against baseline screenshots (if previously saved)
3. Flag layout shifts > 5px, missing elements, and content overflow
4. Check dark mode where applicable
```
### Phase 4: Accessibility
```
1. Run axe-core or an equivalent tool on every page
2. Flag WCAG AA violations (contrast, labels, focus order)
3. Verify keyboard navigation works end to end
4. Check screen-reader landmarks
```
## Examples
```markdown
## QA Report — [URL] — [timestamp]
### Smoke test
- Console errors: 0 critical, 2 warnings (analytics script noise)
- Network: all 200/304, no failed requests
- Core Web Vitals: LCP 1.2s, CLS 0.02, INP 89ms
### Interaction
- [done] Navigation links: 12/12 working
- [issue] Contact form: missing error state for invalid email
- [done] Auth flow: login / logout working
### Visual
- [issue] Hero section overflows at the 375px viewport
- [done] Dark mode: consistent across all pages
### Accessibility
- 2 AA violations: hero image missing alt text, footer link contrast too low
### Verdict: shippable after fixes (2 issues, 0 blockers)
```
## Integration
Works with any browser MCP:
- `mcp__claude-in-chrome__*` tools (recommended; drives your real Chrome directly)
- Playwright via `mcp__browserbase__*`
- Run Puppeteer scripts directly
Pair with `/canary-watch` for continuous post-release monitoring.
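
A minimal Playwright sketch of Phase 1, plus the 375px/768px/1440px screenshots from Phase 3; the URL, the noise filter, and the artifact paths are assumptions.

```typescript
import { test, expect } from '@playwright/test'

const BREAKPOINTS = [375, 768, 1440]

test('smoke: no console errors, no failed requests, pages render at each breakpoint', async ({ page }) => {
  const consoleErrors: string[] = []
  const failedRequests: string[] = []

  page.on('console', msg => {
    // Filter noise from analytics / third-party scripts (illustrative filter).
    if (msg.type() === 'error' && !/analytics|gtag/i.test(msg.text())) consoleErrors.push(msg.text())
  })
  page.on('response', res => {
    if (res.status() >= 400) failedRequests.push(`${res.status()} ${res.url()}`)
  })

  await page.goto(process.env.QA_URL ?? 'https://staging.example.com')

  for (const width of BREAKPOINTS) {
    await page.setViewportSize({ width, height: 900 })
    await page.screenshot({ path: `artifacts/smoke-${width}px.png` })
  }

  expect(consoleErrors).toHaveLength(0)
  expect(failedRequests).toHaveLength(0)
})
```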

View File

@@ -10,7 +10,8 @@
"command": "npx block-no-verify@1.1.2" "command": "npx block-no-verify@1.1.2"
} }
], ],
"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped" "description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped",
"id": "pre:bash:block-no-verify"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -20,7 +21,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/auto-tmux-dev.js\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/auto-tmux-dev.js\""
} }
], ],
"description": "Auto-start dev servers in tmux with directory-based session names" "description": "Auto-start dev servers in tmux with directory-based session names",
"id": "pre:bash:auto-tmux-dev"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -30,7 +32,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:tmux-reminder\" \"scripts/hooks/pre-bash-tmux-reminder.js\" \"strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:tmux-reminder\" \"scripts/hooks/pre-bash-tmux-reminder.js\" \"strict\""
} }
], ],
"description": "Reminder to use tmux for long-running commands" "description": "Reminder to use tmux for long-running commands",
"id": "pre:bash:tmux-reminder"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -40,7 +43,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:git-push-reminder\" \"scripts/hooks/pre-bash-git-push-reminder.js\" \"strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:git-push-reminder\" \"scripts/hooks/pre-bash-git-push-reminder.js\" \"strict\""
} }
], ],
"description": "Reminder before git push to review changes" "description": "Reminder before git push to review changes",
"id": "pre:bash:git-push-reminder"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -50,7 +54,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:commit-quality\" \"scripts/hooks/pre-bash-commit-quality.js\" \"strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:commit-quality\" \"scripts/hooks/pre-bash-commit-quality.js\" \"strict\""
} }
], ],
"description": "Pre-commit quality check: lint staged files, validate commit message format, detect console.log/debugger/secrets before committing" "description": "Pre-commit quality check: lint staged files, validate commit message format, detect console.log/debugger/secrets before committing",
"id": "pre:bash:commit-quality"
}, },
{ {
"matcher": "Write", "matcher": "Write",
@@ -60,7 +65,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:write:doc-file-warning\" \"scripts/hooks/doc-file-warning.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:write:doc-file-warning\" \"scripts/hooks/doc-file-warning.js\" \"standard,strict\""
} }
], ],
"description": "Doc file warning: warn about non-standard documentation files (exit code 0; warns only)" "description": "Doc file warning: warn about non-standard documentation files (exit code 0; warns only)",
"id": "pre:write:doc-file-warning"
}, },
{ {
"matcher": "Edit|Write", "matcher": "Edit|Write",
@@ -70,7 +76,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:edit-write:suggest-compact\" \"scripts/hooks/suggest-compact.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:edit-write:suggest-compact\" \"scripts/hooks/suggest-compact.js\" \"standard,strict\""
} }
], ],
"description": "Suggest manual compaction at logical intervals" "description": "Suggest manual compaction at logical intervals",
"id": "pre:edit-write:suggest-compact"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -82,7 +89,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Capture tool use observations for continuous learning" "description": "Capture tool use observations for continuous learning",
"id": "pre:observe:continuous-learning"
}, },
{ {
"matcher": "Bash|Write|Edit|MultiEdit", "matcher": "Bash|Write|Edit|MultiEdit",
@@ -93,7 +101,8 @@
"timeout": 15 "timeout": 15
} }
], ],
"description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its" "description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its",
"id": "pre:insaits:security"
}, },
{ {
"matcher": "Bash|Write|Edit|MultiEdit", "matcher": "Bash|Write|Edit|MultiEdit",
@@ -104,7 +113,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1" "description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1",
"id": "pre:governance-capture"
}, },
{ {
"matcher": "Write|Edit|MultiEdit", "matcher": "Write|Edit|MultiEdit",
@@ -115,7 +125,8 @@
"timeout": 5 "timeout": 5
} }
], ],
"description": "Block modifications to linter/formatter config files. Steers agent to fix code instead of weakening configs." "description": "Block modifications to linter/formatter config files. Steers agent to fix code instead of weakening configs.",
"id": "pre:config-protection"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -125,7 +136,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
} }
], ],
"description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls" "description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls",
"id": "pre:mcp-health-check"
} }
], ],
"PreCompact": [ "PreCompact": [
@@ -137,7 +149,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:compact\" \"scripts/hooks/pre-compact.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:compact\" \"scripts/hooks/pre-compact.js\" \"standard,strict\""
} }
], ],
"description": "Save state before context compaction" "description": "Save state before context compaction",
"id": "pre:compact"
} }
], ],
"SessionStart": [ "SessionStart": [
@@ -149,7 +162,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/session-start-bootstrap.js\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/session-start-bootstrap.js\""
} }
], ],
"description": "Load previous context and detect package manager on new session" "description": "Load previous context and detect package manager on new session",
"id": "session:start"
} }
], ],
"PostToolUse": [ "PostToolUse": [
@@ -161,7 +175,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/post-bash-command-log.js\" audit" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/post-bash-command-log.js\" audit"
} }
], ],
"description": "Audit log all bash commands to ~/.claude/bash-commands.log" "description": "Audit log all bash commands to ~/.claude/bash-commands.log",
"id": "post:bash:command-log-audit"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -171,7 +186,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/post-bash-command-log.js\" cost" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/post-bash-command-log.js\" cost"
} }
], ],
"description": "Cost tracker - log bash tool usage with timestamps" "description": "Cost tracker - log bash tool usage with timestamps",
"id": "post:bash:command-log-cost"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -181,7 +197,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:bash:pr-created\" \"scripts/hooks/post-bash-pr-created.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:bash:pr-created\" \"scripts/hooks/post-bash-pr-created.js\" \"standard,strict\""
} }
], ],
"description": "Log PR URL and provide review command after PR creation" "description": "Log PR URL and provide review command after PR creation",
"id": "post:bash:pr-created"
}, },
{ {
"matcher": "Bash", "matcher": "Bash",
@@ -193,7 +210,8 @@
"timeout": 30 "timeout": 30
} }
], ],
"description": "Example: async hook for build analysis (runs in background without blocking)" "description": "Example: async hook for build analysis (runs in background without blocking)",
"id": "post:bash:build-complete"
}, },
{ {
"matcher": "Edit|Write|MultiEdit", "matcher": "Edit|Write|MultiEdit",
@@ -205,7 +223,8 @@
"timeout": 30 "timeout": 30
} }
], ],
"description": "Run quality gate checks after file edits" "description": "Run quality gate checks after file edits",
"id": "post:quality-gate"
}, },
{ {
"matcher": "Edit|Write|MultiEdit", "matcher": "Edit|Write|MultiEdit",
@@ -215,7 +234,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:accumulate\" \"scripts/hooks/post-edit-accumulator.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:accumulate\" \"scripts/hooks/post-edit-accumulator.js\" \"standard,strict\""
} }
], ],
"description": "Record edited JS/TS file paths for batch format+typecheck at Stop time" "description": "Record edited JS/TS file paths for batch format+typecheck at Stop time",
"id": "post:edit:accumulator"
}, },
{ {
"matcher": "Edit", "matcher": "Edit",
@@ -225,7 +245,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:console-warn\" \"scripts/hooks/post-edit-console-warn.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:console-warn\" \"scripts/hooks/post-edit-console-warn.js\" \"standard,strict\""
} }
], ],
"description": "Warn about console.log statements after edits" "description": "Warn about console.log statements after edits",
"id": "post:edit:console-warn"
}, },
{ {
"matcher": "Bash|Write|Edit|MultiEdit", "matcher": "Bash|Write|Edit|MultiEdit",
@@ -236,7 +257,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Capture governance events from tool outputs. Enable with ECC_GOVERNANCE_CAPTURE=1" "description": "Capture governance events from tool outputs. Enable with ECC_GOVERNANCE_CAPTURE=1",
"id": "post:governance-capture"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -248,7 +270,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Capture tool use results for continuous learning" "description": "Capture tool use results for continuous learning",
"id": "post:observe:continuous-learning"
} }
], ],
"PostToolUseFailure": [ "PostToolUseFailure": [
@@ -260,7 +283,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\"" "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
} }
], ],
"description": "Track failed MCP tool calls, mark unhealthy servers, and attempt reconnect" "description": "Track failed MCP tool calls, mark unhealthy servers, and attempt reconnect",
"id": "post:mcp-health-check"
} }
], ],
"Stop": [ "Stop": [
@@ -273,7 +297,8 @@
"timeout": 300 "timeout": 300
} }
], ],
"description": "Batch format (Biome/Prettier) and typecheck (tsc) all JS/TS files edited this response — runs once at Stop instead of after every Edit" "description": "Batch format (Biome/Prettier) and typecheck (tsc) all JS/TS files edited this response — runs once at Stop instead of after every Edit",
"id": "stop:format-typecheck"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -283,7 +308,8 @@
"command": "node -e \"const fs=require('fs');const path=require('path');const {spawnSync}=require('child_process');const raw=fs.readFileSync(0,'utf8');const rel=path.join('scripts','hooks','run-with-flags.js');const hasRunnerRoot=candidate=>{const value=typeof candidate==='string'?candidate.trim():'';return value.length>0&&fs.existsSync(path.join(path.resolve(value),rel));};const root=(()=>{const envRoot=process.env.CLAUDE_PLUGIN_ROOT||'';if(hasRunnerRoot(envRoot))return path.resolve(envRoot.trim());const home=require('os').homedir();const claudeDir=path.join(home,'.claude');if(hasRunnerRoot(claudeDir))return claudeDir;for(const candidate of [path.join(claudeDir,'plugins','everything-claude-code'),path.join(claudeDir,'plugins','everything-claude-code@everything-claude-code'),path.join(claudeDir,'plugins','marketplace','everything-claude-code')]){if(hasRunnerRoot(candidate))return candidate;}try{const cacheBase=path.join(claudeDir,'plugins','cache','everything-claude-code');for(const org of fs.readdirSync(cacheBase,{withFileTypes:true})){if(!org.isDirectory())continue;for(const version of fs.readdirSync(path.join(cacheBase,org.name),{withFileTypes:true})){if(!version.isDirectory())continue;const candidate=path.join(cacheBase,org.name,version.name);if(hasRunnerRoot(candidate))return candidate;}}}catch{}return claudeDir;})();const script=path.join(root,rel);if(fs.existsSync(script)){const result=spawnSync(process.execPath,[script,'stop:check-console-log','scripts/hooks/check-console-log.js','standard,strict'],{input:raw,encoding:'utf8',env:process.env,cwd:process.cwd(),timeout:30000});const stdout=typeof result.stdout==='string'?result.stdout:'';if(stdout)process.stdout.write(stdout);else process.stdout.write(raw);if(result.stderr)process.stderr.write(result.stderr);if(result.error||result.status===null||result.signal){const reason=result.error?result.error.message:(result.signal?'signal '+result.signal:'missing exit status');process.stderr.write('[Stop] ERROR: hook runner failed: '+reason+String.fromCharCode(10));process.exit(1);}process.exit(Number.isInteger(result.status)?result.status:0);}process.stderr.write('[Stop] WARNING: could not resolve ECC plugin root; skipping hook'+String.fromCharCode(10));process.stdout.write(raw);\"" "command": "node -e \"const fs=require('fs');const path=require('path');const {spawnSync}=require('child_process');const raw=fs.readFileSync(0,'utf8');const rel=path.join('scripts','hooks','run-with-flags.js');const hasRunnerRoot=candidate=>{const value=typeof candidate==='string'?candidate.trim():'';return value.length>0&&fs.existsSync(path.join(path.resolve(value),rel));};const root=(()=>{const envRoot=process.env.CLAUDE_PLUGIN_ROOT||'';if(hasRunnerRoot(envRoot))return path.resolve(envRoot.trim());const home=require('os').homedir();const claudeDir=path.join(home,'.claude');if(hasRunnerRoot(claudeDir))return claudeDir;for(const candidate of [path.join(claudeDir,'plugins','everything-claude-code'),path.join(claudeDir,'plugins','everything-claude-code@everything-claude-code'),path.join(claudeDir,'plugins','marketplace','everything-claude-code')]){if(hasRunnerRoot(candidate))return candidate;}try{const cacheBase=path.join(claudeDir,'plugins','cache','everything-claude-code');for(const org of fs.readdirSync(cacheBase,{withFileTypes:true})){if(!org.isDirectory())continue;for(const version of fs.readdirSync(path.join(cacheBase,org.name),{withFileTypes:true})){if(!version.isDirectory())continue;const 
candidate=path.join(cacheBase,org.name,version.name);if(hasRunnerRoot(candidate))return candidate;}}}catch{}return claudeDir;})();const script=path.join(root,rel);if(fs.existsSync(script)){const result=spawnSync(process.execPath,[script,'stop:check-console-log','scripts/hooks/check-console-log.js','standard,strict'],{input:raw,encoding:'utf8',env:process.env,cwd:process.cwd(),timeout:30000});const stdout=typeof result.stdout==='string'?result.stdout:'';if(stdout)process.stdout.write(stdout);else process.stdout.write(raw);if(result.stderr)process.stderr.write(result.stderr);if(result.error||result.status===null||result.signal){const reason=result.error?result.error.message:(result.signal?'signal '+result.signal:'missing exit status');process.stderr.write('[Stop] ERROR: hook runner failed: '+reason+String.fromCharCode(10));process.exit(1);}process.exit(Number.isInteger(result.status)?result.status:0);}process.stderr.write('[Stop] WARNING: could not resolve ECC plugin root; skipping hook'+String.fromCharCode(10));process.stdout.write(raw);\""
} }
], ],
"description": "Check for console.log in modified files after each response" "description": "Check for console.log in modified files after each response",
"id": "stop:check-console-log"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -295,7 +321,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Persist session state after each response (Stop carries transcript_path)" "description": "Persist session state after each response (Stop carries transcript_path)",
"id": "stop:session-end"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -307,7 +334,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Evaluate session for extractable patterns" "description": "Evaluate session for extractable patterns",
"id": "stop:evaluate-session"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -319,7 +347,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Track token and cost metrics per session" "description": "Track token and cost metrics per session",
"id": "stop:cost-tracker"
}, },
{ {
"matcher": "*", "matcher": "*",
@@ -331,7 +360,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Send desktop notification (macOS/WSL) with task summary when Claude responds" "description": "Send desktop notification (macOS/WSL) with task summary when Claude responds",
"id": "stop:desktop-notify"
} }
], ],
"SessionEnd": [ "SessionEnd": [
@@ -345,7 +375,8 @@
"timeout": 10 "timeout": 10
} }
], ],
"description": "Session end lifecycle marker (non-blocking)" "description": "Session end lifecycle marker (non-blocking)",
"id": "session:end:marker"
} }
] ]
} }
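
One plausible use of the new `id` fields: a managed-hook installer can dedupe by semantic identity instead of comparing command strings. The merge rule below (later entries win, unkeyed hooks fall back to matcher plus description) is an assumption, not the actual installer logic.

```typescript
interface ManagedHook {
  id?: string;
  matcher: string;
  hooks: unknown[];
  description: string;
}

function dedupeManagedHooks(existing: ManagedHook[], incoming: ManagedHook[]): ManagedHook[] {
  const byKey = new Map<string, ManagedHook>();
  for (const hook of [...existing, ...incoming]) {
    const key = hook.id ?? `${hook.matcher}::${hook.description}`;
    byKey.set(key, hook); // later entries win
  }
  return [...byKey.values()];
}
```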

View File

@@ -145,6 +145,14 @@
"business-content" "business-content"
] ]
}, },
{
"id": "capability:operators",
"family": "capability",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
"modules": [
"operator-workflows"
]
},
{ {
"id": "capability:social", "id": "capability:social",
"family": "capability", "family": "capability",

View File

@@ -303,6 +303,31 @@
"cost": "heavy", "cost": "heavy",
"stability": "stable" "stability": "stable"
}, },
{
"id": "operator-workflows",
"kind": "skills",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
"paths": [
"skills/customer-billing-ops",
"skills/google-workspace-ops",
"skills/project-flow-ops",
"skills/workspace-surface-audit"
],
"targets": [
"claude",
"cursor",
"antigravity",
"codex",
"opencode",
"codebuddy"
],
"dependencies": [
"platform-configs"
],
"defaultInstall": false,
"cost": "medium",
"stability": "beta"
},
{ {
"id": "social-distribution", "id": "social-distribution",
"kind": "skills", "kind": "skills",
@@ -333,6 +358,7 @@
"paths": [ "paths": [
"skills/fal-ai-media", "skills/fal-ai-media",
"skills/remotion-video-creation", "skills/remotion-video-creation",
"skills/ui-demo",
"skills/video-editing", "skills/video-editing",
"skills/videodb" "skills/videodb"
], ],

View File

@@ -66,6 +66,7 @@
"security", "security",
"research-apis", "research-apis",
"business-content", "business-content",
"operator-workflows",
"social-distribution", "social-distribution",
"media-generation", "media-generation",
"orchestration", "orchestration",

View File

@@ -1,7 +1,7 @@
{ {
"name": "ecc-universal", "name": "ecc-universal",
"version": "1.9.0", "version": "1.9.0",
"description": "Complete collection of battle-tested Claude Code configs — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use by an Anthropic hackathon winner", "description": "Complete collection of battle-tested Claude Code configs — agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use by an Anthropic hackathon winner",
"keywords": [ "keywords": [
"claude-code", "claude-code",
"ai", "ai",
@@ -102,13 +102,15 @@
}, },
"scripts": { "scripts": {
"postinstall": "echo '\\n ecc-universal installed!\\n Run: npx ecc typescript\\n Compat: npx ecc-install typescript\\n Docs: https://github.com/affaan-m/everything-claude-code\\n'", "postinstall": "echo '\\n ecc-universal installed!\\n Run: npx ecc typescript\\n Compat: npx ecc-install typescript\\n Docs: https://github.com/affaan-m/everything-claude-code\\n'",
"catalog:check": "node scripts/ci/catalog.js --text",
"catalog:sync": "node scripts/ci/catalog.js --write --text",
"lint": "eslint . && markdownlint '**/*.md' --ignore node_modules", "lint": "eslint . && markdownlint '**/*.md' --ignore node_modules",
"harness:audit": "node scripts/harness-audit.js", "harness:audit": "node scripts/harness-audit.js",
"claw": "node scripts/claw.js", "claw": "node scripts/claw.js",
"orchestrate:status": "node scripts/orchestration-status.js", "orchestrate:status": "node scripts/orchestration-status.js",
"orchestrate:worker": "bash scripts/orchestrate-codex-worker.sh", "orchestrate:worker": "bash scripts/orchestrate-codex-worker.sh",
"orchestrate:tmux": "node scripts/orchestrate-worktrees.js", "orchestrate:tmux": "node scripts/orchestrate-worktrees.js",
"test": "node scripts/ci/check-unicode-safety.js && node scripts/ci/validate-agents.js && node scripts/ci/validate-commands.js && node scripts/ci/validate-rules.js && node scripts/ci/validate-skills.js && node scripts/ci/validate-hooks.js && node scripts/ci/validate-install-manifests.js && node scripts/ci/validate-no-personal-paths.js && node scripts/ci/catalog.js --text && node tests/run-all.js", "test": "node scripts/ci/check-unicode-safety.js && node scripts/ci/validate-agents.js && node scripts/ci/validate-commands.js && node scripts/ci/validate-rules.js && node scripts/ci/validate-skills.js && node scripts/ci/validate-hooks.js && node scripts/ci/validate-install-manifests.js && node scripts/ci/validate-no-personal-paths.js && npm run catalog:check && node tests/run-all.js",
"coverage": "c8 --all --include=\"scripts/**/*.js\" --check-coverage --lines 80 --functions 80 --branches 80 --statements 80 --reporter=text --reporter=lcov node tests/run-all.js" "coverage": "c8 --all --include=\"scripts/**/*.js\" --check-coverage --lines 80 --functions 80 --branches 80 --statements 80 --reporter=text --reporter=lcov node tests/run-all.js"
}, },
"dependencies": { "dependencies": {

View File

@@ -1,12 +1,13 @@
#!/usr/bin/env node #!/usr/bin/env node
/** /**
* Verify repo catalog counts against README.md and AGENTS.md. * Verify repo catalog counts against tracked documentation files.
* *
* Usage: * Usage:
* node scripts/ci/catalog.js * node scripts/ci/catalog.js
* node scripts/ci/catalog.js --json * node scripts/ci/catalog.js --json
* node scripts/ci/catalog.js --md * node scripts/ci/catalog.js --md
* node scripts/ci/catalog.js --text * node scripts/ci/catalog.js --text
* node scripts/ci/catalog.js --write --text
*/ */
'use strict'; 'use strict';
@@ -17,6 +18,10 @@ const path = require('path');
const ROOT = path.join(__dirname, '../..'); const ROOT = path.join(__dirname, '../..');
const README_PATH = path.join(ROOT, 'README.md'); const README_PATH = path.join(ROOT, 'README.md');
const AGENTS_PATH = path.join(ROOT, 'AGENTS.md'); const AGENTS_PATH = path.join(ROOT, 'AGENTS.md');
const README_ZH_CN_PATH = path.join(ROOT, 'README.zh-CN.md');
const DOCS_ZH_CN_README_PATH = path.join(ROOT, 'docs', 'zh-CN', 'README.md');
const DOCS_ZH_CN_AGENTS_PATH = path.join(ROOT, 'docs', 'zh-CN', 'AGENTS.md');
const WRITE_MODE = process.argv.includes('--write');
const OUTPUT_MODE = process.argv.includes('--md') const OUTPUT_MODE = process.argv.includes('--md')
? 'md' ? 'md'
@@ -43,8 +48,9 @@ function listMatchingFiles(relativeDir, matcher) {
function buildCatalog() { function buildCatalog() {
const agents = listMatchingFiles('agents', entry => entry.isFile() && entry.name.endsWith('.md')); const agents = listMatchingFiles('agents', entry => entry.isFile() && entry.name.endsWith('.md'));
const commands = listMatchingFiles('commands', entry => entry.isFile() && entry.name.endsWith('.md')); const commands = listMatchingFiles('commands', entry => entry.isFile() && entry.name.endsWith('.md'));
const skills = listMatchingFiles('skills', entry => entry.isDirectory() && fs.existsSync(path.join(ROOT, 'skills', entry.name, 'SKILL.md'))) const skills = listMatchingFiles('skills', entry => (
.map(skillDir => `${skillDir}/SKILL.md`); entry.isDirectory() && fs.existsSync(path.join(ROOT, 'skills', entry.name, 'SKILL.md'))
)).map(skillDir => `${skillDir}/SKILL.md`);
return { return {
agents: { count: agents.length, files: agents, glob: 'agents/*.md' }, agents: { count: agents.length, files: agents, glob: 'agents/*.md' },
@@ -61,10 +67,28 @@ function readFileOrThrow(filePath) {
} }
} }
function writeFileOrThrow(filePath, content) {
try {
fs.writeFileSync(filePath, content, 'utf8');
} catch (error) {
throw new Error(`Failed to write ${path.basename(filePath)}: ${error.message}`);
}
}
function replaceOrThrow(content, regex, replacer, source) {
if (!regex.test(content)) {
throw new Error(`${source} is missing the expected catalog marker`);
}
return content.replace(regex, replacer);
}
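For reference, a standalone sketch of the replaceOrThrow contract, using a made-up document string and regex rather than the repo's real markers:

```js
// Standalone sketch: update the marker if it is present, fail loudly if the
// document has drifted and the marker is gone. The doc text is hypothetical.
function replaceOrThrow(content, regex, replacer, source) {
  if (!regex.test(content)) {
    throw new Error(`${source} is missing the expected catalog marker`);
  }
  return content.replace(regex, replacer);
}

const doc = 'You get access to 11 agents out of the box.';

// Marker present: the count is rewritten in place.
const updated = replaceOrThrow(
  doc,
  /(access to\s+)(\d+)(\s+agents)/i,
  (_, prefix, __, suffix) => `${prefix}12${suffix}`,
  'example doc'
);
// => 'You get access to 12 agents out of the box.'

// Marker absent: the sync aborts instead of silently writing nothing.
// replaceOrThrow('unrelated text', /(access to\s+)(\d+)(\s+agents)/i, ..., 'example doc')
// throws 'example doc is missing the expected catalog marker'.
```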
function parseReadmeExpectations(readmeContent) { function parseReadmeExpectations(readmeContent) {
const expectations = []; const expectations = [];
const quickStartMatch = readmeContent.match(/access to\s+(\d+)\s+agents,\s+(\d+)\s+skills,\s+and\s+(\d+)\s+commands/i); const quickStartMatch = readmeContent.match(
/access to\s+(\d+)\s+agents,\s+(\d+)\s+skills,\s+and\s+(\d+)\s+(?:commands|legacy command shims?)/i
);
if (!quickStartMatch) { if (!quickStartMatch) {
throw new Error('README.md is missing the quick-start catalog summary'); throw new Error('README.md is missing the quick-start catalog summary');
} }
@@ -95,6 +119,120 @@ function parseReadmeExpectations(readmeContent) {
}); });
} }
const parityPatterns = [
{
category: 'agents',
regex: /^\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*12\s*\|$/im,
source: 'README.md parity table'
},
{
category: 'commands',
regex: /^\|\s*(?:\*\*)?Commands(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\|\s*Instruction-based\s*\|\s*31\s*\|$/im,
source: 'README.md parity table'
},
{
category: 'skills',
regex: /^\|\s*(?:\*\*)?Skills(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\|\s*10\s*\(native format\)\s*\|\s*37\s*\|$/im,
source: 'README.md parity table'
}
];
for (const pattern of parityPatterns) {
const match = readmeContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} row`);
}
expectations.push({
category: pattern.category,
mode: 'exact',
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
return expectations;
}
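For orientation, the parity-table regexes above expect rows shaped roughly like the one below; the row text is illustrative, not copied from the actual README:

```js
// Illustrative check of the README parity-table row format the agents regex expects.
const agentsRowRegex = /^\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*12\s*\|$/im;

const row = '| **Agents** | 12 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |';
console.log(agentsRowRegex.test(row));      // true
console.log(row.match(agentsRowRegex)[1]);  // '12' -- the documented count
```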
function parseZhRootReadmeExpectations(readmeContent) {
const match = readmeContent.match(/你现在可以使用\s+(\d+)\s+个代理、\s*(\d+)\s*个技能和\s*(\d+)\s*个命令/i);
if (!match) {
throw new Error('README.zh-CN.md is missing the quick-start catalog summary');
}
return [
{ category: 'agents', mode: 'exact', expected: Number(match[1]), source: 'README.zh-CN.md quick-start summary' },
{ category: 'skills', mode: 'exact', expected: Number(match[2]), source: 'README.zh-CN.md quick-start summary' },
{ category: 'commands', mode: 'exact', expected: Number(match[3]), source: 'README.zh-CN.md quick-start summary' }
];
}
function parseZhDocsReadmeExpectations(readmeContent) {
const expectations = [];
const quickStartMatch = readmeContent.match(/你现在可以使用\s+(\d+)\s+个智能体、\s*(\d+)\s*项技能和\s*(\d+)\s*个命令了/i);
if (!quickStartMatch) {
throw new Error('docs/zh-CN/README.md is missing the quick-start catalog summary');
}
expectations.push(
{ category: 'agents', mode: 'exact', expected: Number(quickStartMatch[1]), source: 'docs/zh-CN/README.md quick-start summary' },
{ category: 'skills', mode: 'exact', expected: Number(quickStartMatch[2]), source: 'docs/zh-CN/README.md quick-start summary' },
{ category: 'commands', mode: 'exact', expected: Number(quickStartMatch[3]), source: 'docs/zh-CN/README.md quick-start summary' }
);
const tablePatterns = [
{ category: 'agents', regex: /\|\s*智能体\s*\|\s*(?:(?:PASS:|\u2705)\s*)?(\d+)\s*个\s*\|/i, source: 'docs/zh-CN/README.md comparison table' },
{ category: 'commands', regex: /\|\s*命令\s*\|\s*(?:(?:PASS:|\u2705)\s*)?(\d+)\s*个\s*\|/i, source: 'docs/zh-CN/README.md comparison table' },
{ category: 'skills', regex: /\|\s*技能\s*\|\s*(?:(?:PASS:|\u2705)\s*)?(\d+)\s*项\s*\|/i, source: 'docs/zh-CN/README.md comparison table' }
];
for (const pattern of tablePatterns) {
const match = readmeContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} row`);
}
expectations.push({
category: pattern.category,
mode: 'exact',
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
const parityPatterns = [
{
category: 'agents',
regex: /^\|\s*(?:\*\*)?智能体(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*12\s*\|$/im,
source: 'docs/zh-CN/README.md parity table'
},
{
category: 'commands',
regex: /^\|\s*(?:\*\*)?命令(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*共享\s*\|\s*基于指令\s*\|\s*31\s*\|$/im,
source: 'docs/zh-CN/README.md parity table'
},
{
category: 'skills',
regex: /^\|\s*(?:\*\*)?技能(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*共享\s*\|\s*10\s*\(原生格式\)\s*\|\s*37\s*\|$/im,
source: 'docs/zh-CN/README.md parity table'
}
];
for (const pattern of parityPatterns) {
const match = readmeContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} row`);
}
expectations.push({
category: pattern.category,
mode: 'exact',
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
return expectations; return expectations;
} }
@@ -153,6 +291,61 @@ function parseAgentsDocExpectations(agentsContent) {
return expectations; return expectations;
} }
function parseZhAgentsDocExpectations(agentsContent) {
const summaryMatch = agentsContent.match(/提供\s+(\d+)\s+个专业代理、\s*(\d+)(\+)?\s*项技能、\s*(\d+)\s+条命令/i);
if (!summaryMatch) {
throw new Error('docs/zh-CN/AGENTS.md is missing the catalog summary line');
}
const expectations = [
{ category: 'agents', mode: 'exact', expected: Number(summaryMatch[1]), source: 'docs/zh-CN/AGENTS.md summary' },
{
category: 'skills',
mode: summaryMatch[3] ? 'minimum' : 'exact',
expected: Number(summaryMatch[2]),
source: 'docs/zh-CN/AGENTS.md summary'
},
{ category: 'commands', mode: 'exact', expected: Number(summaryMatch[4]), source: 'docs/zh-CN/AGENTS.md summary' }
];
const structurePatterns = [
{
category: 'agents',
mode: 'exact',
regex: /^\s*agents\/\s*[—–-]\s*(\d+)\s+个专业子代理\s*$/im,
source: 'docs/zh-CN/AGENTS.md project structure'
},
{
category: 'skills',
mode: 'minimum',
regex: /^\s*skills\/\s*[—–-]\s*(\d+)(\+)?\s+个工作流技能和领域知识\s*$/im,
source: 'docs/zh-CN/AGENTS.md project structure'
},
{
category: 'commands',
mode: 'exact',
regex: /^\s*commands\/\s*[—–-]\s*(\d+)\s+个斜杠命令\s*$/im,
source: 'docs/zh-CN/AGENTS.md project structure'
}
];
for (const pattern of structurePatterns) {
const match = agentsContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} entry`);
}
expectations.push({
category: pattern.category,
mode: pattern.mode === 'minimum' && match[2] ? 'minimum' : pattern.mode,
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
return expectations;
}
function evaluateExpectations(catalog, expectations) { function evaluateExpectations(catalog, expectations) {
return expectations.map(expectation => { return expectations.map(expectation => {
const actual = catalog[expectation.category].count; const actual = catalog[expectation.category].count;
@@ -173,6 +366,208 @@ function formatExpectation(expectation) {
return `${expectation.source}: ${expectation.category} documented ${comparator} ${expectation.expected}, actual ${expectation.actual}`; return `${expectation.source}: ${expectation.category} documented ${comparator} ${expectation.expected}, actual ${expectation.actual}`;
} }
function syncEnglishReadme(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(access to\s+)(\d+)(\s+agents,\s+)(\d+)(\s+skills,\s+and\s+)(\d+)(\s+(?:commands|legacy command shims?))/i,
(_, prefix, __, agentsSuffix, ___, skillsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsSuffix}${catalog.commands.count} legacy command shims`,
'README.md quick-start summary'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s+agents\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'README.md comparison table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*(?:\*\*)?Commands(?:\*\*)?\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s+commands\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'README.md comparison table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*(?:\*\*)?Skills(?:\*\*)?\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s+skills\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'README.md comparison table (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*12\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'README.md parity table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?Commands(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*Shared\s*\|\s*Instruction-based\s*\|\s*31\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'README.md parity table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?Skills(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*Shared\s*\|\s*10\s*\(native format\)\s*\|\s*37\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'README.md parity table (skills)'
);
return nextContent;
}
function syncEnglishAgents(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(providing\s+)(\d+)(\s+specialized agents,\s+)(\d+)(\+?)(\s+skills,\s+)(\d+)(\s+commands)/i,
(_, prefix, __, agentsSuffix, ___, skillsPlus, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsPlus}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'AGENTS.md summary'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*agents\/\s*[—–-]\s*)(\d+)(\s+specialized subagents\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'AGENTS.md project structure (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*skills\/\s*[—–-]\s*)(\d+)(\+?)(\s+workflow skills and domain knowledge\s*)$/im,
(_, prefix, __, plus, suffix) => `${prefix}${catalog.skills.count}${plus}${suffix}`,
'AGENTS.md project structure (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*commands\/\s*[—–-]\s*)(\d+)(\s+slash commands\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'AGENTS.md project structure (commands)'
);
return nextContent;
}
function syncZhRootReadme(content, catalog) {
return replaceOrThrow(
content,
/(你现在可以使用\s+)(\d+)(\s+个代理、\s*)(\d+)(\s*个技能和\s*)(\d+)(\s*个命令[。.!]?)/i,
(_, prefix, __, agentsSuffix, ___, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'README.zh-CN.md quick-start summary'
);
}
function syncZhDocsReadme(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(你现在可以使用\s+)(\d+)(\s+个智能体、\s*)(\d+)(\s*项技能和\s*)(\d+)(\s*个命令了[。.!]?)/i,
(_, prefix, __, agentsSuffix, ___, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'docs/zh-CN/README.md quick-start summary'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*智能体\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s*个\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'docs/zh-CN/README.md comparison table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*命令\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s*个\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'docs/zh-CN/README.md comparison table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*技能\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s*项\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'docs/zh-CN/README.md comparison table (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?智能体(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*12\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'docs/zh-CN/README.md parity table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?命令(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*共享\s*\|\s*基于指令\s*\|\s*31\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'docs/zh-CN/README.md parity table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?技能(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*共享\s*\|\s*10\s*\(原生格式\)\s*\|\s*37\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'docs/zh-CN/README.md parity table (skills)'
);
return nextContent;
}
function syncZhAgents(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(提供\s+)(\d+)(\s+个专业代理、\s*)(\d+)(\+?)(\s*项技能、\s*)(\d+)(\s+条命令)/i,
(_, prefix, __, agentsSuffix, ___, skillsPlus, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsPlus}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'docs/zh-CN/AGENTS.md summary'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*agents\/\s*[—–-]\s*)(\d+)(\s+个专业子代理\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'docs/zh-CN/AGENTS.md project structure (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*skills\/\s*[—–-]\s*)(\d+)(\+?)(\s+个工作流技能和领域知识\s*)$/im,
(_, prefix, __, plus, suffix) => `${prefix}${catalog.skills.count}${plus}${suffix}`,
'docs/zh-CN/AGENTS.md project structure (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*commands\/\s*[—–-]\s*)(\d+)(\s+个斜杠命令\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'docs/zh-CN/AGENTS.md project structure (commands)'
);
return nextContent;
}
const DOCUMENT_SPECS = [
{
filePath: README_PATH,
parseExpectations: parseReadmeExpectations,
syncContent: syncEnglishReadme,
},
{
filePath: AGENTS_PATH,
parseExpectations: parseAgentsDocExpectations,
syncContent: syncEnglishAgents,
},
{
filePath: README_ZH_CN_PATH,
parseExpectations: parseZhRootReadmeExpectations,
syncContent: syncZhRootReadme,
},
{
filePath: DOCS_ZH_CN_README_PATH,
parseExpectations: parseZhDocsReadmeExpectations,
syncContent: syncZhDocsReadme,
},
{
filePath: DOCS_ZH_CN_AGENTS_PATH,
parseExpectations: parseZhAgentsDocExpectations,
syncContent: syncZhAgents,
},
];
function renderText(result) { function renderText(result) {
console.log('Catalog counts:'); console.log('Catalog counts:');
console.log(`- agents: ${result.catalog.agents.count}`); console.log(`- agents: ${result.catalog.agents.count}`);
@@ -215,12 +610,20 @@ function renderMarkdown(result) {
function main() { function main() {
const catalog = buildCatalog(); const catalog = buildCatalog();
const readmeContent = readFileOrThrow(README_PATH);
const agentsContent = readFileOrThrow(AGENTS_PATH); if (WRITE_MODE) {
const expectations = [ for (const spec of DOCUMENT_SPECS) {
...parseReadmeExpectations(readmeContent), const currentContent = readFileOrThrow(spec.filePath);
...parseAgentsDocExpectations(agentsContent) const nextContent = spec.syncContent(currentContent, catalog);
]; if (nextContent !== currentContent) {
writeFileOrThrow(spec.filePath, nextContent);
}
}
}
const expectations = DOCUMENT_SPECS.flatMap(spec => (
spec.parseExpectations(readFileOrThrow(spec.filePath))
));
const checks = evaluateExpectations(catalog, expectations); const checks = evaluateExpectations(catalog, expectations);
const result = { catalog, checks }; const result = { catalog, checks };
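A minimal sketch of how one DOCUMENT_SPECS entry composes in --write mode, assuming the helpers defined earlier in this file; the flow mirrors main() for a single document:

```js
// Sketch of the per-spec flow in --write mode, using the English README spec.
// Assumes buildCatalog, readFileOrThrow, writeFileOrThrow, syncEnglishReadme,
// parseReadmeExpectations, and evaluateExpectations from earlier in this file.
const spec = {
  filePath: README_PATH,
  parseExpectations: parseReadmeExpectations,
  syncContent: syncEnglishReadme,
};

const catalog = buildCatalog();
const currentContent = readFileOrThrow(spec.filePath);

// 1. Rewrite documented counts from the real catalog (throws if a marker is missing).
const nextContent = spec.syncContent(currentContent, catalog);
if (nextContent !== currentContent) {
  writeFileOrThrow(spec.filePath, nextContent);
}

// 2. Re-read and re-parse, so the check still runs against what is on disk.
const expectations = spec.parseExpectations(readFileOrThrow(spec.filePath));
const checks = evaluateExpectations(catalog, expectations);
```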
@@ -54,7 +54,7 @@ NC='\033[0m'
log() { echo -e "${BLUE}[GAN-HARNESS]${NC} $*"; } log() { echo -e "${BLUE}[GAN-HARNESS]${NC} $*"; }
ok() { echo -e "${GREEN}[✓]${NC} $*"; } ok() { echo -e "${GREEN}[✓]${NC} $*"; }
warn() { echo -e "${YELLOW}[]${NC} $*"; } warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
fail() { echo -e "${RED}[✗]${NC} $*"; } fail() { echo -e "${RED}[✗]${NC} $*"; }
phase() { echo -e "\n${PURPLE}═══════════════════════════════════════════════${NC}"; echo -e "${PURPLE} $*${NC}"; echo -e "${PURPLE}═══════════════════════════════════════════════${NC}\n"; } phase() { echo -e "\n${PURPLE}═══════════════════════════════════════════════${NC}"; echo -e "${PURPLE} $*${NC}"; echo -e "${PURPLE}═══════════════════════════════════════════════${NC}\n"; }
@@ -159,7 +159,7 @@ for (( i=1; i<=MAX_ITERATIONS; i++ )); do
log "━━━ Iteration $i / $MAX_ITERATIONS ━━━" log "━━━ Iteration $i / $MAX_ITERATIONS ━━━"
# ── GENERATE ── # ── GENERATE ──
echo -e "${GREEN} GENERATOR (iteration $i)${NC}" echo -e "${GREEN}>> GENERATOR (iteration $i)${NC}"
FEEDBACK_CONTEXT="" FEEDBACK_CONTEXT=""
if [ $i -gt 1 ] && [ -f "${FEEDBACK_DIR}/feedback-$(printf '%03d' $((i-1))).md" ]; then if [ $i -gt 1 ] && [ -f "${FEEDBACK_DIR}/feedback-$(printf '%03d' $((i-1))).md" ]; then
@@ -181,7 +181,7 @@ Update gan-harness/generator-state.md." \
ok "Generator completed iteration $i" ok "Generator completed iteration $i"
# ── EVALUATE ── # ── EVALUATE ──
echo -e "${RED} EVALUATOR (iteration $i)${NC}" echo -e "${RED}>> EVALUATOR (iteration $i)${NC}"
claude -p --model "$EVALUATOR_MODEL" \ claude -p --model "$EVALUATOR_MODEL" \
--allowedTools "Read,Write,Bash,Grep,Glob" \ --allowedTools "Read,Write,Bash,Grep,Glob" \
@@ -217,7 +217,7 @@ Include the weighted TOTAL score in the format: | **TOTAL** | | | **X.X** |" \
# ── CHECK PASS ── # ── CHECK PASS ──
if score_passes "$SCORE" "$PASS_THRESHOLD"; then if score_passes "$SCORE" "$PASS_THRESHOLD"; then
echo "" echo ""
ok "🎉 PASSED at iteration $i with score $SCORE (threshold: $PASS_THRESHOLD)" ok "PASSED at iteration $i with score $SCORE (threshold: $PASS_THRESHOLD)"
break break
fi fi
@@ -256,7 +256,7 @@ cat > "${HARNESS_DIR}/build-report.md" << EOF
# GAN Harness Build Report # GAN Harness Build Report
**Brief:** $BRIEF **Brief:** $BRIEF
**Result:** $(score_passes "$FINAL_SCORE" "$PASS_THRESHOLD" && echo "PASS" || echo "FAIL") **Result:** $(score_passes "$FINAL_SCORE" "$PASS_THRESHOLD" && echo "PASS" || echo "FAIL")
**Iterations:** $NUM_ITERATIONS / $MAX_ITERATIONS **Iterations:** $NUM_ITERATIONS / $MAX_ITERATIONS
**Final Score:** $FINAL_SCORE / 10.0 (threshold: $PASS_THRESHOLD) **Final Score:** $FINAL_SCORE / 10.0 (threshold: $PASS_THRESHOLD)
**Elapsed:** $ELAPSED **Elapsed:** $ELAPSED
@@ -287,9 +287,9 @@ ok "Report written to ${HARNESS_DIR}/build-report.md"
echo "" echo ""
log "━━━ Final Results ━━━" log "━━━ Final Results ━━━"
if score_passes "$FINAL_SCORE" "$PASS_THRESHOLD"; then if score_passes "$FINAL_SCORE" "$PASS_THRESHOLD"; then
echo -e "${GREEN} Result: PASS${NC}" echo -e "${GREEN} Result: PASS${NC}"
else else
echo -e "${RED} Result: FAIL${NC}" echo -e "${RED} Result: FAIL${NC}"
fi fi
echo -e " Score: ${CYAN}${FINAL_SCORE}${NC} / 10.0" echo -e " Score: ${CYAN}${FINAL_SCORE}${NC} / 10.0"
echo -e " Iterations: ${NUM_ITERATIONS} / ${MAX_ITERATIONS}" echo -e " Iterations: ${NUM_ITERATIONS} / ${MAX_ITERATIONS}"
@@ -1,7 +1,7 @@
const fs = require('fs'); const fs = require('fs');
const os = require('os'); const os = require('os');
const path = require('path'); const path = require('path');
const { planInstallTargetScaffold } = require('./install-targets/registry'); const { getInstallTargetAdapter, planInstallTargetScaffold } = require('./install-targets/registry');
const DEFAULT_REPO_ROOT = path.join(__dirname, '../..'); const DEFAULT_REPO_ROOT = path.join(__dirname, '../..');
const SUPPORTED_INSTALL_TARGETS = ['claude', 'cursor', 'antigravity', 'codex', 'gemini', 'opencode', 'codebuddy']; const SUPPORTED_INSTALL_TARGETS = ['claude', 'cursor', 'antigravity', 'codex', 'gemini', 'opencode', 'codebuddy'];
@@ -76,6 +76,48 @@ function dedupeStrings(values) {
return [...new Set((Array.isArray(values) ? values : []).map(value => String(value).trim()).filter(Boolean))]; return [...new Set((Array.isArray(values) ? values : []).map(value => String(value).trim()).filter(Boolean))];
} }
function readOptionalStringOption(options, key) {
if (
!Object.prototype.hasOwnProperty.call(options, key)
|| options[key] === null
|| options[key] === undefined
) {
return null;
}
if (typeof options[key] !== 'string' || options[key].trim() === '') {
throw new Error(`${key} must be a non-empty string when provided`);
}
return options[key];
}
function readModuleTargetsOrThrow(module) {
const moduleId = module && module.id ? module.id : '<unknown>';
const targets = module && module.targets;
if (!Array.isArray(targets)) {
throw new Error(`Install module ${moduleId} has invalid targets; expected an array of supported target ids`);
}
const normalizedTargets = targets.map(target => (
typeof target === 'string' ? target.trim() : ''
));
if (normalizedTargets.some(target => target.length === 0)) {
throw new Error(`Install module ${moduleId} has invalid targets; expected an array of supported target ids`);
}
const unsupportedTargets = normalizedTargets.filter(target => !SUPPORTED_INSTALL_TARGETS.includes(target));
if (unsupportedTargets.length > 0) {
throw new Error(
`Install module ${moduleId} has unsupported targets: ${unsupportedTargets.join(', ')}`
);
}
return normalizedTargets;
}
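As a quick illustration of the target validation above, with hypothetical module objects rather than real manifest entries:

```js
// Hypothetical modules used only to illustrate readModuleTargetsOrThrow.
const okModule = { id: 'core-rules', targets: ['claude', 'codex'] };
readModuleTargetsOrThrow(okModule);
// => ['claude', 'codex']

const badModule = { id: 'legacy-pack', targets: ['claude', 'vscode'] };
// readModuleTargetsOrThrow(badModule) throws:
// 'Install module legacy-pack has unsupported targets: vscode'

const malformedModule = { id: 'broken', targets: 'claude' };
// readModuleTargetsOrThrow(malformedModule) throws:
// 'Install module broken has invalid targets; expected an array of supported target ids'
```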
function assertKnownModuleIds(moduleIds, manifests) { function assertKnownModuleIds(moduleIds, manifests) {
const unknownModuleIds = dedupeStrings(moduleIds) const unknownModuleIds = dedupeStrings(moduleIds)
.filter(moduleId => !manifests.modulesById.has(moduleId)); .filter(moduleId => !manifests.modulesById.has(moduleId));
@@ -125,6 +167,11 @@ function loadInstallManifests(options = {}) {
? profilesData.profiles ? profilesData.profiles
: {}; : {};
const components = Array.isArray(componentsData.components) ? componentsData.components : []; const components = Array.isArray(componentsData.components) ? componentsData.components : [];
for (const module of modules) {
readModuleTargetsOrThrow(module);
}
const modulesById = new Map(modules.map(module => [module.id, module])); const modulesById = new Map(modules.map(module => [module.id, module]));
const componentsById = new Map(components.map(component => [component.id, component])); const componentsById = new Map(components.map(component => [component.id, component]));
@@ -361,6 +408,16 @@ function resolveInstallPlan(options = {}) {
`Unknown install target: ${target}. Expected one of ${SUPPORTED_INSTALL_TARGETS.join(', ')}` `Unknown install target: ${target}. Expected one of ${SUPPORTED_INSTALL_TARGETS.join(', ')}`
); );
} }
const validatedProjectRoot = readOptionalStringOption(options, 'projectRoot');
const validatedHomeDir = readOptionalStringOption(options, 'homeDir');
const targetPlanningInput = target
? {
repoRoot: manifests.repoRoot,
projectRoot: validatedProjectRoot || manifests.repoRoot,
homeDir: validatedHomeDir || os.homedir(),
}
: null;
const targetAdapter = target ? getInstallTargetAdapter(target) : null;
const effectiveRequestedIds = dedupeStrings( const effectiveRequestedIds = dedupeStrings(
requestedModuleIds.filter(moduleId => !excludedModuleOwners.has(moduleId)) requestedModuleIds.filter(moduleId => !excludedModuleOwners.has(moduleId))
@@ -396,7 +453,13 @@ function resolveInstallPlan(options = {}) {
return; return;
} }
if (target && !module.targets.includes(target)) { const supportsTarget = !target
|| (
readModuleTargetsOrThrow(module).includes(target)
&& (!targetAdapter || targetAdapter.supportsModule(module, targetPlanningInput))
);
if (!supportsTarget) {
if (dependencyOf) { if (dependencyOf) {
skippedTargetIds.add(rootRequesterId || dependencyOf); skippedTargetIds.add(rootRequesterId || dependencyOf);
return false; return false;
@@ -444,9 +507,9 @@ function resolveInstallPlan(options = {}) {
const scaffoldPlan = target const scaffoldPlan = target
? planInstallTargetScaffold({ ? planInstallTargetScaffold({
target, target,
repoRoot: manifests.repoRoot, repoRoot: targetPlanningInput.repoRoot,
projectRoot: options.projectRoot || manifests.repoRoot, projectRoot: targetPlanningInput.projectRoot,
homeDir: options.homeDir || os.homedir(), homeDir: targetPlanningInput.homeDir,
modules: selectedModules, modules: selectedModules,
}) })
: null; : null;
@@ -4,14 +4,28 @@ const {
createFlatRuleOperations, createFlatRuleOperations,
createInstallTargetAdapter, createInstallTargetAdapter,
createManagedScaffoldOperation, createManagedScaffoldOperation,
normalizeRelativePath,
} = require('./helpers'); } = require('./helpers');
const SUPPORTED_SOURCE_PREFIXES = ['rules', 'commands', 'agents', 'skills', '.agents', 'AGENTS.md'];
function supportsAntigravitySourcePath(sourceRelativePath) {
const normalizedPath = normalizeRelativePath(sourceRelativePath);
return SUPPORTED_SOURCE_PREFIXES.some(prefix => (
normalizedPath === prefix || normalizedPath.startsWith(`${prefix}/`)
));
}
module.exports = createInstallTargetAdapter({ module.exports = createInstallTargetAdapter({
id: 'antigravity-project', id: 'antigravity-project',
target: 'antigravity', target: 'antigravity',
kind: 'project', kind: 'project',
rootSegments: ['.agent'], rootSegments: ['.agent'],
installStatePathSegments: ['ecc-install-state.json'], installStatePathSegments: ['ecc-install-state.json'],
supportsModule(module) {
const paths = Array.isArray(module && module.paths) ? module.paths : [];
return paths.length > 0;
},
planOperations(input, adapter) { planOperations(input, adapter) {
const modules = Array.isArray(input.modules) const modules = Array.isArray(input.modules)
? input.modules ? input.modules
@@ -30,7 +44,9 @@ module.exports = createInstallTargetAdapter({
return modules.flatMap(module => { return modules.flatMap(module => {
const paths = Array.isArray(module.paths) ? module.paths : []; const paths = Array.isArray(module.paths) ? module.paths : [];
return paths.flatMap(sourceRelativePath => { return paths
.filter(supportsAntigravitySourcePath)
.flatMap(sourceRelativePath => {
if (sourceRelativePath === 'rules') { if (sourceRelativePath === 'rules') {
return createFlatRuleOperations({ return createFlatRuleOperations({
moduleId: module.id, moduleId: module.id,
@@ -62,8 +78,8 @@ module.exports = createInstallTargetAdapter({
]; ];
} }
return [adapter.createScaffoldOperation(module.id, sourceRelativePath, planningInput)]; return [adapter.createScaffoldOperation(module.id, sourceRelativePath, planningInput)];
}); });
}); });
}, },
}); });
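For orientation, the source-path filter above accepts or rejects paths roughly as follows; the paths are examples, and exact behavior depends on normalizeRelativePath:

```js
// Illustrative inputs for supportsAntigravitySourcePath defined above.
supportsAntigravitySourcePath('skills/content-engine/SKILL.md'); // true  (under 'skills/')
supportsAntigravitySourcePath('rules');                          // true  (exact prefix match)
supportsAntigravitySourcePath('AGENTS.md');                      // true
supportsAntigravitySourcePath('scripts/install.js');             // false (not a supported prefix)
```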
@@ -276,6 +276,13 @@ function createInstallTargetAdapter(config) {
input input
)); ));
}, },
supportsModule(module, input = {}) {
if (typeof config.supportsModule === 'function') {
return config.supportsModule(module, input, adapter);
}
return true;
},
validate(input = {}) { validate(input = {}) {
if (typeof config.validate === 'function') { if (typeof config.validate === 'function') {
return config.validate(input, adapter); return config.validate(input, adapter);
@@ -20,17 +20,70 @@ function readJsonObject(filePath, label) {
return parsed; return parsed;
} }
function buildLegacyHookSignature(entry) {
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
return null;
}
if (typeof entry.matcher !== 'string' || !Array.isArray(entry.hooks)) {
return null;
}
const hookSignature = entry.hooks.map(hook => JSON.stringify({
type: hook && typeof hook === 'object' ? hook.type : undefined,
command: hook && typeof hook === 'object' ? hook.command : undefined,
timeout: hook && typeof hook === 'object' ? hook.timeout : undefined,
async: hook && typeof hook === 'object' ? hook.async : undefined,
}));
return JSON.stringify({
matcher: entry.matcher,
hooks: hookSignature,
});
}
function getHookEntryAliases(entry) {
const aliases = [];
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
return aliases;
}
if (typeof entry.id === 'string' && entry.id.trim().length > 0) {
aliases.push(`id:${entry.id.trim()}`);
}
const legacySignature = buildLegacyHookSignature(entry);
if (legacySignature) {
aliases.push(`legacy:${legacySignature}`);
}
aliases.push(`json:${JSON.stringify(entry)}`);
return aliases;
}
function mergeHookEntries(existingEntries, incomingEntries) { function mergeHookEntries(existingEntries, incomingEntries) {
const mergedEntries = []; const mergedEntries = [];
const seenEntries = new Set(); const seenEntries = new Set();
for (const entry of [...existingEntries, ...incomingEntries]) { for (const entry of [...existingEntries, ...incomingEntries]) {
const entryKey = JSON.stringify(entry); if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
if (seenEntries.has(entryKey)) {
continue; continue;
} }
seenEntries.add(entryKey); if ('id' in entry && typeof entry.id !== 'string') {
continue;
}
const aliases = getHookEntryAliases(entry);
if (aliases.some(alias => seenEntries.has(alias))) {
continue;
}
for (const alias of aliases) {
seenEntries.add(alias);
}
mergedEntries.push(entry); mergedEntries.push(entry);
} }
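A small sketch of the semantic dedupe in practice: an older entry without an id and a managed copy of the same hook collapse into one merged entry. The matcher and command strings are hypothetical:

```js
// Uses mergeHookEntries defined above; entry contents are placeholders.
const existingEntries = [
  {
    matcher: 'Write|Edit',
    hooks: [{ type: 'command', command: 'node scripts/hooks/format.js', timeout: 30 }],
  },
];

const incomingEntries = [
  {
    id: 'ecc-format-on-write',
    matcher: 'Write|Edit',
    hooks: [{ type: 'command', command: 'node scripts/hooks/format.js', timeout: 30 }],
  },
];

const merged = mergeHookEntries(existingEntries, incomingEntries);
// merged.length === 1: the incoming managed copy is recognized as the same hook
// by its legacy signature, so it is skipped instead of being appended again.
```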
@@ -6,7 +6,7 @@ origin: ECC
# Article Writing # Article Writing
Write long-form content that sounds like a real person or brand, not generic AI output. Write long-form content that sounds like an actual person with a point of view, not an LLM smoothing itself into paste.
## When to Activate ## When to Activate
@@ -17,69 +17,90 @@ Write long-form content that sounds like a real person or brand, not generic AI
## Core Rules ## Core Rules
1. Lead with the concrete thing: example, output, anecdote, number, screenshot description, or code block. 1. Lead with the concrete thing: artifact, example, output, anecdote, number, screenshot, or code.
2. Explain after the example, not before. 2. Explain after the example, not before.
3. Prefer short, direct sentences over padded ones. 3. Keep sentences tight unless the source voice is intentionally expansive.
4. Use specific numbers when available and sourced. 4. Use proof instead of adjectives.
5. Never invent biographical facts, company metrics, or customer evidence. 5. Never invent facts, credibility, or customer evidence.
## Voice Capture Workflow ## Voice Capture Workflow
If the user wants a specific voice, collect one or more of: If the user wants a specific voice, collect one or more of:
- published articles - published articles
- newsletters - newsletters
- X / LinkedIn posts - X posts or threads
- docs or memos - docs or memos
- a short style guide - launch notes
- a style guide
Then extract: Then extract:
- sentence length and rhythm - sentence length and rhythm
- whether the voice is formal, conversational, or sharp - whether the writing is compressed, explanatory, sharp, or formal
- favored rhetorical devices such as parentheses, lists, fragments, or questions - how parentheses are used
- tolerance for humor, opinion, and contrarian framing - how often the writer asks questions
- formatting habits such as headers, bullets, code blocks, and pull quotes - whether the writer uses fragments, lists, or hard pivots
- formatting habits such as headers, bullets, code blocks, pull quotes
- what the writer clearly avoids
If no voice references are given, default to a direct, operator-style voice: concrete, practical, and low on hype. If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.
## Affaan / ECC Voice Reference
When matching Affaan / ECC voice, bias toward:
- direct claims over scene-setting
- high specificity
- parentheticals used for qualification or over-clarification, not comedy
- capitalization chosen situationally, not as a gimmick
- very low tolerance for fake thought-leadership cadence
- almost no bait questions
## Banned Patterns ## Banned Patterns
Delete and rewrite any of these: Delete and rewrite any of these:
- generic openings like "In today's rapidly evolving landscape" - "In today's rapidly evolving landscape"
- filler transitions such as "Moreover" and "Furthermore" - "game-changer", "cutting-edge", "revolutionary"
- hype phrases like "game-changer", "cutting-edge", or "revolutionary" - "no fluff"
- vague claims without evidence - "not X, just Y"
- biography or credibility claims not backed by provided context - "here's why this matters" as a standalone bridge
- fake vulnerability arcs
- a closing question added only to juice engagement
- forced lowercase
- corny parenthetical asides
- biography padding that does not move the argument
## Writing Process ## Writing Process
1. Clarify the audience and purpose. 1. Clarify the audience and purpose.
2. Build a skeletal outline with one purpose per section. 2. Build a hard outline with one job per section.
3. Start each section with evidence, example, or scene. 3. Start sections with proof, artifact, conflict, or example.
4. Expand only where the next sentence earns its place. 4. Expand only where the next sentence earns space.
5. Remove anything that sounds templated or self-congratulatory. 5. Cut anything that sounds templated, overexplained, or self-congratulatory.
## Structure Guidance ## Structure Guidance
### Technical Guides ### Technical Guides
- open with what the reader gets
- use code or terminal examples in every major section
- end with concrete takeaways, not a soft summary
### Essays / Opinion Pieces - open with what the reader gets
- start with tension, contradiction, or a sharp observation - use code, commands, screenshots, or concrete output in major sections
- end with actionable takeaways, not a soft recap
### Essays / Opinion
- start with tension, contradiction, or a specific observation
- keep one argument thread per section - keep one argument thread per section
- use examples that earn the opinion - make opinions answer to evidence
### Newsletters ### Newsletters
- keep the first screen strong
- mix insight with updates, not diary filler - keep the first screen doing real work
- use clear section labels and easy skim structure - do not front-load diary filler
- use section labels only when they improve scanability
## Quality Gate ## Quality Gate
Before delivering: Before delivering:
- verify factual claims against provided sources - factual claims are backed by provided sources
- remove filler and corporate language - generic AI transitions are gone
- confirm the voice matches the supplied examples - the voice matches the supplied examples
- ensure every section adds new information - every section adds something new
- check formatting for the intended platform - formatting matches the intended medium
@@ -140,7 +140,7 @@ For each selected category, print the full list of skills below and ask the user
| Skill | Description | | Skill | Description |
|-------|-------------| |-------|-------------|
| `continuous-learning` | Auto-extract reusable patterns from sessions as learned skills | | `continuous-learning` | Auto-extract reusable patterns from sessions as learned skills |
| `continuous-learning-v2` | Instinct-based learning with confidence scoring, evolves into skills/commands/agents | | `continuous-learning-v2` | Instinct-based learning with confidence scoring, evolves into skills, agents, and optional legacy command shims |
| `eval-harness` | Formal evaluation framework for eval-driven development (EDD) | | `eval-harness` | Formal evaluation framework for eval-driven development (EDD) |
| `iterative-retrieval` | Progressive context refinement for subagent context problem | | `iterative-retrieval` | Progressive context refinement for subagent context problem |
| `security-review` | Security checklist: auth, input, secrets, API, payment features | | `security-review` | Security checklist: auth, input, secrets, API, payment features |
@@ -6,83 +6,145 @@ origin: ECC
# Content Engine # Content Engine
Turn one idea into strong, platform-native content instead of posting the same thing everywhere. Build platform-native content without flattening the author's real voice into platform slop.
## When to Activate ## When to Activate
- writing X posts or threads - writing X posts or threads
- drafting LinkedIn posts or launch updates - drafting LinkedIn posts or launch updates
- scripting short-form video or YouTube explainers - scripting short-form video or YouTube explainers
- repurposing articles, podcasts, demos, or docs into social content - repurposing articles, podcasts, demos, docs, or internal notes into public content
- building a lightweight content plan around a launch, milestone, or theme - building a launch sequence or ongoing content system around a product, insight, or narrative
## First Questions ## Non-Negotiables
Clarify: 1. Start from source material, not generic post formulas.
- source asset: what are we adapting from 2. Adapt the format for the platform, not the persona.
- audience: builders, investors, customers, operators, or general audience 3. One post should carry one actual claim.
- platform: X, LinkedIn, TikTok, YouTube, newsletter, or multi-platform 4. Specificity beats adjectives.
- goal: awareness, conversion, recruiting, authority, launch support, or engagement 5. No engagement bait unless the user explicitly asks for it.
## Core Rules ## Source-First Workflow
1. Adapt for the platform. Do not cross-post the same copy. Before drafting, identify the source set:
2. Hooks matter more than summaries. - published articles
3. Every post should carry one clear idea. - notes or internal memos
4. Use specifics over slogans. - product demos
5. Keep the ask small and clear. - docs or changelogs
- transcripts
- screenshots
- prior posts from the same author
## Platform Guidance If the user wants a specific voice, build a voice profile from real examples before writing.
## Voice Capture Workflow
Collect 5 to 20 examples when available. Good sources:
- articles or essays
- X posts or threads
- docs or release notes
- newsletters
- previous launch posts
If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.
Extract and write down:
- sentence length and rhythm
- how compressed or explanatory the writing is
- whether capitalization is conventional, mixed, or situational
- how parentheses are used
- whether the writer uses fragments, lists, or abrupt pivots
- how often the writer asks questions
- how sharp, formal, opinionated, or dry the voice is
- what the writer never does
Do not start drafting until the voice profile is clear enough to enforce.
## Affaan / ECC Voice Reference
When the user wants Affaan / ECC voice specifically, default to this unless newer examples clearly override it:
- direct, compressed, concrete
- strong preference for specific claims, numbers, mechanisms, and receipts
- parentheticals used to qualify, narrow, or over-clarify, not to do corny bits
- lowercase is optional, not mandatory
- questions are rare and should not be added as bait
- transitions should feel earned, not polished
- tone can be sharp or blunt, but should not sound like a content marketer
## Hard Bans
Delete and rewrite any of these:
- "In today's rapidly evolving landscape"
- "game-changer", "revolutionary", "cutting-edge"
- "no fluff"
- "not X, just Y"
- "here's why this matters" unless it is followed immediately by something concrete
- "Excited to share"
- fake curiosity gaps
- ending with a LinkedIn-style question just to farm replies
- forced lowercase when the source voice does not call for it
- forced casualness on LinkedIn
- parenthetical jokes that were not present in the source voice
## Platform Adaptation Rules
### X ### X
- open fast
- one idea per post or per tweet in a thread - open with the strongest claim, artifact, or tension
- keep links out of the main body unless necessary - keep the compression if the source voice is compressed
- avoid hashtag spam - if writing a thread, each post must advance the argument
- do not pad with context the audience does not need
### LinkedIn ### LinkedIn
- strong first line
- short paragraphs
- more explicit framing around lessons, results, and takeaways
### TikTok / Short Video - expand only enough for people outside the immediate niche to follow
- first 3 seconds must interrupt attention - do not turn it into a fake lesson post unless the source material actually is reflective
- script around visuals, not just narration - no corporate inspiration cadence
- one demo, one claim, one CTA - no praise-stacking, no "journey" filler
### Short Video
- script around the visual sequence and proof points
- first seconds should show the result, problem, or punch
- do not write narration that sounds better on paper than on screen
### YouTube ### YouTube
- show the result early
- structure by chapter - show the result or tension early
- refresh the visual every 20-30 seconds - organize by argument or progression, not filler sections
- use chaptering only when it helps clarity
### Newsletter ### Newsletter
- deliver one clear lens, not a bundle of unrelated items
- make section titles skimmable - open with the point, conflict, or artifact
- keep the opening paragraph doing real work - do not spend the first paragraph warming up
- every section needs to add something new
## Repurposing Flow ## Repurposing Flow
Default cascade: 1. Pick the anchor asset.
1. anchor asset: article, video, demo, memo, or launch doc 2. Extract 3 to 7 atomic claims or scenes.
2. extract 3-7 atomic ideas 3. Rank them by sharpness, novelty, and proof.
3. write platform-native variants 4. Assign one strong idea per output.
4. trim repetition across outputs 5. Adapt structure for each platform.
5. align CTAs with platform intent 6. Strip platform-shaped filler.
7. Run the quality gate.
## Deliverables ## Deliverables
When asked for a campaign, return: When asked for a campaign, return:
- a short voice profile if voice matching matters
- the core angle - the core angle
- platform-specific drafts - platform-native drafts
- optional posting order - posting order only if it helps execution
- optional CTA variants - gaps that must be filled before publishing
- any missing inputs needed before publishing
## Quality Gate ## Quality Gate
Before delivering: Before delivering:
- each draft reads natively for its platform - every draft sounds like the intended author, not the platform stereotype
- hooks are strong and specific - every draft contains a real claim, proof point, or concrete observation
- no generic hype language - no generic hype language remains
- no fake engagement bait remains
- no duplicated copy across platforms unless requested - no duplicated copy across platforms unless requested
- the CTA matches the content and audience - any CTA is earned and user-approved
@@ -6,185 +6,109 @@ origin: ECC
# Crosspost # Crosspost
Distribute content across multiple social platforms with platform-native adaptation. Distribute content across platforms without turning it into the same fake post in four costumes.
## When to Activate ## When to Activate
- User wants to post content to multiple platforms - the user wants to publish the same underlying idea across multiple platforms
- Publishing announcements, launches, or updates across social media - a launch, update, release, or essay needs platform-specific versions
- Repurposing a post from one platform to others - the user says "crosspost", "post this everywhere", or "adapt this for X and LinkedIn"
- User says "crosspost", "post everywhere", "share on all platforms", or "distribute this"
## Core Rules ## Core Rules
1. **Never post identical content cross-platform.** Each platform gets a native adaptation. 1. Do not publish identical copy across platforms.
2. **Primary platform first.** Post to the main platform, then adapt for others. 2. Preserve the author's voice across platforms.
3. **Respect platform conventions.** Length limits, formatting, link handling all differ. 3. Adapt for constraints, not stereotypes.
4. **One idea per post.** If the source content has multiple ideas, split across posts. 4. One post should still be about one thing.
5. **Attribution matters.** If crossposting someone else's content, credit the source. 5. Do not invent a CTA, question, or moral if the source did not earn one.
## Platform Specifications
| Platform | Max Length | Link Handling | Hashtags | Media |
|----------|-----------|---------------|----------|-------|
| X | 280 chars (4000 for Premium) | Counted in length | Minimal (1-2 max) | Images, video, GIFs |
| LinkedIn | 3000 chars | Not counted in length | 3-5 relevant | Images, video, docs, carousels |
| Threads | 500 chars | Separate link attachment | None typical | Images, video |
| Bluesky | 300 chars | Via facets (rich text) | None (use feeds) | Images |
## Workflow ## Workflow
### Step 1: Create Source Content ### Step 1: Start with the Primary Version
Start with the core idea. Use `content-engine` skill for high-quality drafts: Pick the strongest source version first:
- Identify the single core message - the original X post
- Determine the primary platform (where the audience is biggest) - the original article
- Draft the primary platform version first - the launch note
- the thread
- the memo or changelog
### Step 2: Identify Target Platforms Use `content-engine` first if the source still needs voice shaping.
Ask the user or determine from context: ### Step 2: Capture the Voice Fingerprint
- Which platforms to target
- Priority order (primary gets the best version)
- Any platform-specific requirements (e.g., LinkedIn needs professional tone)
### Step 3: Adapt Per Platform Before adapting, note:
- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
- whether the source uses parentheses
- whether the source avoids questions, hashtags, or CTA language
For each target platform, transform the content: The adaptation should preserve that fingerprint.
**X adaptation:** ### Step 3: Adapt by Platform Constraint
- Open with a hook, not a summary
- Cut to the core insight fast
- Keep links out of main body when possible
- Use thread format for longer content
**LinkedIn adaptation:** ### X
- Strong first line (visible before "see more")
- Short paragraphs with line breaks
- Frame around lessons, results, or professional takeaways
- More explicit context than X (LinkedIn audience needs framing)
**Threads adaptation:** - keep it compressed
- Conversational, casual tone - lead with the sharpest claim or artifact
- Shorter than LinkedIn, less compressed than X - use a thread only when a single post would collapse the argument
- Visual-first if possible - avoid hashtags and generic filler
**Bluesky adaptation:** ### LinkedIn
- Direct and concise (300 char limit)
- Community-oriented tone
- Use feeds/lists for topic targeting instead of hashtags
### Step 4: Post Primary Platform - add only the context needed for people outside the niche
- do not turn it into a fake founder-reflection post
- do not add a closing question just because it is LinkedIn
- do not force a polished "professional tone" if the author is naturally sharper
Post to the primary platform first: ### Threads
- Use `x-api` skill for X
- Use platform-specific APIs or tools for others
- Capture the post URL for cross-referencing
### Step 5: Post to Secondary Platforms - keep it readable and direct
- do not write fake hyper-casual creator copy
- do not paste the LinkedIn version and shorten it
Post adapted versions to remaining platforms: ### Bluesky
- Stagger timing (not all at once — 30-60 min gaps)
- Include cross-platform references where appropriate ("longer thread on X" etc.)
## Content Adaptation Examples - keep it concise
- preserve the author's cadence
- do not rely on hashtags or feed-gaming language
### Source: Product Launch ## Posting Order
**X version:** Default:
``` 1. post the strongest native version first
We just shipped [feature]. 2. adapt for the secondary platforms
3. stagger timing only if the user wants sequencing help
[One specific thing it does that's impressive] Do not add cross-platform references unless useful. Most of the time, the post should stand on its own.
[Link] ## Banned Patterns
```
**LinkedIn version:** Delete and rewrite any of these:
``` - "Excited to share"
Excited to share: we just launched [feature] at [Company]. - "Here's what I learned"
- "What do you think?"
- "link in bio" unless that is literally true
- "not X, just Y"
- generic "professional takeaway" paragraphs that were not in the source
Here's why it matters: ## Output Format
[2-3 short paragraphs with context] Return:
- the primary platform version
[Takeaway for the audience] - adapted variants for each requested platform
- a short note on what changed and why
[Link] - any publishing constraint the user still needs to resolve
```
**Threads version:**
```
just shipped something cool — [feature]
[casual explanation of what it does]
link in bio
```
### Source: Technical Insight
**X version:**
```
TIL: [specific technical insight]
[Why it matters in one sentence]
```
**LinkedIn version:**
```
A pattern I've been using that's made a real difference:
[Technical insight with professional framing]
[How it applies to teams/orgs]
#relevantHashtag
```
## API Integration
### Batch Crossposting Service (Example Pattern)
If using a crossposting service (e.g., Postbridge, Buffer, or a custom API), the pattern looks like:
```python
import os
import requests
resp = requests.post(
"https://your-crosspost-service.example/api/posts",
headers={"Authorization": f"Bearer {os.environ['POSTBRIDGE_API_KEY']}"},
json={
"platforms": ["twitter", "linkedin", "threads"],
"content": {
"twitter": {"text": x_version},
"linkedin": {"text": linkedin_version},
"threads": {"text": threads_version}
}
},
timeout=30,
)
resp.raise_for_status()
```
### Manual Posting
Without Postbridge, post to each platform using its native API:
- X: Use `x-api` skill patterns
- LinkedIn: LinkedIn API v2 with OAuth 2.0
- Threads: Threads API (Meta)
- Bluesky: AT Protocol API
## Quality Gate ## Quality Gate
Before posting: Before delivering:
- [ ] Each platform version reads naturally for that platform - each version reads like the same author under different constraints
- [ ] No identical content across platforms - no platform version feels padded or sanitized
- [ ] Length limits respected - no copy is duplicated verbatim across platforms
- [ ] Links work and are placed appropriately - any extra context added for LinkedIn or newsletter use is actually necessary
- [ ] Tone matches platform conventions
- [ ] Media is sized correctly for each platform
## Related Skills ## Related Skills
- `content-engine` — Generate platform-native content - `content-engine` for voice capture and source shaping
- `x-api` — X/Twitter API integration - `x-api` for X publishing workflows
@@ -0,0 +1,140 @@
---
name: customer-billing-ops
description: Operate customer billing workflows such as subscriptions, refunds, churn triage, billing-portal recovery, and plan analysis using connected billing tools like Stripe. Use when the user needs to help a customer, inspect subscription state, or manage revenue-impacting billing operations.
origin: ECC
---
# Customer Billing Ops
Use this skill for real customer operations, not generic payment API design.
The goal is to help the operator answer: who is this customer, what happened, what is the safest fix, and what follow-up should we send?
## When to Use
- Customer says billing is broken, they want a refund, or they cannot cancel
- Investigating duplicate subscriptions, accidental charges, failed renewals, or churn risk
- Reviewing plan mix, active subscriptions, yearly vs monthly conversion, or team-seat confusion
- Creating or validating a billing portal flow
- Auditing support complaints that touch subscriptions, invoices, refunds, or payment methods
## Preferred Tool Surface
- Use connected billing tools such as Stripe first
- Use email, GitHub, or issue trackers only as supporting evidence
- Prefer hosted billing/customer portals over custom account-management code when the platform already provides the needed controls
## Guardrails
- Never expose secret keys, full card details, or unnecessary customer PII in the response
- Do not refund blindly; first classify the issue
- Distinguish among:
- accidental duplicate purchase
- deliberate multi-seat or team purchase
- broken product / unmet value
- failed or incomplete checkout
- cancellation due to missing self-serve controls
- For annual plans, team plans, and prorated states, verify the contract shape before taking action
## Workflow
### 1. Identify the customer cleanly
Start from the strongest identifier available:
- customer email
- Stripe customer ID
- subscription ID
- invoice ID
- GitHub username or support email if it is known to map back to billing
Return a concise identity summary:
- customer
- active subscriptions
- canceled subscriptions
- invoices
- obvious anomalies such as duplicate active subscriptions
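If the lookup has to happen programmatically rather than through a connected Stripe tool, a minimal sketch with the official stripe Node library might look like this; the email, limits, and field selection are placeholders:

```js
// Minimal identity lookup: customer by email, then subscriptions and invoices.
// Requires STRIPE_SECRET_KEY in the environment; never echo the key itself.
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

async function summarizeCustomer(email) {
  const { data: customers } = await stripe.customers.list({ email, limit: 5 });
  if (customers.length === 0) return null;

  const customer = customers[0];
  const subscriptions = await stripe.subscriptions.list({
    customer: customer.id,
    status: 'all', // include canceled subs so duplicates and churn show up
    limit: 20,
  });
  const invoices = await stripe.invoices.list({ customer: customer.id, limit: 10 });

  return {
    customer: { id: customer.id, email: customer.email },
    active: subscriptions.data.filter(sub => sub.status === 'active'),
    canceled: subscriptions.data.filter(sub => sub.status === 'canceled'),
    invoices: invoices.data.map(inv => ({ id: inv.id, status: inv.status, total: inv.total })),
  };
}
```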
### 2. Classify the issue
Put the case into one bucket before acting:
| Case | Typical action |
|------|----------------|
| Duplicate personal subscription | cancel extras, consider refund |
| Real multi-seat/team intent | preserve seats, clarify billing model |
| Failed payment / incomplete checkout | recover via portal or update payment method |
| Missing self-serve controls | provide portal, cancellation path, or invoice access |
| Product failure or trust break | refund, apologize, log product issue |
### 3. Take the safest reversible action first
Preferred order:
1. restore self-serve management
2. fix duplicate or broken billing state
3. refund only the affected charge or duplicate
4. document the reason
5. send a short customer follow-up
If the fix requires product work, separate:
- customer remediation now
- product bug / workflow gap for backlog
### 4. Check operator-side product gaps
If the customer pain comes from a missing operator surface, call it out explicitly. Common examples:
- no billing portal
- no usage/rate-limit visibility
- no plan/seat explanation
- no cancellation flow
- no duplicate-subscription guard
Treat those as ECC or website follow-up items, not just support incidents.
### 5. Produce the operator handoff
End with:
- customer state summary
- action taken
- revenue impact
- follow-up text to send
- product or backlog issue to create
## Output Format
Use this structure:
```text
CUSTOMER
- name / email
- relevant account identifiers
BILLING STATE
- active subscriptions
- invoice or renewal state
- anomalies
DECISION
- issue classification
- why this action is correct
ACTION TAKEN
- refund / cancel / portal / no-op
FOLLOW-UP
- short customer message
PRODUCT GAP
- what should be fixed in the product or website
```
## Examples of Good Recommendations
- "The right fix is a billing portal, not a custom dashboard yet"
- "This looks like duplicate personal checkout, not a real team-seat purchase"
- "Refund one duplicate charge, keep the remaining active subscription, then convert the customer to org billing later if needed"
@@ -48,14 +48,14 @@ This is the same dynamic as GANs (Generative Adversarial Networks): the Generato
│ FEEDBACK LOOP │ │ FEEDBACK LOOP │
│ │ │ │
│ ┌──────────┐ │ │ ┌──────────┐ │
│ │GENERATOR │──build──▶│──┐ │ │GENERATOR │--build-->│──┐
│ │(Opus 4.6)│ │ │ │ │(Opus 4.6)│ │ │
│ └────▲─────┘ │ │ │ └────▲─────┘ │ │
│ │ │ │ live app │ │ │ │ live app
│ feedback │ │ │ feedback │ │
│ │ │ │ │ │ │ │
│ ┌────┴─────┐ │ │ │ ┌────┴─────┐ │ │
│ │EVALUATOR │◀─test───│──┘ │ │EVALUATOR │<-test----│──┘
│ │(Opus 4.6)│ │ │ │(Opus 4.6)│ │
│ │+Playwright│ │ │ │+Playwright│ │
│ └──────────┘ │ │ └──────────┘ │
@@ -0,0 +1,95 @@
---
name: google-workspace-ops
description: Operate across Google Drive, Docs, Sheets, and Slides as one workflow surface for plans, trackers, decks, and shared documents. Use when the user needs to find, summarize, edit, migrate, or clean up Google Workspace assets without dropping to raw tool calls.
origin: ECC
---
# Google Workspace Ops
This skill is for operating shared docs, spreadsheets, and decks as working systems, not just editing one file in isolation.
## When to Use
- User needs to find a doc, sheet, or deck and update it in place
- Consolidating plans, trackers, notes, or customer lists stored in Google Drive
- Cleaning or restructuring a shared spreadsheet
- Importing, repairing, or reformatting a Google Slides deck
- Producing summaries from Docs, Sheets, or Slides for decision-making
## Preferred Tool Surface
Use Google Drive as the entry point, then switch to the right specialist:
- Google Docs for text-heavy docs
- Google Sheets for tabular work, formulas, and charts
- Google Slides for decks, imports, template migration, and cleanup
Do not guess structure from filenames alone. Inspect first.
## Workflow
### 1. Find the asset
Start with the Drive search surface to locate:
- the exact file
- sibling assets
- likely duplicates
- recently modified versions
If several documents look similar, confirm by title, owner, modified time, or folder.
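If the search runs against the Drive API directly rather than a connected tool surface, a minimal sketch might look like this (googleapis assumed, query text illustrative):
```javascript
// Hypothetical sketch: list candidate files plus likely duplicates by name,
// newest first, so title/owner/modifiedTime/folder can be compared before editing.
const { google } = require('googleapis');

async function findAsset(auth, nameFragment) {
  const drive = google.drive({ version: 'v3', auth });
  const res = await drive.files.list({
    q: `name contains '${nameFragment}' and trashed = false`,
    fields: 'files(id, name, mimeType, owners, modifiedTime, parents)',
    orderBy: 'modifiedTime desc',
    pageSize: 20,
  });
  return res.data.files;
}
```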
### 2. Inspect before editing
Before making changes:
- summarize current structure
- identify tabs, headings, or slide count
- detect whether the task is local cleanup or structural surgery
Pick the smallest tool that can safely perform the work.
### 3. Edit with precision
- For Docs: use index-aware edits, not vague rewrites
- For Sheets: operate on explicit tabs and ranges
- For Slides: distinguish content edits from visual cleanup or template migration
If the requested work is visual or layout-sensitive, iterate with inspection and verification instead of one giant blind update.
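As one hedged example of "explicit tabs and ranges", a raw Sheets API update (googleapis assumed; the spreadsheet id, tab name, and values are placeholders) could look like:
```javascript
// Hypothetical sketch: write to a named tab and a bounded range instead of
// "the sheet". Nothing outside the range is touched.
const { google } = require('googleapis');

async function updateTrackerRows(auth) {
  const sheets = google.sheets({ version: 'v4', auth });
  await sheets.spreadsheets.values.update({
    spreadsheetId: 'SPREADSHEET_ID',   // placeholder
    range: 'Churn Tracker!A2:C4',      // explicit tab + range
    valueInputOption: 'USER_ENTERED',
    requestBody: {
      values: [
        ['Acme Co', 'at-risk', '2026-04-01'],
      ],
    },
  });
}
```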
### 4. Keep the working system clean
When the file is part of a larger workflow, also surface:
- duplicate trackers
- outdated decks
- stale docs vs canonical docs
- whether the asset should be archived, merged, or renamed
## Output Format
Use:
```text
ASSET
- file name
- type
- why this is the right file
CURRENT STATE
- structure summary
- key problems or blockers
ACTION
- edits made or recommended
FOLLOW-UPS
- archive / merge / duplicate cleanup / next file to update
```
## Good Use Cases
- "Find the active planning doc and condense it"
- "Clean up this customer spreadsheet and show me the churn-risk rows"
- "Import this deck into Slides and make it presentable"
- "Find the current tracker, not the stale duplicate"

View File

@@ -6,7 +6,7 @@ origin: ECC
# Investor Outreach # Investor Outreach
Write investor communication that is short, personalized, and easy to act on. Write investor communication that is short, concrete, and easy to act on.
## When to Activate ## When to Activate
@@ -20,17 +20,28 @@ Write investor communication that is short, personalized, and easy to act on.
1. Personalize every outbound message. 1. Personalize every outbound message.
2. Keep the ask low-friction. 2. Keep the ask low-friction.
3. Use proof, not adjectives. 3. Use proof instead of adjectives.
4. Stay concise. 4. Stay concise.
5. Never send generic copy that could go to any investor. 5. Never send copy that could go to any investor.
## Hard Bans
Delete and rewrite any of these:
- "I'd love to connect"
- "excited to share"
- generic thesis praise without a real tie-in
- vague founder adjectives
- "no fluff"
- begging language
- soft closing questions when a direct ask is clearer
## Cold Email Structure ## Cold Email Structure
1. subject line: short and specific 1. subject line: short and specific
2. opener: why this investor specifically 2. opener: why this investor specifically
3. pitch: what the company does, why now, what proof matters 3. pitch: what the company does, why now, and what proof matters
4. ask: one concrete next step 4. ask: one concrete next step
5. sign-off: name, role, one credibility anchor if needed 5. sign-off: name, role, and one credibility anchor if needed
## Personalization Sources ## Personalization Sources
@@ -40,14 +51,14 @@ Reference one or more of:
- a mutual connection - a mutual connection
- a clear market or product fit with the investor's focus - a clear market or product fit with the investor's focus
If that context is missing, ask for it or state that the draft is a template awaiting personalization. If that context is missing, state that the draft still needs personalization instead of pretending it is finished.
## Follow-Up Cadence ## Follow-Up Cadence
Default: Default:
- day 0: initial outbound - day 0: initial outbound
- day 4-5: short follow-up with one new data point - day 4 or 5: short follow-up with one new data point
- day 10-12: final follow-up with a clean close - day 10 to 12: final follow-up with a clean close
Do not keep nudging after that unless the user wants a longer sequence. Do not keep nudging after that unless the user wants a longer sequence.
@@ -69,8 +80,8 @@ Include:
## Quality Gate ## Quality Gate
Before delivering: Before delivering:
- message is personalized - the message is genuinely personalized
- the ask is explicit - the ask is explicit
- there is no fluff or begging language
- the proof point is concrete - the proof point is concrete
- filler praise and softener language are gone
- word count stays tight - word count stays tight

View File

@@ -0,0 +1,111 @@
---
name: project-flow-ops
description: Operate execution flow across GitHub and Linear by triaging issues and pull requests, linking active work, and keeping GitHub public-facing while Linear remains the internal execution layer. Use when the user wants backlog control, PR triage, or GitHub-to-Linear coordination.
origin: ECC
---
# Project Flow Ops
This skill turns disconnected GitHub issues, PRs, and Linear tasks into one execution flow.
Use it when the problem is coordination, not coding.
## When to Use
- Triage open PR or issue backlogs
- Decide what belongs in Linear vs what should remain GitHub-only
- Link active GitHub work to internal execution lanes
- Classify PRs into merge, port/rebuild, close, or park
- Audit whether review comments, CI failures, or stale issues are blocking execution
## Operating Model
- **GitHub** is the public and community truth
- **Linear** is the internal execution truth for active scheduled work
- Not every GitHub issue needs a Linear issue
- Create or update Linear only when the work is:
- active
- delegated
- scheduled
- cross-functional
- important enough to track internally
## Core Workflow
### 1. Read the public surface first
Gather the following (sketched below):
- GitHub issue or PR state
- author and branch status
- review comments
- CI status
- linked issues
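Assuming Octokit is the bridge rather than a connected GitHub surface, a minimal sketch of that read (owner, repo, and PR number are placeholders) might be:
```javascript
// Hypothetical sketch: pull the public state of one PR before classifying it.
const { Octokit } = require('@octokit/rest');

async function readPublicSurface(owner, repo, pullNumber) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const { data: pr } = await octokit.rest.pulls.get({ owner, repo, pull_number: pullNumber });
  const { data: reviews } = await octokit.rest.pulls.listReviews({ owner, repo, pull_number: pullNumber });
  const { data: checks } = await octokit.rest.checks.listForRef({ owner, repo, ref: pr.head.sha });
  return {
    state: pr.state,
    branch: pr.head.ref,
    author: pr.user ? pr.user.login : null,
    reviewStates: reviews.map(review => review.state),
    ciConclusions: checks.check_runs.map(run => run.conclusion),
  };
}
```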
### 2. Classify the work
Every item should end up in one of these states:
| State | Meaning |
|-------|---------|
| Merge | self-contained, policy-compliant, ready |
| Port/Rebuild | useful idea, but should be manually re-landed inside ECC |
| Close | wrong direction, stale, unsafe, or duplicated |
| Park | potentially useful, but not scheduled now |
### 3. Decide whether Linear is warranted
Create or update Linear only if:
- execution is actively planned
- multiple repos or workstreams are involved
- the work needs internal ownership or sequencing
- the issue is part of a larger program lane
Do not mirror everything mechanically.
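When Linear is warranted, the write path can stay small. A sketch assuming the Linear SDK is the bridge (the team id is a placeholder):
```javascript
// Hypothetical sketch: create one internal execution item for work that is
// active, owned, and scheduled - not a mechanical mirror of every GitHub issue.
const { LinearClient } = require('@linear/sdk');

async function trackInternally(title, description) {
  const linear = new LinearClient({ apiKey: process.env.LINEAR_API_KEY });
  await linear.createIssue({
    teamId: 'TEAM_ID_PLACEHOLDER',
    title,
    description, // include the GitHub URL so both systems stay consistent
  });
}
```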
### 4. Keep the two systems consistent
When work is active:
- GitHub issue/PR should say what is happening publicly
- Linear should track owner, priority, and execution lane internally
When work ships or is rejected:
- post the public resolution back to GitHub
- mark the Linear task accordingly
## Review Rules
- Never merge from title, summary, or trust alone; use the full diff
- External-source features should be rebuilt inside ECC when they are valuable but not self-contained
- CI red means classify the failure and either fix it or block; do not pretend the PR is merge-ready
- If the real blocker is product direction, say so instead of hiding behind tooling
## Output Format
Return:
```text
PUBLIC STATUS
- issue / PR state
- CI / review state
CLASSIFICATION
- merge / port-rebuild / close / park
- one-paragraph rationale
LINEAR ACTION
- create / update / no Linear item needed
- project / lane if applicable
NEXT OPERATOR ACTION
- exact next move
```
## Good Use Cases
- "Audit the open PR backlog and tell me what to merge vs rebuild"
- "Map GitHub issues into our ECC 1.x and ECC 2.0 program lanes"
- "Check whether this needs a Linear issue or should stay GitHub-only"

View File

@@ -7,12 +7,12 @@ metadata:
# Using Three.js and React Three Fiber in Remotion # Using Three.js and React Three Fiber in Remotion
Follow React Three Fiber and Three.js best practices. Follow React Three Fiber and Three.js best practices.
Only the following Remotion-specific rules need to be followed: Only the following Remotion-specific rules need to be followed:
## Prerequisites ## Prerequisites
First, the `@remotion/three` package needs to be installed. First, the `@remotion/three` package needs to be installed.
If it is not, use the following command: If it is not, use the following command:
```bash ```bash
@@ -24,7 +24,7 @@ pnpm exec remotion add @remotion/three # If project uses pnpm
## Using ThreeCanvas ## Using ThreeCanvas
You MUST wrap 3D content in `<ThreeCanvas>` and include proper lighting. You MUST wrap 3D content in `<ThreeCanvas>` and include proper lighting.
`<ThreeCanvas>` MUST have a `width` and `height` prop. `<ThreeCanvas>` MUST have a `width` and `height` prop.
```tsx ```tsx
@@ -45,9 +45,9 @@ const { width, height } = useVideoConfig();
## No animations not driven by `useCurrentFrame()` ## No animations not driven by `useCurrentFrame()`
Shaders, models etc MUST NOT animate by themselves. Shaders, models etc MUST NOT animate by themselves.
No animations are allowed unless they are driven by `useCurrentFrame()`. No animations are allowed unless they are driven by `useCurrentFrame()`.
Otherwise, it will cause flickering during rendering. Otherwise, it will cause flickering during rendering.
Using `useFrame()` from `@react-three/fiber` is forbidden. Using `useFrame()` from `@react-three/fiber` is forbidden.

View File

@@ -5,7 +5,7 @@ metadata:
tags: animations, transitions, frames, useCurrentFrame tags: animations, transitions, frames, useCurrentFrame
--- ---
All animations MUST be driven by the `useCurrentFrame()` hook. All animations MUST be driven by the `useCurrentFrame()` hook.
Write animations in seconds and multiply them by the `fps` value from `useVideoConfig()`. Write animations in seconds and multiply them by the `fps` value from `useVideoConfig()`.
```tsx ```tsx
@@ -18,12 +18,12 @@ export const FadeIn = () => {
const opacity = interpolate(frame, [0, 2 * fps], [0, 1], { const opacity = interpolate(frame, [0, 2 * fps], [0, 1], {
extrapolateRight: 'clamp', extrapolateRight: 'clamp',
}); });
return ( return (
<div style={{ opacity }}>Hello World!</div> <div style={{ opacity }}>Hello World!</div>
); );
}; };
``` ```
CSS transitions or animations are FORBIDDEN - they will not render correctly. CSS transitions or animations are FORBIDDEN - they will not render correctly.
Tailwind animation class names are FORBIDDEN - they will not render correctly. Tailwind animation class names are FORBIDDEN - they will not render correctly.

View File

@@ -11,8 +11,8 @@ You can create bar charts in Remotion by using regular React code - HTML and SVG
## No animations not powered by `useCurrentFrame()` ## No animations not powered by `useCurrentFrame()`
Disable all animations by third party libraries. Disable all animations by third party libraries.
They will cause flickering during rendering. They will cause flickering during rendering.
Instead, drive all animations from `useCurrentFrame()`. Instead, drive all animations from `useCurrentFrame()`.
## Bar Chart Animations ## Bar Chart Animations

View File

@@ -29,7 +29,7 @@ export const RemotionRoot = () => {
## Default Props ## Default Props
Pass `defaultProps` to provide initial values for your component. Pass `defaultProps` to provide initial values for your component.
Values must be JSON-serializable (`Date`, `Map`, `Set`, and `staticFile()` are supported). Values must be JSON-serializable (`Date`, `Map`, `Set`, and `staticFile()` are supported).
```tsx ```tsx
@@ -58,7 +58,7 @@ Use `type` declarations for props rather than `interface` to ensure `defaultProp
## Folders ## Folders
Use `<Folder>` to organize compositions in the sidebar. Use `<Folder>` to organize compositions in the sidebar.
Folder names can only contain letters, numbers, and hyphens. Folder names can only contain letters, numbers, and hyphens.
```tsx ```tsx

View File

@@ -9,7 +9,7 @@ metadata:
## Prerequisites ## Prerequisites
First, the @remotion/lottie package needs to be installed. First, the @remotion/lottie package needs to be installed.
If it is not, use the following command: If it is not, use the following command:
```bash ```bash

View File

@@ -20,7 +20,7 @@ const {fps} = useVideoConfig();
</Sequence> </Sequence>
``` ```
This will by default wrap the component in an absolute fill element. This will by default wrap the component in an absolute fill element.
If the items should not be wrapped, use the `layout` prop: If the items should not be wrapped, use the `layout` prop:
```tsx ```tsx
@@ -31,7 +31,7 @@ If the items should not be wrapped, use the `layout` prop:
## Premounting ## Premounting
This loads the component in the timeline before it is actually played. This loads the component in the timeline before it is actually played.
Always premount any `<Sequence>`! Always premount any `<Sequence>`!
```tsx ```tsx

View File

@@ -6,6 +6,6 @@ metadata:
You can and should use TailwindCSS in Remotion, if TailwindCSS is installed in the project. You can and should use TailwindCSS in Remotion, if TailwindCSS is installed in the project.
Don't use `transition-*` or `animate-*` classes - always animate using the `useCurrentFrame()` hook. Don't use `transition-*` or `animate-*` classes - always animate using the `useCurrentFrame()` hook.
Tailwind must be installed and enabled first in a Remotion project - fetch <https://www.remotion.dev/docs/tailwind> using WebFetch for instructions. Tailwind must be installed and enabled first in a Remotion project - fetch <https://www.remotion.dev/docs/tailwind> using WebFetch for instructions.

View File

@@ -13,7 +13,7 @@ import {interpolate} from 'remotion';
const opacity = interpolate(frame, [0, 100], [0, 1]); const opacity = interpolate(frame, [0, 100], [0, 1]);
``` ```
By default, the values are not clamped, so the value can go outside the range [0, 1]. By default, the values are not clamped, so the value can go outside the range [0, 1].
Here is how they can be clamped: Here is how they can be clamped:
```ts title="Going from 0 to 1 over 100 frames with extrapolation" ```ts title="Going from 0 to 1 over 100 frames with extrapolation"
@@ -25,7 +25,7 @@ const opacity = interpolate(frame, [0, 100], [0, 1], {
## Spring animations ## Spring animations
Spring animations have a more natural motion. Spring animations have a more natural motion.
They go from 0 to 1 over time. They go from 0 to 1 over time.
```ts title="Spring animation from 0 to 1 over 100 frames" ```ts title="Spring animation from 0 to 1 over 100 frames"
@@ -42,7 +42,7 @@ const scale = spring({
### Physical properties ### Physical properties
The default configuration is: `mass: 1, damping: 10, stiffness: 100`. The default configuration is: `mass: 1, damping: 10, stiffness: 100`.
This leads to the animation having a bit of bounce before it settles. This leads to the animation having a bit of bounce before it settles.
The config can be overwritten like this: The config can be overwritten like this:
@@ -68,7 +68,7 @@ const heavy = {damping: 15, stiffness: 80, mass: 2}; // Heavy, slow, small bounc
### Delay ### Delay
The animation starts immediately by default. The animation starts immediately by default.
Use the `delay` parameter to delay the animation by a number of frames. Use the `delay` parameter to delay the animation by a number of frames.
```tsx ```tsx
@@ -81,7 +81,7 @@ const entrance = spring({
### Duration ### Duration
A `spring()` has a natural duration based on the physical properties. A `spring()` has a natural duration based on the physical properties.
To stretch the animation to a specific duration, use the `durationInFrames` parameter. To stretch the animation to a specific duration, use the `durationInFrames` parameter.
```tsx ```tsx
@@ -144,7 +144,7 @@ const value1 = interpolate(frame, [0, 100], [0, 1], {
}); });
``` ```
The default easing is `Easing.linear`. The default easing is `Easing.linear`.
There are various other convexities: There are various other convexities:
- `Easing.in` for starting slow and accelerating - `Easing.in` for starting slow and accelerating

View File

@@ -7,12 +7,12 @@ metadata:
## Fullscreen transitions ## Fullscreen transitions
Using `<TransitionSeries>` to animate between multiple scenes or clips. Using `<TransitionSeries>` to animate between multiple scenes or clips.
This will absolutely position the children. This will absolutely position the children.
## Prerequisites ## Prerequisites
First, the @remotion/transitions package needs to be installed. First, the @remotion/transitions package needs to be installed.
If it is not, use the following command: If it is not, use the following command:
```bash ```bash

View File

@@ -9,7 +9,7 @@ metadata:
## Prerequisites ## Prerequisites
First, the @remotion/media package needs to be installed. First, the @remotion/media package needs to be installed.
If it is not, use the following command: If it is not, use the following command:
```bash ```bash

465
skills/ui-demo/SKILL.md Normal file
View File

@@ -0,0 +1,465 @@
---
name: ui-demo
description: Record polished UI demo videos using Playwright. Use when the user asks to create a demo, walkthrough, screen recording, or tutorial video of a web application. Produces WebM videos with visible cursor, natural pacing, and professional feel.
origin: ECC
---
# UI Demo Video Recorder
Record polished demo videos of web applications using Playwright's video recording with an injected cursor overlay, natural pacing, and storytelling flow.
## When to Use
- User asks for a "demo video", "screen recording", "walkthrough", or "tutorial"
- User wants to showcase a feature or workflow visually
- User needs a video for documentation, onboarding, or stakeholder presentation
## Three-Phase Process
Every demo goes through three phases: **Discover -> Rehearse -> Record**. Never skip straight to recording.
---
## Phase 1: Discover
Before writing any script, explore the target pages to understand what is actually there.
### Why
You cannot script what you have not seen. Fields may be `<input>` rather than `<textarea>`, dropdowns may be custom components rather than native `<select>` elements, and comment boxes may support `@mentions` or `#tags`. Assumptions break recordings silently.
### How
Navigate to each page in the flow and dump its interactive elements:
```javascript
// Run this for each page in the flow BEFORE writing the demo script
const fields = await page.evaluate(() => {
const els = [];
document.querySelectorAll('input, select, textarea, button, [contenteditable]').forEach(el => {
if (el.offsetParent !== null) {
els.push({
tag: el.tagName,
type: el.type || '',
name: el.name || '',
placeholder: el.placeholder || '',
text: el.textContent?.trim().substring(0, 40) || '',
contentEditable: el.contentEditable === 'true',
role: el.getAttribute('role') || '',
});
}
});
return els;
});
console.log(JSON.stringify(fields, null, 2));
```
### What to look for
- **Form fields**: Are they `<select>`, `<input>`, custom dropdowns, or comboboxes?
- **Select options**: Dump option values AND text (sketched after this list). Placeholders often have `value="0"` or `value=""`, which can look like a real selection. Use `Array.from(el.options).map(o => ({ value: o.value, text: o.text }))`. Skip options whose text includes "Select" or whose value is `"0"`.
- **Rich text**: Does the comment box support `@mentions`, `#tags`, markdown, or emoji? Check placeholder text.
- **Required fields**: Which fields block form submission? Check `required`, `*` in labels, and try submitting empty to see validation errors.
- **Dynamic content**: Do fields appear after other fields are filled?
- **Button labels**: Exact text such as `"Submit"`, `"Submit Request"`, or `"Send"`.
- **Table column headers**: For table-driven modals, map each `input[type="number"]` to its column header instead of assuming all numeric inputs mean the same thing.
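A minimal sketch of that select-option dump; the placeholder filters are heuristics to adjust per app, not fixed rules:
```javascript
// Dump real options for every visible <select>, skipping likely placeholders.
const selectOptions = await page.evaluate(() => {
  return Array.from(document.querySelectorAll('select'))
    .filter(el => el.offsetParent !== null)
    .map(el => ({
      name: el.name || '',
      options: Array.from(el.options)
        .map(o => ({ value: o.value, text: o.text }))
        .filter(o => o.value !== '' && o.value !== '0' && !o.text.includes('Select')),
    }));
});
console.log(JSON.stringify(selectOptions, null, 2));
```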
### Output
A field map for each page, used to write correct selectors in the script. Example:
```text
/purchase-requests/new:
- Budget Code: <select> (first select on page, 4 options)
- Desired Delivery: <input type="date">
- Context: <textarea> (not input)
- BOM table: inline-editable cells with span.cursor-pointer -> input pattern
- Submit: <button> text="Submit"
/purchase-requests/N (detail):
- Comment: <input placeholder="Type a message..."> supports @user and #PR tags
- Send: <button> text="Send" (disabled until input has content)
```
---
## Phase 2: Rehearse
Run through all steps without recording. Verify every selector resolves.
### Why
Silent selector failures are the main reason demo recordings break. Rehearsal catches them before you waste a recording.
### How
Use `ensureVisible`, a wrapper that logs and fails loudly:
```javascript
async function ensureVisible(page, locator, label) {
const el = typeof locator === 'string' ? page.locator(locator).first() : locator;
const visible = await el.isVisible().catch(() => false);
if (!visible) {
const msg = `REHEARSAL FAIL: "${label}" not found - selector: ${typeof locator === 'string' ? locator : '(locator object)'}`;
console.error(msg);
const found = await page.evaluate(() => {
return Array.from(document.querySelectorAll('button, input, select, textarea, a'))
.filter(el => el.offsetParent !== null)
.map(el => `${el.tagName}[${el.type || ''}] "${el.textContent?.trim().substring(0, 30)}"`)
.join('\n ');
});
console.error(' Visible elements:\n ' + found);
return false;
}
console.log(`REHEARSAL OK: "${label}"`);
return true;
}
```
### Rehearsal script structure
```javascript
const steps = [
{ label: 'Login email field', selector: '#email' },
{ label: 'Login submit', selector: 'button[type="submit"]' },
{ label: 'New Request button', selector: 'button:has-text("New Request")' },
{ label: 'Budget Code select', selector: 'select' },
{ label: 'Delivery date', selector: 'input[type="date"]:visible' },
{ label: 'Description field', selector: 'textarea:visible' },
{ label: 'Add Item button', selector: 'button:has-text("Add Item")' },
{ label: 'Submit button', selector: 'button:has-text("Submit")' },
];
let allOk = true;
for (const step of steps) {
if (!await ensureVisible(page, step.selector, step.label)) {
allOk = false;
}
}
if (!allOk) {
console.error('REHEARSAL FAILED - fix selectors before recording');
process.exit(1);
}
console.log('REHEARSAL PASSED - all selectors verified');
```
### When rehearsal fails
1. Read the visible-element dump.
2. Find the correct selector.
3. Update the script.
4. Re-run rehearsal.
5. Only proceed when every selector passes.
---
## Phase 3: Record
Only after discovery and rehearsal pass should you create the recording.
### Recording Principles
#### 1. Storytelling Flow
Plan the video as a story. Follow user-specified order, or use this default:
- **Entry**: Login or navigate to the starting point
- **Context**: Pan the surroundings so viewers orient themselves
- **Action**: Perform the main workflow steps
- **Variation**: Show a secondary feature such as settings, theme, or localization
- **Result**: Show the outcome, confirmation, or new state
#### 2. Pacing
- After login: `4s`
- After navigation: `3s`
- After clicking a button: `2s`
- Between major steps: `1.5-2s`
- After the final action: `3s`
- Typing delay: `25-40ms` per character
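One way to keep those numbers consistent across a script is a small constants object; the values mirror the list above and should be tuned per app:
```javascript
// Pacing constants in milliseconds; tweak per app rather than hard-coding waits inline.
const PACING = {
  afterLogin: 4000,
  afterNavigation: 3000,
  afterClick: 2000,
  betweenSteps: 1800,
  afterFinalAction: 3000,
  typingDelayMs: 35, // per character
};

await page.waitForTimeout(PACING.afterNavigation);
```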
#### 3. Cursor Overlay
Inject an SVG arrow cursor that follows mouse movements:
```javascript
async function injectCursor(page) {
await page.evaluate(() => {
if (document.getElementById('demo-cursor')) return;
const cursor = document.createElement('div');
cursor.id = 'demo-cursor';
cursor.innerHTML = `<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5 3L19 12L12 13L9 20L5 3Z" fill="white" stroke="black" stroke-width="1.5" stroke-linejoin="round"/>
</svg>`;
cursor.style.cssText = `
position: fixed; z-index: 999999; pointer-events: none;
width: 24px; height: 24px;
transition: left 0.1s, top 0.1s;
filter: drop-shadow(1px 1px 2px rgba(0,0,0,0.3));
`;
cursor.style.left = '0px';
cursor.style.top = '0px';
document.body.appendChild(cursor);
document.addEventListener('mousemove', (e) => {
cursor.style.left = e.clientX + 'px';
cursor.style.top = e.clientY + 'px';
});
});
}
```
Call `injectCursor(page)` after every page navigation because the overlay is destroyed on each navigation.
#### 4. Mouse Movement
Never teleport the cursor. Move to the target before clicking:
```javascript
async function moveAndClick(page, locator, label, opts = {}) {
const { postClickDelay = 800, ...clickOpts } = opts;
const el = typeof locator === 'string' ? page.locator(locator).first() : locator;
const visible = await el.isVisible().catch(() => false);
if (!visible) {
console.error(`WARNING: moveAndClick skipped - "${label}" not visible`);
return false;
}
try {
await el.scrollIntoViewIfNeeded();
await page.waitForTimeout(300);
const box = await el.boundingBox();
if (box) {
await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2, { steps: 10 });
await page.waitForTimeout(400);
}
await el.click(clickOpts);
} catch (e) {
console.error(`WARNING: moveAndClick failed on "${label}": ${e.message}`);
return false;
}
await page.waitForTimeout(postClickDelay);
return true;
}
```
Every call should include a descriptive `label` for debugging.
#### 5. Typing
Type visibly, not instant-fill:
```javascript
async function typeSlowly(page, locator, text, label, charDelay = 35) {
const el = typeof locator === 'string' ? page.locator(locator).first() : locator;
const visible = await el.isVisible().catch(() => false);
if (!visible) {
console.error(`WARNING: typeSlowly skipped - "${label}" not visible`);
return false;
}
await moveAndClick(page, el, label);
await el.fill('');
await el.pressSequentially(text, { delay: charDelay });
await page.waitForTimeout(500);
return true;
}
```
#### 6. Scrolling
Use smooth scroll instead of jumps:
```javascript
await page.evaluate(() => window.scrollTo({ top: 400, behavior: 'smooth' }));
await page.waitForTimeout(1500);
```
#### 7. Dashboard Panning
When showing a dashboard or overview page, move the cursor across key elements:
```javascript
async function panElements(page, selector, maxCount = 6) {
const elements = await page.locator(selector).all();
for (let i = 0; i < Math.min(elements.length, maxCount); i++) {
try {
const box = await elements[i].boundingBox();
if (box && box.y < 700) {
await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2, { steps: 8 });
await page.waitForTimeout(600);
}
} catch (e) {
console.warn(`WARNING: panElements skipped element ${i} (selector: "${selector}"): ${e.message}`);
}
}
}
```
#### 8. Subtitles
Inject a subtitle bar at the bottom of the viewport:
```javascript
async function injectSubtitleBar(page) {
await page.evaluate(() => {
if (document.getElementById('demo-subtitle')) return;
const bar = document.createElement('div');
bar.id = 'demo-subtitle';
bar.style.cssText = `
position: fixed; bottom: 0; left: 0; right: 0; z-index: 999998;
text-align: center; padding: 12px 24px;
background: rgba(0, 0, 0, 0.75);
color: white; font-family: -apple-system, "Segoe UI", sans-serif;
font-size: 16px; font-weight: 500; letter-spacing: 0.3px;
transition: opacity 0.3s;
pointer-events: none;
`;
bar.textContent = '';
bar.style.opacity = '0';
document.body.appendChild(bar);
});
}
async function showSubtitle(page, text) {
await page.evaluate((t) => {
const bar = document.getElementById('demo-subtitle');
if (!bar) return;
if (t) {
bar.textContent = t;
bar.style.opacity = '1';
} else {
bar.style.opacity = '0';
}
}, text);
if (text) await page.waitForTimeout(800);
}
```
Call `injectSubtitleBar(page)` alongside `injectCursor(page)` after every navigation.
Usage pattern:
```javascript
await showSubtitle(page, 'Step 1 - Logging in');
await showSubtitle(page, 'Step 2 - Dashboard overview');
await showSubtitle(page, '');
```
Guidelines:
- Keep subtitle text short, ideally under 60 characters.
- Use `Step N - Action` format for consistency.
- Clear the subtitle during long pauses where the UI can speak for itself.
## Script Template
```javascript
'use strict';
const { chromium } = require('playwright');
const path = require('path');
const fs = require('fs');
const BASE_URL = process.env.QA_BASE_URL || 'http://localhost:3000';
const VIDEO_DIR = path.join(__dirname, 'screenshots');
const OUTPUT_NAME = 'demo-FEATURE.webm';
const REHEARSAL = process.argv.includes('--rehearse');
// Paste injectCursor, injectSubtitleBar, showSubtitle, moveAndClick,
// typeSlowly, ensureVisible, and panElements here.
(async () => {
const browser = await chromium.launch({ headless: true });
if (REHEARSAL) {
const context = await browser.newContext({ viewport: { width: 1280, height: 720 } });
const page = await context.newPage();
// Navigate through the flow and run ensureVisible for each selector.
await browser.close();
return;
}
const context = await browser.newContext({
recordVideo: { dir: VIDEO_DIR, size: { width: 1280, height: 720 } },
viewport: { width: 1280, height: 720 }
});
const page = await context.newPage();
try {
await injectCursor(page);
await injectSubtitleBar(page);
await showSubtitle(page, 'Step 1 - Logging in');
// login actions
await page.goto(`${BASE_URL}/dashboard`);
await injectCursor(page);
await injectSubtitleBar(page);
await showSubtitle(page, 'Step 2 - Dashboard overview');
// pan dashboard
await showSubtitle(page, 'Step 3 - Main workflow');
// action sequence
await showSubtitle(page, 'Step 4 - Result');
// final reveal
await showSubtitle(page, '');
} catch (err) {
console.error('DEMO ERROR:', err.message);
} finally {
await context.close();
const video = page.video();
if (video) {
const src = await video.path();
const dest = path.join(VIDEO_DIR, OUTPUT_NAME);
try {
fs.copyFileSync(src, dest);
console.log('Video saved:', dest);
} catch (e) {
console.error('ERROR: Failed to copy video:', e.message);
console.error(' Source:', src);
console.error(' Destination:', dest);
}
}
await browser.close();
}
})();
```
Usage:
```bash
# Phase 2: Rehearse
node demo-script.cjs --rehearse
# Phase 3: Record
node demo-script.cjs
```
## Checklist Before Recording
- [ ] Discovery phase completed
- [ ] Rehearsal passes with all selectors OK
- [ ] Headless mode enabled
- [ ] Resolution set to `1280x720`
- [ ] Cursor and subtitle overlays re-injected after every navigation
- [ ] `showSubtitle(page, 'Step N - ...')` used at major transitions
- [ ] `moveAndClick` used for all clicks with descriptive labels
- [ ] `typeSlowly` used for visible input
- [ ] No silent catches; helpers log warnings
- [ ] Smooth scrolling used for content reveal
- [ ] Key pauses are visible to a human viewer
- [ ] Flow matches the requested story order
- [ ] Script reflects the actual UI discovered in phase 1
## Common Pitfalls
1. Cursor disappears after navigation - re-inject it.
2. Video is too fast - add pauses.
3. Cursor is a dot instead of an arrow - use the SVG overlay.
4. Cursor teleports - move before clicking.
5. Select dropdowns look wrong - move the cursor to the select first, then pick the option.
6. Modals feel abrupt - add a read pause before confirming.
7. Video file path is random - copy it to a stable output name.
8. Selector failures are swallowed - never use silent catch blocks.
9. Field types were assumed - discover them first.
10. Features were assumed - inspect the actual UI before scripting.
11. Placeholder select values look real - watch for `"0"` and `"Select..."`.
12. Popups create separate videos - capture popup pages explicitly (see the sketch below) and merge later if needed.
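A hedged sketch of pitfall 12, capturing the popup page explicitly so its separate video is intentional (the trigger selector is illustrative):
```javascript
// The popup records to its own .webm inside the same context; re-inject the
// overlays on it and merge the files afterwards if the story needs one video.
const [popup] = await Promise.all([
  page.waitForEvent('popup'),
  moveAndClick(page, 'a[target="_blank"]', 'Open external report'),
]);
await popup.waitForLoadState();
await injectCursor(popup);
await injectSubtitleBar(popup);
```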

View File

@@ -0,0 +1,125 @@
---
name: workspace-surface-audit
description: Audit the active repo, MCP servers, plugins, connectors, env surfaces, and harness setup, then recommend the highest-value ECC-native skills, hooks, agents, and operator workflows. Use when the user wants help setting up Claude Code or understanding what capabilities are actually available in their environment.
origin: ECC
---
# Workspace Surface Audit
Read-only audit skill for answering the question "what can this workspace and machine actually do right now, and what should we add or enable next?"
This is the ECC-native answer to setup-audit plugins. It does not modify files unless the user explicitly asks for follow-up implementation.
## When to Use
- User says "set up Claude Code", "recommend automations", "what plugins or MCPs should I use?", or "what am I missing?"
- Auditing a machine or repo before installing more skills, hooks, or connectors
- Comparing official marketplace plugins against ECC-native coverage
- Reviewing `.env`, `.mcp.json`, plugin settings, or connected-app surfaces to find missing workflow layers
- Deciding whether a capability should be a skill, hook, agent, MCP, or external connector
## Non-Negotiable Rules
- Never print secret values. Surface only provider names, capability names, file paths, and whether a key or config exists.
- Prefer ECC-native workflows over generic "install another plugin" advice when ECC can reasonably own the surface.
- Treat external plugins as benchmarks and inspiration, not authoritative product boundaries.
- Separate three things clearly:
- already available now
- available but not wrapped well in ECC
- not available and would require a new integration
## Audit Inputs
Inspect only the files and settings needed to answer the question well:
1. Repo surface
- `package.json`, lockfiles, language markers, framework config, `README.md`
- `.mcp.json`, `.lsp.json`, `.claude/settings*.json`, `.codex/*`
- `AGENTS.md`, `CLAUDE.md`, install manifests, hook configs
2. Environment surface
- `.env*` files in the active repo and obvious adjacent ECC workspaces
- Surface only key names such as `STRIPE_API_KEY`, `TWILIO_AUTH_TOKEN`, `FAL_KEY` (see the sketch after this list)
3. Connected tool surface
- Installed plugins, enabled connectors, MCP servers, LSPs, and app integrations
4. ECC surface
- Existing skills, commands, hooks, agents, and install modules that already cover the need
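A minimal sketch of the key-name-only rule for `.env*` files; the filename filter and the key regex are assumptions, not a fixed spec:
```javascript
// Surface env key names only - the value side of each line is never read out.
const fs = require('fs');
const path = require('path');

function listEnvKeyNames(repoRoot) {
  const keys = new Set();
  for (const file of fs.readdirSync(repoRoot)) {
    if (!file.startsWith('.env')) continue;
    const content = fs.readFileSync(path.join(repoRoot, file), 'utf8');
    for (const line of content.split('\n')) {
      const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=/);
      if (match) keys.add(match[1]);
    }
  }
  return Array.from(keys).sort();
}

console.log(listEnvKeyNames(process.cwd()));
```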
## Audit Process
### Phase 1: Inventory What Exists
Produce a compact inventory:
- active harness targets
- installed plugins and connected apps
- configured MCP servers
- configured LSP servers
- env-backed services implied by key names
- existing ECC skills already relevant to the workspace
If a surface exists only as a primitive, call that out. Example:
- "Stripe is available via connected app, but ECC lacks a billing-operator skill"
- "Google Drive is connected, but there is no ECC-native Google Workspace operator workflow"
### Phase 2: Benchmark Against Official and Installed Surfaces
Compare the workspace against:
- official Claude plugins that overlap with setup, review, docs, design, or workflow quality
- locally installed plugins in Claude or Codex
- the user's currently connected app surfaces
Do not just list names. For each comparison, answer:
1. what they actually do
2. whether ECC already has parity
3. whether ECC only has primitives
4. whether ECC is missing the workflow entirely
### Phase 3: Turn Gaps Into ECC Decisions
For every real gap, recommend the correct ECC-native shape:
| Gap Type | Preferred ECC Shape |
|----------|---------------------|
| Repeatable operator workflow | Skill |
| Automatic enforcement or side-effect | Hook |
| Specialized delegated role | Agent |
| External tool bridge | MCP server or connector |
| Install/bootstrap guidance | Setup or audit skill |
Default to user-facing skills that orchestrate existing tools when the need is operational rather than infrastructural.
## Output Format
Return five sections in this order:
1. **Current surface**
- what is already usable right now
2. **Parity**
- where ECC already matches or exceeds the benchmark
3. **Primitive-only gaps**
- tools exist, but ECC lacks a clean operator skill
4. **Missing integrations**
- capability not available yet
5. **Top 3-5 next moves**
- concrete ECC-native additions, ordered by impact
## Recommendation Rules
- Recommend at most 1-2 highest-value ideas per category.
- Favor skills with obvious user intent and business value:
- setup audit
- billing/customer ops
- issue/program ops
- Google Workspace ops
- deployment/ops control
- If a connector is company-specific, recommend it only when it is genuinely available or clearly useful to the user's workflow.
- If ECC already has a strong primitive, propose a wrapper skill instead of inventing a brand-new subsystem.
## Good Outcomes
- The user can immediately see what is connected, what is missing, and what ECC should own next.
- Recommendations are specific enough to implement in the repo without another discovery pass.
- The final answer is organized around workflows, not API brands.

View File

@@ -156,12 +156,19 @@ function runCatalogValidator(overrides = {}) {
const validatorPath = path.join(validatorsDir, 'catalog.js'); const validatorPath = path.join(validatorsDir, 'catalog.js');
let source = fs.readFileSync(validatorPath, 'utf8'); let source = fs.readFileSync(validatorPath, 'utf8');
source = stripShebang(source); source = stripShebang(source);
source = `process.argv.push('--text');\n${source}`; const argv = Array.isArray(overrides.argv) && overrides.argv.length > 0
? overrides.argv
: ['--text'];
const argvPreamble = argv.map(arg => `process.argv.push(${JSON.stringify(arg)});`).join('\n');
source = `${argvPreamble}\n${source}`;
const resolvedOverrides = { const resolvedOverrides = {
ROOT: repoRoot, ROOT: repoRoot,
README_PATH: path.join(repoRoot, 'README.md'), README_PATH: path.join(repoRoot, 'README.md'),
AGENTS_PATH: path.join(repoRoot, 'AGENTS.md'), AGENTS_PATH: path.join(repoRoot, 'AGENTS.md'),
README_ZH_CN_PATH: path.join(repoRoot, 'README.zh-CN.md'),
DOCS_ZH_CN_README_PATH: path.join(repoRoot, 'docs', 'zh-CN', 'README.md'),
DOCS_ZH_CN_AGENTS_PATH: path.join(repoRoot, 'docs', 'zh-CN', 'AGENTS.md'),
...overrides, ...overrides,
}; };
@@ -176,29 +183,50 @@ function runCatalogValidator(overrides = {}) {
function writeCatalogFixture(testDir, options = {}) { function writeCatalogFixture(testDir, options = {}) {
const { const {
readmeCounts = { agents: 1, skills: 1, commands: 1 }, readmeCounts = { agents: 1, skills: 1, commands: 1 },
readmeTableCounts = readmeCounts,
readmeParityCounts = readmeCounts,
readmeUnrelatedSkillsCount = 16,
summaryCounts = { agents: 1, skills: 1, commands: 1 }, summaryCounts = { agents: 1, skills: 1, commands: 1 },
structureLines = [ structureLines = [
'agents/ — 1 specialized subagents', 'agents/ — 1 specialized subagents',
'skills/ — 1 workflow skills and domain knowledge', 'skills/ — 1 workflow skills and domain knowledge',
'commands/ — 1 slash commands', 'commands/ — 1 slash commands',
], ],
zhRootReadmeCounts = { agents: 1, skills: 1, commands: 1 },
zhDocsReadmeCounts = { agents: 1, skills: 1, commands: 1 },
zhDocsTableCounts = zhDocsReadmeCounts,
zhDocsParityCounts = zhDocsReadmeCounts,
zhDocsUnrelatedSkillsCount = 16,
zhAgentsSummaryCounts = { agents: 1, skills: 1, commands: 1 },
zhAgentsStructureLines = [
'agents/ — 1 个专业子代理',
'skills/ — 1 个工作流技能和领域知识',
'commands/ — 1 个斜杠命令',
],
} = options; } = options;
const readmePath = path.join(testDir, 'README.md'); const readmePath = path.join(testDir, 'README.md');
const agentsPath = path.join(testDir, 'AGENTS.md'); const agentsPath = path.join(testDir, 'AGENTS.md');
const zhRootReadmePath = path.join(testDir, 'README.zh-CN.md');
const zhDocsReadmePath = path.join(testDir, 'docs', 'zh-CN', 'README.md');
const zhAgentsPath = path.join(testDir, 'docs', 'zh-CN', 'AGENTS.md');
fs.mkdirSync(path.join(testDir, 'agents'), { recursive: true }); fs.mkdirSync(path.join(testDir, 'agents'), { recursive: true });
fs.mkdirSync(path.join(testDir, 'commands'), { recursive: true }); fs.mkdirSync(path.join(testDir, 'commands'), { recursive: true });
fs.mkdirSync(path.join(testDir, 'skills', 'demo-skill'), { recursive: true }); fs.mkdirSync(path.join(testDir, 'skills', 'demo-skill'), { recursive: true });
fs.mkdirSync(path.join(testDir, 'docs', 'zh-CN'), { recursive: true });
fs.writeFileSync(path.join(testDir, 'agents', 'planner.md'), '---\nmodel: sonnet\ntools: Read\n---\n# Planner'); fs.writeFileSync(path.join(testDir, 'agents', 'planner.md'), '---\nmodel: sonnet\ntools: Read\n---\n# Planner');
fs.writeFileSync(path.join(testDir, 'commands', 'plan.md'), '---\ndescription: Plan\n---\n# Plan'); fs.writeFileSync(path.join(testDir, 'commands', 'plan.md'), '---\ndescription: Plan\n---\n# Plan');
fs.writeFileSync(path.join(testDir, 'skills', 'demo-skill', 'SKILL.md'), '---\nname: demo-skill\ndescription: Demo skill\norigin: ECC\n---\n# Demo Skill'); fs.writeFileSync(path.join(testDir, 'skills', 'demo-skill', 'SKILL.md'), '---\nname: demo-skill\ndescription: Demo skill\norigin: ECC\n---\n# Demo Skill');
fs.writeFileSync(readmePath, `Access to ${readmeCounts.agents} agents, ${readmeCounts.skills} skills, and ${readmeCounts.commands} commands.\n| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |\n|---------|------------|------------|-----------|----------|\n| Agents | PASS: ${readmeCounts.agents} agents | Shared | Shared | 1 |\n| Commands | PASS: ${readmeCounts.commands} commands | Shared | Shared | 1 |\n| Skills | PASS: ${readmeCounts.skills} skills | Shared | Shared | 1 |\n`); fs.writeFileSync(readmePath, `Access to ${readmeCounts.agents} agents, ${readmeCounts.skills} skills, and ${readmeCounts.commands} commands.\n| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |\n|---------|------------|------------|-----------|----------|\n| Agents | PASS: ${readmeTableCounts.agents} agents | Shared | Shared | 1 |\n| Commands | PASS: ${readmeTableCounts.commands} commands | Shared | Shared | 1 |\n| Skills | PASS: ${readmeTableCounts.skills} skills | Shared | Shared | 1 |\n\n| Feature | Count | Format |\n|-----------|-------|---------|\n| Skills | ${readmeUnrelatedSkillsCount} | .agents/skills/ |\n\n## Cross-Tool Feature Parity\n\n| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |\n|---------|------------|------------|-----------|----------|\n| **Agents** | ${readmeParityCounts.agents} | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |\n| **Commands** | ${readmeParityCounts.commands} | Shared | Instruction-based | 31 |\n| **Skills** | ${readmeParityCounts.skills} | Shared | 10 (native format) | 37 |\n`);
fs.writeFileSync(agentsPath, `This is a **production-ready AI coding plugin** providing ${summaryCounts.agents} specialized agents, ${summaryCounts.skills} skills, ${summaryCounts.commands} commands, and automated hook workflows for software development.\n\n\`\`\`\n${structureLines.join('\n')}\n\`\`\`\n`); fs.writeFileSync(agentsPath, `This is a **production-ready AI coding plugin** providing ${summaryCounts.agents} specialized agents, ${summaryCounts.skills} skills, ${summaryCounts.commands} commands, and automated hook workflows for software development.\n\n\`\`\`\n${structureLines.join('\n')}\n\`\`\`\n`);
fs.writeFileSync(zhRootReadmePath, `**完成!** 你现在可以使用 ${zhRootReadmeCounts.agents} 个代理、${zhRootReadmeCounts.skills} 个技能和 ${zhRootReadmeCounts.commands} 个命令。\n`);
fs.writeFileSync(zhDocsReadmePath, `**搞定!** 你现在可以使用 ${zhDocsReadmeCounts.agents} 个智能体、${zhDocsReadmeCounts.skills} 项技能和 ${zhDocsReadmeCounts.commands} 个命令了。\n| 功能特性 | Claude Code | OpenCode | 状态 |\n|---------|-------------|----------|--------|\n| 智能体 | \u2705 ${zhDocsTableCounts.agents} 个 | \u2705 12 个 | **Claude Code 领先** |\n| 命令 | \u2705 ${zhDocsTableCounts.commands} 个 | \u2705 31 个 | **Claude Code 领先** |\n| 技能 | \u2705 ${zhDocsTableCounts.skills} 项 | \u2705 37 项 | **Claude Code 领先** |\n\n| 功能特性 | 数量 | 格式 |\n|-----------|-------|---------|\n| 技能 | ${zhDocsUnrelatedSkillsCount} | .agents/skills/ |\n\n## 跨工具功能对等\n\n| 功能特性 | Claude Code | Cursor IDE | Codex CLI | OpenCode |\n|---------|------------|------------|-----------|----------|\n| **智能体** | ${zhDocsParityCounts.agents} | 共享 (AGENTS.md) | 共享 (AGENTS.md) | 12 |\n| **命令** | ${zhDocsParityCounts.commands} | 共享 | 基于指令 | 31 |\n| **技能** | ${zhDocsParityCounts.skills} | 共享 | 10 (原生格式) | 37 |\n`);
fs.writeFileSync(zhAgentsPath, `这是一个**生产就绪的 AI 编码插件**,提供 ${zhAgentsSummaryCounts.agents} 个专业代理、${zhAgentsSummaryCounts.skills} 项技能、${zhAgentsSummaryCounts.commands} 条命令以及自动化钩子工作流,用于软件开发。\n\n\`\`\`\n${zhAgentsStructureLines.join('\n')}\n\`\`\`\n`);
return { readmePath, agentsPath }; return { readmePath, agentsPath, zhRootReadmePath, zhDocsReadmePath, zhAgentsPath };
} }
function runTests() { function runTests() {
@@ -341,20 +369,41 @@ function runTests() {
if (test('fails when README and AGENTS catalog counts drift', () => { if (test('fails when README and AGENTS catalog counts drift', () => {
const testDir = createTestDir(); const testDir = createTestDir();
const { readmePath, agentsPath } = writeCatalogFixture(testDir, { const {
readmePath,
agentsPath,
zhRootReadmePath,
zhDocsReadmePath,
zhAgentsPath,
} = writeCatalogFixture(testDir, {
readmeCounts: { agents: 99, skills: 99, commands: 99 }, readmeCounts: { agents: 99, skills: 99, commands: 99 },
readmeTableCounts: { agents: 99, skills: 99, commands: 99 },
readmeParityCounts: { agents: 99, skills: 99, commands: 99 },
summaryCounts: { agents: 99, skills: 99, commands: 99 }, summaryCounts: { agents: 99, skills: 99, commands: 99 },
structureLines: [ structureLines: [
'agents/ — 99 specialized subagents', 'agents/ — 99 specialized subagents',
'skills/ — 99 workflow skills and domain knowledge', 'skills/ — 99 workflow skills and domain knowledge',
'commands/ — 99 slash commands', 'commands/ — 99 slash commands',
], ],
zhRootReadmeCounts: { agents: 99, skills: 99, commands: 99 },
zhDocsReadmeCounts: { agents: 99, skills: 99, commands: 99 },
zhDocsTableCounts: { agents: 99, skills: 99, commands: 99 },
zhDocsParityCounts: { agents: 99, skills: 99, commands: 99 },
zhAgentsSummaryCounts: { agents: 99, skills: 99, commands: 99 },
zhAgentsStructureLines: [
'agents/ — 99 个专业子代理',
'skills/ — 99 个工作流技能和领域知识',
'commands/ — 99 个斜杠命令',
],
}); });
const result = runCatalogValidator({ const result = runCatalogValidator({
ROOT: testDir, ROOT: testDir,
README_PATH: readmePath, README_PATH: readmePath,
AGENTS_PATH: agentsPath, AGENTS_PATH: agentsPath,
README_ZH_CN_PATH: zhRootReadmePath,
DOCS_ZH_CN_README_PATH: zhDocsReadmePath,
DOCS_ZH_CN_AGENTS_PATH: zhAgentsPath,
}); });
assert.strictEqual(result.code, 1, 'Should fail when catalog counts drift'); assert.strictEqual(result.code, 1, 'Should fail when catalog counts drift');
@@ -362,20 +411,154 @@ function runTests() {
cleanupTestDir(testDir); cleanupTestDir(testDir);
})) passed++; else failed++; })) passed++; else failed++;
if (test('fails when README parity table counts drift', () => {
const testDir = createTestDir();
const {
readmePath,
agentsPath,
zhRootReadmePath,
zhDocsReadmePath,
zhAgentsPath,
} = writeCatalogFixture(testDir, {
readmeCounts: { agents: 1, skills: 1, commands: 1 },
readmeTableCounts: { agents: 1, skills: 1, commands: 1 },
readmeParityCounts: { agents: 9, skills: 8, commands: 7 },
summaryCounts: { agents: 1, skills: 1, commands: 1 },
});
const result = runCatalogValidator({
ROOT: testDir,
README_PATH: readmePath,
AGENTS_PATH: agentsPath,
README_ZH_CN_PATH: zhRootReadmePath,
DOCS_ZH_CN_README_PATH: zhDocsReadmePath,
DOCS_ZH_CN_AGENTS_PATH: zhAgentsPath,
});
assert.strictEqual(result.code, 1, 'Should fail when README parity table drifts');
assert.ok(
(result.stdout + result.stderr).includes('README.md parity table'),
'Should mention the README parity table mismatch'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('fails when a tracked catalog document is missing', () => {
const testDir = createTestDir();
const {
readmePath,
agentsPath,
zhRootReadmePath,
zhDocsReadmePath,
} = writeCatalogFixture(testDir);
const missingZhAgentsPath = path.join(testDir, 'docs', 'zh-CN', 'AGENTS.md');
fs.rmSync(missingZhAgentsPath);
const result = runCatalogValidator({
ROOT: testDir,
README_PATH: readmePath,
AGENTS_PATH: agentsPath,
README_ZH_CN_PATH: zhRootReadmePath,
DOCS_ZH_CN_README_PATH: zhDocsReadmePath,
DOCS_ZH_CN_AGENTS_PATH: missingZhAgentsPath,
});
assert.strictEqual(result.code, 1, 'Should fail when a tracked doc is missing');
assert.ok(
(result.stdout + result.stderr).includes('Failed to read AGENTS.md'),
'Should mention the missing tracked document'
);
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('syncs tracked catalog docs in write mode without rewriting unrelated tables', () => {
const testDir = createTestDir();
const {
readmePath,
agentsPath,
zhRootReadmePath,
zhDocsReadmePath,
zhAgentsPath,
} = writeCatalogFixture(testDir, {
readmeCounts: { agents: 9, skills: 9, commands: 9 },
readmeTableCounts: { agents: 8, skills: 8, commands: 8 },
readmeParityCounts: { agents: 7, skills: 7, commands: 7 },
summaryCounts: { agents: 6, skills: 6, commands: 6 },
zhRootReadmeCounts: { agents: 10, skills: 10, commands: 10 },
zhDocsReadmeCounts: { agents: 11, skills: 11, commands: 11 },
zhDocsTableCounts: { agents: 12, skills: 12, commands: 12 },
zhDocsParityCounts: { agents: 13, skills: 13, commands: 13 },
zhAgentsSummaryCounts: { agents: 14, skills: 14, commands: 14 },
zhAgentsStructureLines: [
'agents/ — 15 个专业子代理',
'skills/ — 16 个工作流技能和领域知识',
'commands/ — 17 个斜杠命令',
],
});
const result = runCatalogValidator({
argv: ['--write', '--text'],
ROOT: testDir,
README_PATH: readmePath,
AGENTS_PATH: agentsPath,
README_ZH_CN_PATH: zhRootReadmePath,
DOCS_ZH_CN_README_PATH: zhDocsReadmePath,
DOCS_ZH_CN_AGENTS_PATH: zhAgentsPath,
});
assert.strictEqual(result.code, 0, `Should sync and pass, got stderr: ${result.stderr}`);
const readme = fs.readFileSync(readmePath, 'utf8');
const agentsDoc = fs.readFileSync(agentsPath, 'utf8');
const zhRootReadme = fs.readFileSync(zhRootReadmePath, 'utf8');
const zhDocsReadme = fs.readFileSync(zhDocsReadmePath, 'utf8');
const zhAgentsDoc = fs.readFileSync(zhAgentsPath, 'utf8');
assert.ok(readme.includes('Access to 1 agents, 1 skills, and 1 legacy command shims'), 'Should sync README quick-start summary');
assert.ok(readme.includes('| Agents | PASS: 1 agents |'), 'Should sync README comparison table');
assert.ok(readme.includes('| Skills | 16 | .agents/skills/ |'), 'Should not rewrite unrelated README tables');
assert.ok(readme.includes('| **Agents** | 1 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |'), 'Should sync README parity table');
assert.ok(agentsDoc.includes('providing 1 specialized agents, 1 skills, 1 commands'), 'Should sync AGENTS summary');
assert.ok(agentsDoc.includes('skills/ — 1 workflow skills and domain knowledge'), 'Should sync AGENTS structure');
assert.ok(zhRootReadme.includes('你现在可以使用 1 个代理、1 个技能和 1 个命令'), 'Should sync README.zh-CN quick-start summary');
assert.ok(zhDocsReadme.includes('你现在可以使用 1 个智能体、1 项技能和 1 个命令了'), 'Should sync docs/zh-CN/README quick-start summary');
assert.ok(zhDocsReadme.includes('| 智能体 | \u2705 1 个 |'), 'Should sync docs/zh-CN/README comparison table');
assert.ok(zhDocsReadme.includes('| 技能 | 16 | .agents/skills/ |'), 'Should not rewrite unrelated docs/zh-CN/README tables');
assert.ok(zhDocsReadme.includes('| **智能体** | 1 | 共享 (AGENTS.md) | 共享 (AGENTS.md) | 12 |'), 'Should sync docs/zh-CN/README parity table');
assert.ok(zhAgentsDoc.includes('提供 1 个专业代理、1 项技能、1 条命令'), 'Should sync docs/zh-CN/AGENTS summary');
assert.ok(zhAgentsDoc.includes('commands/ — 1 个斜杠命令'), 'Should sync docs/zh-CN/AGENTS structure');
cleanupTestDir(testDir);
})) passed++; else failed++;
if (test('accepts AGENTS project structure entries with varied spacing and dash styles', () => { if (test('accepts AGENTS project structure entries with varied spacing and dash styles', () => {
const testDir = createTestDir(); const testDir = createTestDir();
const { readmePath, agentsPath } = writeCatalogFixture(testDir, { const {
readmePath,
agentsPath,
zhRootReadmePath,
zhDocsReadmePath,
zhAgentsPath,
} = writeCatalogFixture(testDir, {
structureLines: [ structureLines: [
' agents/ - 1 specialized subagents ', ' agents/ - 1 specialized subagents ',
'\tskills/\t\t1+ workflow skills and domain knowledge\t', '\tskills/\t\t1+ workflow skills and domain knowledge\t',
' commands/ — 1 slash commands ', ' commands/ — 1 slash commands ',
], ],
zhAgentsStructureLines: [
' agents/ - 1 个专业子代理 ',
'\tskills/\t\t1+ 个工作流技能和领域知识\t',
' commands/ — 1 个斜杠命令 ',
],
}); });
const result = runCatalogValidator({ const result = runCatalogValidator({
ROOT: testDir, ROOT: testDir,
README_PATH: readmePath, README_PATH: readmePath,
AGENTS_PATH: agentsPath, AGENTS_PATH: agentsPath,
README_ZH_CN_PATH: zhRootReadmePath,
DOCS_ZH_CN_README_PATH: zhDocsReadmePath,
DOCS_ZH_CN_AGENTS_PATH: zhAgentsPath,
}); });
assert.strictEqual(result.code, 0, `Should accept formatting variations, got stderr: ${result.stderr}`); assert.strictEqual(result.code, 0, `Should accept formatting variations, got stderr: ${result.stderr}`);

View File

@@ -25,7 +25,7 @@ async function runTests() {
try { try {
store = await import(pathToFileURL(storePath).href) store = await import(pathToFileURL(storePath).href)
} catch (err) { } catch (err) {
console.log('\n Skipping: build .opencode first (cd .opencode && npm run build)\n') console.log('\n[warn] Skipping: build .opencode first (cd .opencode && npm run build)\n')
process.exit(0) process.exit(0)
} }

View File

@@ -253,46 +253,142 @@ function runTests() {
); );
})) passed++; else failed++; })) passed++; else failed++;
if (test('validates projectRoot and homeDir option types before adapter planning', () => {
assert.throws(
() => resolveInstallPlan({ profileId: 'core', target: 'cursor', projectRoot: 42 }),
/projectRoot must be a non-empty string when provided/
);
assert.throws(
() => resolveInstallPlan({ profileId: 'core', target: 'claude', homeDir: {} }),
/homeDir must be a non-empty string when provided/
);
})) passed++; else failed++;
if (test('skips a requested module when its dependency chain does not support the target', () => { if (test('skips a requested module when its dependency chain does not support the target', () => {
const repoRoot = createTestRepo(); const repoRoot = createTestRepo();
writeJson(path.join(repoRoot, 'manifests', 'install-modules.json'), { try {
version: 1, writeJson(path.join(repoRoot, 'manifests', 'install-modules.json'), {
modules: [ version: 1,
{ modules: [
id: 'parent', {
kind: 'skills', id: 'parent',
description: 'Parent', kind: 'skills',
paths: ['parent'], description: 'Parent',
targets: ['claude'], paths: ['parent'],
dependencies: ['child'], targets: ['claude'],
defaultInstall: false, dependencies: ['child'],
cost: 'light', defaultInstall: false,
stability: 'stable' cost: 'light',
}, stability: 'stable'
{ },
id: 'child', {
kind: 'skills', id: 'child',
description: 'Child', kind: 'skills',
paths: ['child'], description: 'Child',
targets: ['cursor'], paths: ['child'],
dependencies: [], targets: ['cursor'],
defaultInstall: false, dependencies: [],
cost: 'light', defaultInstall: false,
stability: 'stable' cost: 'light',
stability: 'stable'
}
]
});
writeJson(path.join(repoRoot, 'manifests', 'install-profiles.json'), {
version: 1,
profiles: {
core: { description: 'Core', modules: ['parent'] }
} }
] });
});
writeJson(path.join(repoRoot, 'manifests', 'install-profiles.json'), {
version: 1,
profiles: {
core: { description: 'Core', modules: ['parent'] }
}
});
const plan = resolveInstallPlan({ repoRoot, profileId: 'core', target: 'claude' }); const plan = resolveInstallPlan({ repoRoot, profileId: 'core', target: 'claude' });
assert.deepStrictEqual(plan.selectedModuleIds, []); assert.deepStrictEqual(plan.selectedModuleIds, []);
assert.deepStrictEqual(plan.skippedModuleIds, ['parent']); assert.deepStrictEqual(plan.skippedModuleIds, ['parent']);
cleanupTestRepo(repoRoot); } finally {
cleanupTestRepo(repoRoot);
}
})) passed++; else failed++;
if (test('fails fast when install manifest module targets is not an array', () => {
const repoRoot = createTestRepo();
try {
writeJson(path.join(repoRoot, 'manifests', 'install-modules.json'), {
version: 1,
modules: [
{
id: 'parent',
kind: 'skills',
description: 'Parent',
paths: ['parent'],
targets: 'claude',
dependencies: [],
defaultInstall: false,
cost: 'light',
stability: 'stable'
}
]
});
writeJson(path.join(repoRoot, 'manifests', 'install-profiles.json'), {
version: 1,
profiles: {
core: { description: 'Core', modules: ['parent'] }
}
});
assert.throws(
() => resolveInstallPlan({ repoRoot, profileId: 'core', target: 'claude' }),
/Install module parent has invalid targets; expected an array of supported target ids/
);
} finally {
cleanupTestRepo(repoRoot);
}
})) passed++; else failed++;
if (test('keeps antigravity modules selected while filtering unsupported source paths', () => {
const repoRoot = createTestRepo();
try {
writeJson(path.join(repoRoot, 'manifests', 'install-modules.json'), {
version: 1,
modules: [
{
id: 'unsupported-antigravity',
kind: 'skills',
description: 'Unsupported',
paths: ['.cursor', 'skills/example'],
targets: ['antigravity'],
dependencies: [],
defaultInstall: false,
cost: 'light',
stability: 'stable'
}
]
});
writeJson(path.join(repoRoot, 'manifests', 'install-profiles.json'), {
version: 1,
profiles: {
core: { description: 'Core', modules: ['unsupported-antigravity'] }
}
});
const plan = resolveInstallPlan({
repoRoot,
profileId: 'core',
target: 'antigravity',
projectRoot: '/workspace/app',
});
assert.deepStrictEqual(plan.selectedModuleIds, ['unsupported-antigravity']);
assert.deepStrictEqual(plan.skippedModuleIds, []);
assert.ok(
plan.operations.every(operation => operation.sourceRelativePath !== '.cursor'),
'Unsupported antigravity paths should be filtered from planned operations'
);
assert.ok(
plan.operations.some(operation => operation.sourceRelativePath === 'skills/example'),
'Supported antigravity skill paths should still be planned'
);
} finally {
cleanupTestRepo(repoRoot);
}
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);

View File

@@ -423,6 +423,49 @@ function runTests() {
  }
})) passed++; else failed++;
if (test('reinstall deduplicates legacy hooks without ids against new managed ids', () => {
const homeDir = createTempDir('install-apply-home-');
const projectDir = createTempDir('install-apply-project-');
try {
const firstInstall = run(['--profile', 'core'], { cwd: projectDir, homeDir });
assert.strictEqual(firstInstall.code, 0, firstInstall.stderr);
const settingsPath = path.join(homeDir, '.claude', 'settings.json');
const afterFirstInstall = readJson(settingsPath);
const legacySettings = JSON.parse(JSON.stringify(afterFirstInstall));
for (const entries of Object.values(legacySettings.hooks)) {
if (!Array.isArray(entries)) {
continue;
}
for (const entry of entries) {
delete entry.id;
}
}
fs.writeFileSync(settingsPath, JSON.stringify(legacySettings, null, 2));
const legacyPreToolUseLength = legacySettings.hooks.PreToolUse.length;
const secondInstall = run(['--profile', 'core'], { cwd: projectDir, homeDir });
assert.strictEqual(secondInstall.code, 0, secondInstall.stderr);
const afterSecondInstall = readJson(settingsPath);
assert.strictEqual(
afterSecondInstall.hooks.PreToolUse.length,
legacyPreToolUseLength,
'legacy hook installs should not duplicate when ids are introduced'
);
assert.ok(
afterSecondInstall.hooks.PreToolUse.every(entry => entry && typeof entry === 'object'),
'merged hook entries should remain valid objects'
);
} finally {
cleanup(homeDir);
cleanup(projectDir);
}
})) passed++; else failed++;
if (test('fails when existing settings.json is malformed', () => {
  const homeDir = createTempDir('install-apply-home-');
  const projectDir = createTempDir('install-apply-project-');

View File

@@ -12,24 +12,24 @@ Here's my complete setup after 10 months of daily use: skills, hooks, subagents,
## Skills and Commands

Skills are the primary workflow surface. They act like scoped workflow bundles: reusable prompts, structure, supporting files, and codemaps when you need a particular execution pattern.

After a long session of coding with Opus 4.5, you want to clean out dead code and loose .md files? Run `/refactor-clean`. Need testing? `/tdd`, `/e2e`, `/test-coverage`. Those slash entries are convenient, but the real durable unit is the underlying skill. Skills can also include codemaps - a way for Claude to quickly navigate your codebase without burning context on exploration.

![Terminal showing chained commands](./assets/images/shortform/02-chaining-commands.jpeg)
*Chaining commands together*

ECC still ships a `commands/` layer, but it is best thought of as legacy slash-entry compatibility during migration. The durable logic should live in skills.

- **Skills**: `~/.claude/skills/` - canonical workflow definitions
- **Commands**: `~/.claude/commands/` - legacy slash-entry shims when you still need them
```bash
# Example skill structure
~/.claude/skills/
  pmx-guidelines.md     # Project-specific patterns
  coding-standards.md   # Language best practices
  tdd-workflow/         # Multi-file skill with SKILL.md
  security-review/      # Checklist-based skill
```
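For the multi-file case, the layout is where people trip up. Here is a minimal sketch of what a skill folder might contain; only SKILL.md is the assumed entry point, and the other file names are illustrative, not required:

```bash
# Hypothetical multi-file skill layout; only SKILL.md is assumed, other names are examples
~/.claude/skills/tdd-workflow/
  SKILL.md        # Entry point: when to activate, core rules, workflow steps
  checklist.md    # Supporting reference the skill pulls in on demand
  codemap.md      # Optional codemap so Claude navigates instead of exploring blind
```

Keep the entry point short and push long reference material into the supporting files so the skill loads cheap and expands only when needed.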
@@ -149,7 +149,7 @@ Your 200k context window before compacting might only be 70k with too many tools
```bash
# Check enabled MCPs
/mcp

# Disable unused ones in ~/.claude/settings.json or in the current repo's .mcp.json
```
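If you prefer trimming from the CLI instead of editing config by hand, something like this works on recent builds; treat the exact subcommands as an assumption and confirm with `claude mcp --help` on your version:

```bash
# Assumed CLI flow; verify subcommand names against `claude mcp --help` on your install
claude mcp list                # see which servers are configured and at what scope
claude mcp remove playwright   # drop a server you rarely use ("playwright" is an example name)
# Restart the session afterward so the trimmed tool list takes effect
```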
---