Compare commits

..

38 Commits

Author SHA1 Message Date
ecc-tools[bot]
df95c67eee feat: add everything-claude-code-conventions ECC bundle (.claude/commands/add-ecc-bundle-component.md) 2026-04-03 12:33:38 +00:00
ecc-tools[bot]
3b4905c60e feat: add everything-claude-code-conventions ECC bundle (.claude/commands/feature-development.md) 2026-04-03 12:33:36 +00:00
ecc-tools[bot]
7f11e1a626 feat: add everything-claude-code-conventions ECC bundle (.codex/agents/docs-researcher.toml) 2026-04-03 12:33:35 +00:00
ecc-tools[bot]
c6e999f162 feat: add everything-claude-code-conventions ECC bundle (.codex/agents/reviewer.toml) 2026-04-03 12:33:34 +00:00
ecc-tools[bot]
93306a48e4 feat: add everything-claude-code-conventions ECC bundle (.codex/agents/explorer.toml) 2026-04-03 12:33:33 +00:00
ecc-tools[bot]
dedfe6e94e feat: add everything-claude-code-conventions ECC bundle (.claude/identity.json) 2026-04-03 12:33:32 +00:00
ecc-tools[bot]
444789468d feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/agents/openai.yaml) 2026-04-03 12:33:32 +00:00
ecc-tools[bot]
747a9499e0 feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/SKILL.md) 2026-04-03 12:33:31 +00:00
ecc-tools[bot]
c0f57e0f15 feat: add everything-claude-code-conventions ECC bundle (.claude/skills/everything-claude-code/SKILL.md) 2026-04-03 12:33:30 +00:00
ecc-tools[bot]
3ccdae3945 feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json) 2026-04-03 12:33:29 +00:00
Affaan Mustafa
5df943ed2b feat: add nestjs development patterns 2026-04-02 18:27:51 -07:00
Affaan Mustafa
b10080c7dd fix: respect home overrides in hook utilities 2026-04-02 18:22:07 -07:00
Affaan Mustafa
8bd5a7a5d9 fix: restore markdownlint baseline on main 2026-04-02 18:05:27 -07:00
Affaan Mustafa
16e9b17ad7 fix: clean up observer sessions on lifecycle end 2026-04-02 18:02:29 -07:00
Affaan Mustafa
be0c56957b feat: add jira integration workflow 2026-04-02 17:51:03 -07:00
Affaan Mustafa
badccc3db9 feat: add C# and Dart language support 2026-04-02 17:48:43 -07:00
Affaan Mustafa
31c9f7c33e feat: add web frontend rules and design quality hook 2026-04-02 17:33:17 -07:00
Affaan Mustafa
a60d5fbc00 fix: port ci and markdown cleanup from backlog 2026-04-02 17:09:21 -07:00
Affaan Mustafa
70be8f9f44 fix: add POSIX fallback for observer lazy-start 2026-04-02 17:07:32 -07:00
Affaan Mustafa
635fcbd715 refactor: consolidate writing voice rules 2026-04-02 15:45:19 -07:00
Affaan Mustafa
bf3fd69d2c refactor: extract social graph ranking core 2026-04-02 02:51:24 -07:00
Affaan Mustafa
31525854b5 feat(skills): add brand voice and network ops lanes 2026-04-01 19:41:03 -07:00
Affaan Mustafa
8f63697113 fix: port safe ci cleanup from backlog 2026-04-01 16:09:54 -07:00
Affaan Mustafa
9a6080f2e1 feat: sync the codex baseline and agent roles 2026-04-01 16:08:03 -07:00
Affaan Mustafa
dba5ae779b fix: harden install and codex sync portability 2026-04-01 16:04:56 -07:00
Affaan Mustafa
401966bc18 feat: expand lead intelligence outreach channels 2026-04-01 11:37:26 -07:00
Affaan Mustafa
1abeff9be7 feat: add connected operator workflow skills 2026-04-01 02:37:42 -07:00
Affaan Mustafa
975100db55 refactor: collapse legacy command bodies into skills 2026-04-01 02:25:42 -07:00
Affaan Mustafa
6833454778 fix: dedupe managed hooks by semantic identity 2026-04-01 02:16:32 -07:00
Affaan Mustafa
e134e492cb docs: close bundle drift and sync plugin guidance 2026-04-01 02:13:09 -07:00
Affaan Mustafa
f3db349984 docs: shift repo guidance to skills-first workflows 2026-04-01 02:11:24 -07:00
Affaan Mustafa
5194d2000a docs: tighten voice-driven content skills 2026-04-01 01:31:40 -07:00
Affaan Mustafa
43ac81f1ac fix: harden reusable release tag validation 2026-03-31 23:00:58 -07:00
Affaan Mustafa
e1bc08fa6e fix: harden install planning and sync tracked catalogs 2026-03-31 22:57:48 -07:00
Affaan Mustafa
03c4a90ffa fix: update ecc2 ratatui dependency 2026-03-31 18:23:29 -07:00
Affaan Mustafa
d4b5ca7483 docs: tighten pr backlog classification 2026-03-31 18:21:05 -07:00
Affaan Mustafa
51a87d86d9 docs: add working context file 2026-03-31 18:08:59 -07:00
Affaan Mustafa
a273c62f35 fix: restore ci lockfile and hook validation 2026-03-31 18:00:07 -07:00
153 changed files with 10400 additions and 2724 deletions

View File

@@ -6,7 +6,7 @@ origin: ECC
# Article Writing
Write long-form content that sounds like a real person or brand, not generic AI output.
Write long-form content that sounds like an actual person with a point of view, not an LLM smoothing itself into paste.
## When to Activate
@@ -17,69 +17,63 @@ Write long-form content that sounds like a real person or brand, not generic AI
## Core Rules
1. Lead with the concrete thing: example, output, anecdote, number, screenshot description, or code block.
1. Lead with the concrete thing: artifact, example, output, anecdote, number, screenshot, or code.
2. Explain after the example, not before.
3. Prefer short, direct sentences over padded ones.
4. Use specific numbers when available and sourced.
5. Never invent biographical facts, company metrics, or customer evidence.
3. Keep sentences tight unless the source voice is intentionally expansive.
4. Use proof instead of adjectives.
5. Never invent facts, credibility, or customer evidence.
## Voice Capture Workflow
If the user wants a specific voice, collect one or more of:
- published articles
- newsletters
- X / LinkedIn posts
- docs or memos
- a short style guide
Then extract:
- sentence length and rhythm
- whether the voice is formal, conversational, or sharp
- favored rhetorical devices such as parentheses, lists, fragments, or questions
- tolerance for humor, opinion, and contrarian framing
- formatting habits such as headers, bullets, code blocks, and pull quotes
If no voice references are given, default to a direct, operator-style voice: concrete, practical, and low on hype.
## Voice Handling
If the user wants a specific voice, run `brand-voice` first and reuse its `VOICE PROFILE`.
Do not duplicate a second style-analysis pass here unless the user explicitly asks for one.
If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.
## Banned Patterns
Delete and rewrite any of these:
- generic openings like "In today's rapidly evolving landscape"
- filler transitions such as "Moreover" and "Furthermore"
- hype phrases like "game-changer", "cutting-edge", or "revolutionary"
- vague claims without evidence
- biography or credibility claims not backed by provided context
- "In today's rapidly evolving landscape"
- "game-changer", "cutting-edge", "revolutionary"
- "here's why this matters" as a standalone bridge
- fake vulnerability arcs
- a closing question added only to juice engagement
- biography padding that does not move the argument
- generic AI throat-clearing that delays the point
## Writing Process
1. Clarify the audience and purpose.
2. Build a skeletal outline with one purpose per section.
3. Start each section with evidence, example, or scene.
4. Expand only where the next sentence earns its place.
5. Remove anything that sounds templated or self-congratulatory.
2. Build a hard outline with one job per section.
3. Start sections with proof, artifact, conflict, or example.
4. Expand only where the next sentence earns space.
5. Cut anything that sounds templated, overexplained, or self-congratulatory.
## Structure Guidance
### Technical Guides
- open with what the reader gets
- use code or terminal examples in every major section
- end with concrete takeaways, not a soft summary
- open with what the reader gets
- use code, commands, screenshots, or concrete output in major sections
- end with actionable takeaways, not a soft recap
### Essays / Opinion Pieces
- start with tension, contradiction, or a sharp observation
### Essays / Opinion
- start with tension, contradiction, or a specific observation
- keep one argument thread per section
- use examples that earn the opinion
- make opinions answer to evidence
### Newsletters
- keep the first screen strong
- mix insight with updates, not diary filler
- use clear section labels and easy skim structure
- keep the first screen doing real work
- do not front-load diary filler
- use section labels only when they improve scannability
## Quality Gate
Before delivering:
- verify factual claims against provided sources
- remove filler and corporate language
- confirm the voice matches the supplied examples
- ensure every section adds new information
- check formatting for the intended platform
- factual claims are backed by provided sources
- generic AI transitions are gone
- the voice matches the supplied examples or the agreed `VOICE PROFILE`
- every section adds something new
- formatting matches the intended medium

View File

@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---
# Brand Voice
Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.
## When to Activate
- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry
## Source Priority
Use the strongest real source set available, in this order:
1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy
Do not use generic platform exemplars as source material.
## Collection Workflow
1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
## What to Extract
- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
## Output Contract
Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).
Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults
If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:
- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over
## Hard Bans
Delete and rewrite any of these:
- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals
## Persistence Rules
- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.
## Downstream Use
Use this skill before or inside:
- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email
If another skill already has a partial voice capture section, this skill is the canonical source of truth.

View File

@@ -0,0 +1,55 @@
# Voice Profile Schema
Use this exact structure when building a reusable voice profile:
```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:
Source Set
- source 1
- source 2
- source 3
Rhythm
- short note on sentence length, pacing, and fragmentation
Compression
- how dense or explanatory the writing is
Capitalization
- conventional, mixed, or situational
Parentheticals
- how they are used and how they are not used
Question Use
- rare, frequent, rhetorical, direct, or mostly absent
Claim Style
- how claims are framed, supported, and sharpened
Preferred Moves
- concrete moves the author does use
Banned Moves
- specific patterns the author does not use
CTA Rules
- how, when, or whether to close with asks
Channel Notes
- X:
- LinkedIn:
- Email:
```
Guidelines:
- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
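A profile in this shape can also be checked mechanically before reuse. A minimal sketch (the helper is hypothetical, not part of the skill) that verifies the required sections are present:

```python
# Hypothetical validator for the VOICE PROFILE block above.
# Section names are copied from the schema; the function is illustrative.
REQUIRED_SECTIONS = [
    "VOICE PROFILE", "Source Set", "Rhythm", "Compression",
    "Capitalization", "Parentheticals", "Question Use",
    "Claim Style", "Preferred Moves", "Banned Moves",
    "CTA Rules", "Channel Notes",
]

def missing_sections(profile_text: str) -> list[str]:
    """Return the required section headers absent from a profile."""
    return [s for s in REQUIRED_SECTIONS if s not in profile_text]
```

A downstream skill could refuse to reuse a profile when `missing_sections` returns anything, instead of silently writing from an incomplete voice model.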

View File

@@ -6,83 +6,126 @@ origin: ECC
# Content Engine
Turn one idea into strong, platform-native content instead of posting the same thing everywhere.
Build platform-native content without flattening the author's real voice into platform slop.
## When to Activate
- writing X posts or threads
- drafting LinkedIn posts or launch updates
- scripting short-form video or YouTube explainers
- repurposing articles, podcasts, demos, or docs into social content
- building a lightweight content plan around a launch, milestone, or theme
- repurposing articles, podcasts, demos, docs, or internal notes into public content
- building a launch sequence or ongoing content system around a product, insight, or narrative
## First Questions
Clarify:
- source asset: what are we adapting from
- audience: builders, investors, customers, operators, or general audience
- platform: X, LinkedIn, TikTok, YouTube, newsletter, or multi-platform
- goal: awareness, conversion, recruiting, authority, launch support, or engagement
## Non-Negotiables
1. Start from source material, not generic post formulas.
2. Adapt the format for the platform, not the persona.
3. One post should carry one actual claim.
4. Specificity beats adjectives.
5. No engagement bait unless the user explicitly asks for it.
## Core Rules
1. Adapt for the platform. Do not cross-post the same copy.
2. Hooks matter more than summaries.
3. Every post should carry one clear idea.
4. Use specifics over slogans.
5. Keep the ask small and clear.
## Source-First Workflow
Before drafting, identify the source set:
- published articles
- notes or internal memos
- product demos
- docs or changelogs
- transcripts
- screenshots
- prior posts from the same author
## Platform Guidance
If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
## Voice Handling
`brand-voice` is the canonical voice layer.
Run it first when:
- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive
Reuse the resulting `VOICE PROFILE` here instead of rebuilding a second voice model.
If the user wants Affaan / ECC voice specifically, still treat `brand-voice` as the source of truth and feed it the best live or source-derived material available.
## Hard Bans
Delete and rewrite any of these:
- "In today's rapidly evolving landscape"
- "game-changer", "revolutionary", "cutting-edge"
- "here's why this matters" unless it is followed immediately by something concrete
- ending with a LinkedIn-style question just to farm replies
- forced casualness on LinkedIn
- fake engagement padding that was not present in the source material
## Platform Adaptation Rules
### X
- open fast
- one idea per post or per tweet in a thread
- keep links out of the main body unless necessary
- avoid hashtag spam
- open with the strongest claim, artifact, or tension
- keep the compression if the source voice is compressed
- if writing a thread, each post must advance the argument
- do not pad with context the audience does not need
### LinkedIn
- strong first line
- short paragraphs
- more explicit framing around lessons, results, and takeaways
- expand only enough for people outside the immediate niche to follow
- do not turn it into a fake lesson post unless the source material actually is reflective
- no corporate inspiration cadence
- no praise-stacking, no "journey" filler
### TikTok / Short Video
- first 3 seconds must interrupt attention
- script around visuals, not just narration
- one demo, one claim, one CTA
### Short Video
- script around the visual sequence and proof points
- first seconds should show the result, problem, or punch
- do not write narration that sounds better on paper than on screen
### YouTube
- show the result early
- structure by chapter
- refresh the visual every 20-30 seconds
- show the result or tension early
- organize by argument or progression, not filler sections
- use chaptering only when it helps clarity
### Newsletter
- deliver one clear lens, not a bundle of unrelated items
- make section titles skimmable
- keep the opening paragraph doing real work
- open with the point, conflict, or artifact
- do not spend the first paragraph warming up
- every section needs to add something new
## Repurposing Flow
Default cascade:
1. anchor asset: article, video, demo, memo, or launch doc
2. extract 3-7 atomic ideas
3. write platform-native variants
4. trim repetition across outputs
5. align CTAs with platform intent
1. Pick the anchor asset.
2. Extract 3 to 7 atomic claims or scenes.
3. Rank them by sharpness, novelty, and proof.
4. Assign one strong idea per output.
5. Adapt structure for each platform.
6. Strip platform-shaped filler.
7. Run the quality gate.
## Deliverables
When asked for a campaign, return:
- a short voice profile if voice matching matters
- the core angle
- platform-specific drafts
- optional posting order
- optional CTA variants
- any missing inputs needed before publishing
- platform-native drafts
- posting order only if it helps execution
- gaps that must be filled before publishing
## Quality Gate
Before delivering:
- each draft reads natively for its platform
- hooks are strong and specific
- no generic hype language
- every draft sounds like the intended author, not the platform stereotype
- every draft contains a real claim, proof point, or concrete observation
- no generic hype language remains
- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- the CTA matches the content and audience
- any CTA is earned and user-approved
## Related Skills
- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output

View File

@@ -6,183 +6,106 @@ origin: ECC
# Crosspost
Distribute content across multiple social platforms with platform-native adaptation.
Distribute content across platforms without turning it into the same fake post in four costumes.
## When to Activate
- User wants to post content to multiple platforms
- Publishing announcements, launches, or updates across social media
- Repurposing a post from one platform to others
- User says "crosspost", "post everywhere", "share on all platforms", or "distribute this"
- the user wants to publish the same underlying idea across multiple platforms
- a launch, update, release, or essay needs platform-specific versions
- the user says "crosspost", "post this everywhere", or "adapt this for X and LinkedIn"
## Core Rules
1. **Never post identical content cross-platform.** Each platform gets a native adaptation.
2. **Primary platform first.** Post to the main platform, then adapt for others.
3. **Respect platform conventions.** Length limits, formatting, link handling all differ.
4. **One idea per post.** If the source content has multiple ideas, split across posts.
5. **Attribution matters.** If crossposting someone else's content, credit the source.
1. Do not publish identical copy across platforms.
2. Preserve the author's voice across platforms.
3. Adapt for constraints, not stereotypes.
4. One post should still be about one thing.
5. Do not invent a CTA, question, or moral if the source did not earn one.
## Platform Specifications
| Platform | Max Length | Link Handling | Hashtags | Media |
|----------|-----------|---------------|----------|-------|
| X | 280 chars (4000 for Premium) | Counted in length | Minimal (1-2 max) | Images, video, GIFs |
| LinkedIn | 3000 chars | Not counted in length | 3-5 relevant | Images, video, docs, carousels |
| Threads | 500 chars | Separate link attachment | None typical | Images, video |
| Bluesky | 300 chars | Via facets (rich text) | None (use feeds) | Images |
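The length limits in the table above can be enforced mechanically before posting. A rough sketch, with limits copied from the table (the helper name and the premium-as-separate-key modeling are illustrative, not an existing tool):

```python
# Character limits per platform, taken from the Platform Specifications table.
# X Premium's 4000-char limit is modeled here as its own key for simplicity.
LIMITS = {
    "x": 280,
    "x_premium": 4000,
    "linkedin": 3000,
    "threads": 500,
    "bluesky": 300,
}

def fits(platform: str, text: str) -> bool:
    """True if a draft fits the platform's character limit."""
    return len(text) <= LIMITS[platform]
```

Running this check on every adapted version catches the common failure of pasting a LinkedIn-length draft into Bluesky or Threads.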
## Workflow
### Step 1: Create Source Content
Start with the core idea. Use `content-engine` skill for high-quality drafts:
- Identify the single core message
- Determine the primary platform (where the audience is biggest)
- Draft the primary platform version first
### Step 1: Start with the Primary Version
Pick the strongest source version first:
- the original X post
- the original article
- the launch note
- the thread
- the memo or changelog
Use `content-engine` first if the source still needs voice shaping.
### Step 2: Identify Target Platforms
Ask the user or determine from context:
- Which platforms to target
- Priority order (primary gets the best version)
- Any platform-specific requirements (e.g., LinkedIn needs professional tone)
### Step 2: Capture the Voice Fingerprint
Run `brand-voice` first if the source voice is not already captured in the current session.
Reuse the resulting `VOICE PROFILE` directly.
Do not build a second ad hoc voice checklist here unless the user explicitly wants a fresh override for this campaign.
### Step 3: Adapt Per Platform
For each target platform, transform the content:
**X adaptation:**
- Open with a hook, not a summary
- Cut to the core insight fast
- Keep links out of main body when possible
- Use thread format for longer content
**LinkedIn adaptation:**
- Strong first line (visible before "see more")
- Short paragraphs with line breaks
- Frame around lessons, results, or professional takeaways
- More explicit context than X (LinkedIn audience needs framing)
**Threads adaptation:**
- Conversational, casual tone
- Shorter than LinkedIn, less compressed than X
- Visual-first if possible
**Bluesky adaptation:**
- Direct and concise (300 char limit)
- Community-oriented tone
- Use feeds/lists for topic targeting instead of hashtags
### Step 3: Adapt by Platform Constraint
### X
- keep it compressed
- lead with the sharpest claim or artifact
- use a thread only when a single post would collapse the argument
- avoid hashtags and generic filler
### LinkedIn
- add only the context needed for people outside the niche
- do not turn it into a fake founder-reflection post
- do not add a closing question just because it is LinkedIn
- do not force a polished "professional tone" if the author is naturally sharper
### Threads
- keep it readable and direct
- do not write fake hyper-casual creator copy
- do not paste the LinkedIn version and shorten it
### Bluesky
- keep it concise
- preserve the author's cadence
- do not rely on hashtags or feed-gaming language
### Step 4: Post Primary Platform
Post to the primary platform first:
- Use `x-api` skill for X
- Use platform-specific APIs or tools for others
- Capture the post URL for cross-referencing
### Step 5: Post to Secondary Platforms
Post adapted versions to remaining platforms:
- Stagger timing (not all at once — 30-60 min gaps)
- Include cross-platform references where appropriate ("longer thread on X" etc.)
## Posting Order
Default:
1. post the strongest native version first
2. adapt for the secondary platforms
3. stagger timing only if the user wants sequencing help
Do not add cross-platform references unless useful. Most of the time, the post should stand on its own.
## Content Adaptation Examples
### Source: Product Launch
**X version:**
```
We just shipped [feature].
[One specific thing it does that's impressive]
[Link]
```
**LinkedIn version:**
```
Excited to share: we just launched [feature] at [Company].
Here's why it matters:
[2-3 short paragraphs with context]
[Takeaway for the audience]
[Link]
```
**Threads version:**
```
just shipped something cool — [feature]
[casual explanation of what it does]
link in bio
```
### Source: Technical Insight
**X version:**
```
TIL: [specific technical insight]
[Why it matters in one sentence]
```
**LinkedIn version:**
```
A pattern I've been using that's made a real difference:
[Technical insight with professional framing]
[How it applies to teams/orgs]
#relevantHashtag
```
## Banned Patterns
Delete and rewrite any of these:
- "Excited to share"
- "Here's what I learned"
- "What do you think?"
- "link in bio" unless that is literally true
- generic "professional takeaway" paragraphs that were not in the source
## API Integration
### Batch Crossposting Service (Example Pattern)
If using a crossposting service (e.g., Postbridge, Buffer, or a custom API), the pattern looks like:
```python
import os
import requests

resp = requests.post(
    "https://api.postbridge.io/v1/posts",
    headers={"Authorization": f"Bearer {os.environ['POSTBRIDGE_API_KEY']}"},
    json={
        "platforms": ["twitter", "linkedin", "threads"],
        "content": {
            "twitter": {"text": x_version},
            "linkedin": {"text": linkedin_version},
            "threads": {"text": threads_version}
        }
    }
)
```
### Manual Posting
Without Postbridge, post to each platform using its native API:
- X: Use `x-api` skill patterns
- LinkedIn: LinkedIn API v2 with OAuth 2.0
- Threads: Threads API (Meta)
- Bluesky: AT Protocol API
## Output Format
Return:
- the primary platform version
- adapted variants for each requested platform
- a short note on what changed and why
- any publishing constraint the user still needs to resolve
## Quality Gate
Before posting:
- [ ] Each platform version reads naturally for that platform
- [ ] No identical content across platforms
- [ ] Length limits respected
- [ ] Links work and are placed appropriately
- [ ] Tone matches platform conventions
- [ ] Media is sized correctly for each platform
Before delivering:
- each version reads like the same author under different constraints
- no platform version feels padded or sanitized
- no copy is duplicated verbatim across platforms
- any extra context added for LinkedIn or newsletter use is actually necessary
## Related Skills
- `content-engine` — Generate platform-native content
- `x-api` — X/Twitter API integration
- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows

View File

@@ -1,11 +1,11 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
description: Development conventions and patterns for everything-claude-code. TypeScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-01
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-03
## Overview
@@ -13,7 +13,7 @@ This skill teaches Claude the development patterns and conventions used in every
## Tech Stack
- **Primary Language**: JavaScript
- **Primary Language**: TypeScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
@@ -27,20 +27,17 @@ Activate this skill when:
## Commit Conventions
Follow these commit message conventions based on 500 analyzed commits.
Follow these commit message conventions based on 10 analyzed commits.
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `feat`
- `docs`
- `chore`
### Message Guidelines
- Average message length: ~57 characters
- Average message length: ~94 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
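The conventions above can be linted mechanically on the first line of a commit message. A sketch (the regex and length threshold are illustrative, not the repo's actual tooling):

```python
import re

# Conventional-commit first line: one of the prefixes used in this repo,
# an optional (scope), an optional "!", then ": " and a non-empty summary.
PATTERN = re.compile(r"^(feat|fix|docs|chore)(\([^)]+\))?(!)?: \S.*$")

def check_first_line(message: str, max_len: int = 100) -> bool:
    """Validate a commit message's first line against the guidance above."""
    first = message.splitlines()[0] if message else ""
    return bool(PATTERN.match(first)) and len(first) <= max_len
```

Imperative mood cannot be checked by a regex alone, but prefix, scope shape, and first-line length can be, which covers most drift.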
@@ -48,49 +45,37 @@ Follow these commit message conventions based on 500 analyzed commits.
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/commands/add-new-agent-or-skill.md)
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
```
*Commit message example*
```text
docs: add Claude Code troubleshooting workarounds
feat: add everything-claude-code-conventions ECC bundle (.claude/identity.json)
```
*Commit message example*
```text
refactor: fold social graph ranking into lead intelligence
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/explorer.toml)
```
*Commit message example*
```text
chore: ignore local orchestration artifacts
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/reviewer.toml)
```
*Commit message example*
```text
fix(security): remove evalview-agent-testing skill — external dependency
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/docs-researcher.toml)
```
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/commands/add-new-install-target.md)
```
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/commands/feature-development.md)
```
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)
feat: add everything-claude-code-conventions ECC bundle (.claude/commands/feature-development.md)
```
## Architecture
@@ -99,21 +84,6 @@ feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
@@ -121,7 +91,7 @@ This project uses **hybrid** module organization.
## Code Style
### Language: JavaScript
### Language: TypeScript
### Naming Conventions
@@ -134,7 +104,7 @@ This project uses **hybrid** module organization.
### Import Style: Relative Imports
### Export Style: Mixed Style
### Export Style: Named Exports
*Preferred import style*
@@ -145,39 +115,13 @@ import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```typescript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
*Preferred export style*
```typescript
// Use named exports
export function calculateTotal() { ... }
export const TAX_RATE = 0.1
export interface Order { ... }
```
## Common Workflows
@@ -188,7 +132,7 @@ These workflows were detected from analyzing commit patterns.
Standard feature implementation workflow
**Frequency**: ~20 times per month
**Frequency**: ~30 times per month
**Steps**:
1. Add feature implementation
@@ -196,185 +140,37 @@ Standard feature implementation workflow
3. Update documentation
**Files typically involved**:
- `skills/remotion-video-creation/rules/assets/*`
- `.opencode/*`
- `.opencode/plugins/*`
- `**/*.test.*`
- `.claude/commands/*`
**Example commit sequence**:
```
fix: CI fixes, security audit, remotion skill, lead-intelligence, npm audit (#1039)
chore(deps-dev): bump globals in the minor-and-patch group (#1062)
chore(deps): bump actions/github-script from 7.1.0 to 8.0.0 (#1059)
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
feat: add everything-claude-code-conventions ECC bundle (.claude/skills/everything-claude-code/SKILL.md)
feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/SKILL.md)
```
### Add New Install Target
### Add Ecc Bundle Component
Adds support for a new install target (e.g., CodeBuddy, Gemini) to the system, including scripts, manifests, schemas, and tests.
Adds a new component to the everything-claude-code-conventions ECC bundle, such as a tool definition, skill file, identity, or command documentation.
**Frequency**: ~2 times per month
**Frequency**: ~5 times per month
**Steps**:
1. Create new install scripts (install.js, install.sh, uninstall.js, uninstall.sh) in a dedicated directory (e.g., .codebuddy/ or .gemini/).
2. Add or update a README or documentation file for the new target.
3. Update manifests/install-modules.json to register the new target.
4. Update schemas/ecc-install-config.schema.json and schemas/install-modules.schema.json as needed.
5. Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>-project.js.
6. Update or add tests in tests/lib/install-targets.test.js.
7. Update registry or related files if necessary.
1. Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
2. Commit the file with a message indicating addition to the ECC bundle
**Files typically involved**:
- `manifests/install-modules.json`
- `schemas/ecc-install-config.schema.json`
- `schemas/install-modules.schema.json`
- `scripts/lib/install-manifests.js`
- `scripts/lib/install-targets/*.js`
- `tests/lib/install-targets.test.js`
- `.*/README.md`
- `.*/install.js`
- `.*/install.sh`
- `.*/uninstall.js`
- `.*/uninstall.sh`
- `.claude/ecc-tools.json`
- `.claude/skills/everything-claude-code/SKILL.md`
- `.agents/skills/everything-claude-code/SKILL.md`
- `.claude/identity.json`
- `.claude/commands/feature-development.md`
- `.claude/commands/add-ecc-bundle-component.md`
**Example commit sequence**:
```
Create new install scripts (install.js, install.sh, uninstall.js, uninstall.sh) in a dedicated directory (e.g., .codebuddy/ or .gemini/).
Add or update a README or documentation file for the new target.
Update manifests/install-modules.json to register the new target.
Update schemas/ecc-install-config.schema.json and schemas/install-modules.schema.json as needed.
Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>-project.js.
Update or add tests in tests/lib/install-targets.test.js.
Update registry or related files if necessary.
```
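The two steps above amount to writing one file and committing it with a bundle-prefixed message. A minimal sketch; the component path and message text mirror the examples in this section, and the write happens in a scratch directory so nothing real is touched:

```python
import pathlib
import tempfile

# Work in a scratch directory; a real run would target the repo root.
root = pathlib.Path(tempfile.mkdtemp())
component = root / ".claude" / "ecc-tools.json"
component.parent.mkdir(parents=True, exist_ok=True)
component.write_text('{"version": "1.3"}\n')  # placeholder content, not the real manifest

# Conventional commit message following the bundle pattern seen above.
message = (
    "feat: add everything-claude-code-conventions ECC bundle "
    f"({component.relative_to(root).as_posix()})"
)
print(message)
```

The actual `git add` and `git commit` are left to the caller; the point is that the message embeds the file path, which is what makes these commits greppable.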
### Add New Skill Or Agent
Adds a new skill or agent to the system, including documentation and registration in manifests.
**Frequency**: ~3 times per month
**Steps**:
1. Create SKILL.md in skills/<skill-name>/ or .claude/skills/<skill-name>/ or .agents/skills/<skill-name>/.
2. Create agent definition(s) in agents/<agent-name>.md if needed.
3. Update manifests/install-modules.json or other relevant manifest files.
4. Add or update documentation (README.md, AGENTS.md, etc.).
**Files typically involved**:
- `skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `agents/*.md`
- `manifests/install-modules.json`
- `README.md`
- `AGENTS.md`
**Example commit sequence**:
```
Create SKILL.md in skills/<skill-name>/ or .claude/skills/<skill-name>/ or .agents/skills/<skill-name>/.
Create agent definition(s) in agents/<agent-name>.md if needed.
Update manifests/install-modules.json or other relevant manifest files.
Add or update documentation (README.md, AGENTS.md, etc.).
```
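Steps 1 and 3 above can be sketched as a small scaffold: write `SKILL.md` with frontmatter, then register the skill in the manifest. The skill name, frontmatter fields, and manifest shape here are illustrative assumptions; copy the real schema from `manifests/install-modules.json` before using this:

```python
import json
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
name = "example-skill"  # hypothetical skill name

# Step 1: create SKILL.md with the frontmatter convention used in this repo.
skill = root / "skills" / name / "SKILL.md"
skill.parent.mkdir(parents=True)
skill.write_text(f"---\nname: {name}\ndescription: Placeholder description.\n---\n# {name}\n")

# Step 3: register it in the manifest (shape is an assumption; see the real file).
manifest = root / "manifests" / "install-modules.json"
manifest.parent.mkdir(parents=True)
manifest.write_text(json.dumps({"skills": [name]}, indent=2))
```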
### Add Or Update Command Workflow
Adds or extends user-facing commands (e.g., PRP, santa-loop, GAN harness), often with review/feedback cycles.
**Frequency**: ~2 times per month
**Steps**:
1. Create or update command markdown files in commands/ or .claude/commands/.
2. Add YAML frontmatter, usage, and output sections as per convention.
3. Address review feedback with follow-up fixes (e.g., parameter quoting, API usage, documentation).
4. If needed, update related files (e.g., hooks, tests, AGENTS.md).
**Files typically involved**:
- `commands/*.md`
- `.claude/commands/*.md`
**Example commit sequence**:
```
Create or update command markdown files in commands/ or .claude/commands/.
Add YAML frontmatter, usage, and output sections as per convention.
Address review feedback with follow-up fixes (e.g., parameter quoting, API usage, documentation).
If needed, update related files (e.g., hooks, tests, AGENTS.md).
```
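The frontmatter convention in step 2 matches the command files shown later on this page (`name`, `description`, `allowed_tools`). A sketch that writes a hypothetical command file in that shape; the command name and section bodies are placeholders:

```python
import pathlib
import tempfile
import textwrap

root = pathlib.Path(tempfile.mkdtemp())
cmd = root / ".claude" / "commands" / "example-command.md"  # hypothetical command
cmd.parent.mkdir(parents=True)

# YAML frontmatter plus the usage/output sections the convention calls for.
cmd.write_text(textwrap.dedent("""\
    ---
    name: example-command
    description: Workflow command scaffold for example-command.
    allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
    ---
    # /example-command

    ## Usage
    Describe when to run this command.

    ## Output
    Describe what the command should produce.
    """))
```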
### Add Or Update Opencode Agent
Adds new OpenCode agent prompt files and updates agent registration in opencode.json.
**Frequency**: ~2 times per month
**Steps**:
1. Add new prompt files in .opencode/prompts/agents/*.txt.
2. Update .opencode/opencode.json to register new agents.
3. Update AGENTS.md to reflect new agent count or details.
4. Address review feedback (e.g., remove agents, update documentation).
**Files typically involved**:
- `.opencode/prompts/agents/*.txt`
- `.opencode/opencode.json`
- `AGENTS.md`
**Example commit sequence**:
```
Add new prompt files in .opencode/prompts/agents/*.txt.
Update .opencode/opencode.json to register new agents.
Update AGENTS.md to reflect new agent count or details.
Address review feedback (e.g., remove agents, update documentation).
```
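Steps 1 and 2 above pair a prompt file with a registration entry. The `opencode.json` structure below is an assumption for illustration only; mirror whatever the repo's `.opencode/opencode.json` actually contains:

```python
import json
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
agent = "example-agent"  # hypothetical agent name

# Step 1: add the prompt file.
prompt = root / ".opencode" / "prompts" / "agents" / f"{agent}.txt"
prompt.parent.mkdir(parents=True)
prompt.write_text("You are a focused review agent. Keep output terse.\n")

# Step 2: register the agent (structure is illustrative, not the real schema).
config_path = root / ".opencode" / "opencode.json"
config = {"agents": {agent: {"prompt": f"prompts/agents/{agent}.txt"}}}
config_path.write_text(json.dumps(config, indent=2))
```

Step 3 (updating the agent count in AGENTS.md) is a documentation edit and is easiest to verify by grepping for the old count.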
### Add Or Update Hook Or CI Script
Implements or fixes hooks and related scripts for formatting, typechecking, session management, or CI integration.
**Frequency**: ~2 times per month
**Steps**:
1. Edit or add scripts in scripts/hooks/ or hooks/hooks.json.
2. Update or add tests in tests/hooks/.
3. Address review feedback (e.g., race conditions, timeouts, path normalization, security).
4. If needed, update related files (e.g., .cursor/hooks/after-file-edit.js, .github/workflows/*).
**Files typically involved**:
- `scripts/hooks/*.js`
- `scripts/hooks/*.sh`
- `hooks/hooks.json`
- `tests/hooks/*.js`
- `.cursor/hooks/*.js`
- `.github/workflows/*.yml`
**Example commit sequence**:
```
Edit or add scripts in scripts/hooks/ or hooks/hooks.json.
Update or add tests in tests/hooks/.
Address review feedback (e.g., race conditions, timeouts, path normalization, security).
If needed, update related files (e.g., .cursor/hooks/after-file-edit.js, .github/workflows/*).
```
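A hook change in step 1 usually means a script plus an entry in `hooks/hooks.json`. The event name and field names below are placeholders; copy the structure from the repo's existing `hooks.json` rather than trusting this sketch:

```python
import json
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())

# Hook script (step 1): keep it idempotent and fast, since it runs on every edit.
script = root / "scripts" / "hooks" / "format-check.js"
script.parent.mkdir(parents=True)
script.write_text("// placeholder formatter hook\nprocess.exit(0);\n")

# Registration entry; "after-file-edit" and the field names are assumptions.
hooks_path = root / "hooks" / "hooks.json"
hooks_path.parent.mkdir(parents=True)
hooks = {"after-file-edit": [{"command": "node scripts/hooks/format-check.js", "timeout": 30}]}
hooks_path.write_text(json.dumps(hooks, indent=2))
```

An explicit timeout matters here because the review-feedback step above repeatedly calls out race conditions and timeouts as the failure modes.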
### Dependency Update Via Dependabot
Automated or manual update of dependencies (npm packages or GitHub Actions) across multiple workflow files.
**Frequency**: ~4 times per month
**Steps**:
1. Update dependency version in package.json, yarn.lock, or relevant workflow YAML.
2. Update .github/workflows/*.yml to use new action versions.
3. Commit with standardized message referencing the dependency and version bump.
**Files typically involved**:
- `package.json`
- `yarn.lock`
- `package-lock.json`
- `.github/workflows/*.yml`
**Example commit sequence**:
```
Update dependency version in package.json, yarn.lock, or relevant workflow YAML.
Update .github/workflows/*.yml to use new action versions.
Commit with standardized message referencing the dependency and version bump.
Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
Commit the file with a message indicating addition to the ECC bundle
```
@@ -385,14 +181,12 @@ Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
- Prefer named exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---

View File

@@ -6,7 +6,7 @@ origin: ECC
# Investor Outreach
Write investor communication that is short, personalized, and easy to act on.
Write investor communication that is short, concrete, and easy to act on.
## When to Activate
@@ -20,17 +20,32 @@ Write investor communication that is short, personalized, and easy to act on.
1. Personalize every outbound message.
2. Keep the ask low-friction.
3. Use proof, not adjectives.
3. Use proof instead of adjectives.
4. Stay concise.
5. Never send generic copy that could go to any investor.
5. Never send copy that could go to any investor.
## Voice Handling
If the user's voice matters, run `brand-voice` first and reuse its `VOICE PROFILE`.
This skill should keep the investor-specific structure and ask discipline, not recreate its own parallel voice system.
## Hard Bans
Delete and rewrite any of these:
- "I'd love to connect"
- "excited to share"
- generic thesis praise without a real tie-in
- vague founder adjectives
- begging language
- soft closing questions when a direct ask is clearer
## Cold Email Structure
1. subject line: short and specific
2. opener: why this investor specifically
3. pitch: what the company does, why now, what proof matters
3. pitch: what the company does, why now, and what proof matters
4. ask: one concrete next step
5. sign-off: name, role, one credibility anchor if needed
5. sign-off: name, role, and one credibility anchor if needed
## Personalization Sources
@@ -40,14 +55,14 @@ Reference one or more of:
- a mutual connection
- a clear market or product fit with the investor's focus
If that context is missing, ask for it or state that the draft is a template awaiting personalization.
If that context is missing, state that the draft still needs personalization instead of pretending it is finished.
## Follow-Up Cadence
Default:
- day 0: initial outbound
- day 4-5: short follow-up with one new data point
- day 10-12: final follow-up with a clean close
- day 4 or 5: short follow-up with one new data point
- day 10 to 12: final follow-up with a clean close
Do not keep nudging after that unless the user wants a longer sequence.
@@ -69,8 +84,8 @@ Include:
## Quality Gate
Before delivering:
- message is personalized
- the message is genuinely personalized
- the ask is explicit
- there is no fluff or begging language
- the proof point is concrete
- filler praise and softener language are gone
- word count stays tight

View File

@@ -19,7 +19,7 @@ Programmatic interaction with X (Twitter) for posting, reading, searching, and a
## Authentication
### OAuth 2.0 (App-Only / User Context)
### OAuth 2.0 Bearer Token (App-Only)
Best for: read-heavy operations, search, public data.
@@ -46,25 +46,27 @@ tweets = resp.json()
### OAuth 1.0a (User Context)
Required for: posting tweets, managing account, DMs.
Required for: posting tweets, managing account, DMs, and any write flow.
```bash
# Environment setup — source before use
export X_API_KEY="your-api-key"
export X_API_SECRET="your-api-secret"
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_SECRET="your-access-secret"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```
Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.
```python
import os
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(
os.environ["X_API_KEY"],
client_secret=os.environ["X_API_SECRET"],
os.environ["X_CONSUMER_KEY"],
client_secret=os.environ["X_CONSUMER_SECRET"],
resource_owner_key=os.environ["X_ACCESS_TOKEN"],
resource_owner_secret=os.environ["X_ACCESS_SECRET"],
resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```
@@ -92,7 +94,6 @@ def post_thread(oauth, tweets: list[str]) -> list[str]:
if reply_to:
payload["reply"] = {"in_reply_to_tweet_id": reply_to}
resp = oauth.post("https://api.x.com/2/tweets", json=payload)
resp.raise_for_status()
tweet_id = resp.json()["data"]["id"]
ids.append(tweet_id)
reply_to = tweet_id
@@ -126,6 +127,21 @@ resp = requests.get(
)
```
### Pull Recent Original Posts for Voice Modeling
```python
resp = requests.get(
"https://api.x.com/2/tweets/search/recent",
headers=headers,
params={
"query": "from:affaanmustafa -is:retweet -is:reply",
"max_results": 25,
"tweet.fields": "created_at,public_metrics",
}
)
voice_samples = resp.json()
```
### Get User by Username
```python
@@ -155,17 +171,12 @@ resp = oauth.post(
)
```
## Rate Limits Reference
## Rate Limits
| Endpoint | Limit | Window |
|----------|-------|--------|
| POST /2/tweets | 200 | 15 min |
| GET /2/tweets/search/recent | 450 | 15 min |
| GET /2/users/:id/tweets | 1500 | 15 min |
| GET /2/users/by/username | 300 | 15 min |
| POST media/upload | 415 | 15 min |
Always check `x-rate-limit-remaining` and `x-rate-limit-reset` headers.
X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:
- Check the current X developer docs before hardcoding assumptions
- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
- Back off automatically instead of relying on static tables in code
```python
import time
# Sketch: once the quota hits zero, sleep until the advertised reset timestamp.
if int(resp.headers.get("x-rate-limit-remaining", "1")) == 0:
    time.sleep(max(0, int(resp.headers["x-rate-limit-reset"]) - int(time.time())))
```
@@ -202,13 +213,18 @@ else:
## Integration with Content Engine
Use `content-engine` skill to generate platform-native content, then post via X API:
1. Generate content with content-engine (X platform format)
2. Validate length (280 chars for single tweet)
3. Post via X API using patterns above
4. Track engagement via public_metrics
Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
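Step 4's length check is the only mechanical part of this pipeline. The 280-character single-tweet limit is X's standard constraint, but X's real counting weights URLs and some characters differently, so treat this as a first-pass check rather than the authoritative count:

```python
def validate_thread(tweets: list[str], limit: int = 280) -> list[int]:
    """Return indexes of tweets that exceed the naive character limit."""
    return [i for i, t in enumerate(tweets) if len(t) > limit]

draft = ["Short opener.", "x" * 300]
too_long = validate_thread(draft)
print(too_long)  # prints [1]: the 300-character tweet fails the naive check
```

Any index this returns should send the draft back to content generation before the approval gate in step 5, not after it.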
## Related Skills
- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

View File

@@ -6,7 +6,7 @@ These constraints are not obvious from public examples and have caused repeated
### Custom Endpoints and Gateways
ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, commands, and skills execute locally after the CLI starts successfully.
ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, skills, and any retained legacy command shims execute locally after the CLI starts successfully.
Use Claude Code's own environment/configuration for transport selection, for example:

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
"name": "everything-claude-code",
"description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use",
"description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner — agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use",
"owner": {
"name": "Affaan Mustafa",
"email": "me@affaanmustafa.com"
@@ -13,7 +13,7 @@
{
"name": "everything-claude-code",
"source": "./",
"description": "The most comprehensive Claude Code plugin — 14+ agents, 56+ skills, 33+ commands, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
"description": "The most comprehensive Claude Code plugin — 36 agents, 142 skills, 68 legacy command shims, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
"version": "1.9.0",
"author": {
"name": "Affaan Mustafa",

View File

@@ -1,7 +1,7 @@
{
"name": "everything-claude-code",
"version": "1.9.0",
"description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use",
"description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use",
"author": {
"name": "Affaan Mustafa",
"url": "https://x.com/affaanmustafa"

View File

@@ -0,0 +1,39 @@
---
name: add-ecc-bundle-component
description: Workflow command scaffold for add-ecc-bundle-component in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /add-ecc-bundle-component
Use this workflow when working on **add-ecc-bundle-component** in `everything-claude-code`.
## Goal
Adds a new component to the everything-claude-code-conventions ECC bundle, such as a tool definition, skill file, identity, or command documentation.
## Common Files
- `.claude/ecc-tools.json`
- `.claude/skills/everything-claude-code/SKILL.md`
- `.agents/skills/everything-claude-code/SKILL.md`
- `.claude/identity.json`
- `.claude/commands/feature-development.md`
- `.claude/commands/add-ecc-bundle-component.md`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
- Commit the file with a message indicating addition to the ECC bundle
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

View File

@@ -1,42 +0,0 @@
---
name: add-new-install-target
description: Workflow command scaffold for add-new-install-target in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /add-new-install-target
Use this workflow when working on **add-new-install-target** in `everything-claude-code`.
## Goal
Adds support for a new install target (e.g., CodeBuddy, Gemini) to the system, including scripts, manifests, schemas, and tests.
## Common Files
- `manifests/install-modules.json`
- `schemas/ecc-install-config.schema.json`
- `schemas/install-modules.schema.json`
- `scripts/lib/install-manifests.js`
- `scripts/lib/install-targets/*.js`
- `tests/lib/install-targets.test.js`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create new install scripts (install.js, install.sh, uninstall.js, uninstall.sh) in a dedicated directory (e.g., .codebuddy/ or .gemini/).
- Add or update a README or documentation file for the new target.
- Update manifests/install-modules.json to register the new target.
- Update schemas/ecc-install-config.schema.json and schemas/install-modules.schema.json as needed.
- Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>-project.js.
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

View File

@@ -1,41 +0,0 @@
---
name: add-new-skill-or-agent
description: Workflow command scaffold for add-new-skill-or-agent in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /add-new-skill-or-agent
Use this workflow when working on **add-new-skill-or-agent** in `everything-claude-code`.
## Goal
Adds a new skill or agent to the system, including documentation and registration in manifests.
## Common Files
- `skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `agents/*.md`
- `manifests/install-modules.json`
- `README.md`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create SKILL.md in skills/<skill-name>/ or .claude/skills/<skill-name>/ or .agents/skills/<skill-name>/.
- Create agent definition(s) in agents/<agent-name>.md if needed.
- Update manifests/install-modules.json or other relevant manifest files.
- Add or update documentation (README.md, AGENTS.md, etc.).
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

View File

@@ -14,10 +14,7 @@ Standard feature implementation workflow
## Common Files
- `skills/remotion-video-creation/rules/assets/*`
- `.opencode/*`
- `.opencode/plugins/*`
- `**/*.test.*`
- `.claude/commands/*`
## Suggested Sequence

View File

@@ -2,36 +2,28 @@
"version": "1.3",
"schemaVersion": "1.0",
"generatedBy": "ecc-tools",
"generatedAt": "2026-03-31T23:03:58.867Z",
"generatedAt": "2026-04-03T10:14:32.602Z",
"repo": "https://github.com/affaan-m/everything-claude-code",
"profiles": {
"requested": "full",
"recommended": "full",
"effective": "full",
"requestedAlias": "full",
"recommendedAlias": "full",
"effectiveAlias": "full"
"requested": "developer",
"recommended": "developer",
"effective": "developer",
"requestedAlias": "developer",
"recommendedAlias": "developer",
"effectiveAlias": "developer"
},
"requestedProfile": "full",
"profile": "full",
"recommendedProfile": "full",
"effectiveProfile": "full",
"requestedProfile": "developer",
"profile": "developer",
"recommendedProfile": "developer",
"effectiveProfile": "developer",
"tier": "enterprise",
"requestedComponents": [
"repo-baseline",
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
"workflow-automation"
],
"selectedComponents": [
"repo-baseline",
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
"workflow-automation"
],
"requestedAddComponents": [],
"requestedRemoveComponents": [],
@@ -39,45 +31,25 @@
"tierFilteredComponents": [],
"requestedRootPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"selectedRootPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"requestedPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"requestedAddPackages": [],
"requestedRemovePackages": [],
"selectedPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"packages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"blockedRemovalPackages": [],
"tierFilteredRootPackages": [],
@@ -87,51 +59,23 @@
"runtime-core": [],
"workflow-pack": [
"runtime-core"
],
"agentshield-pack": [
"workflow-pack"
],
"research-pack": [
"workflow-pack"
],
"team-config-sync": [
"runtime-core"
],
"enterprise-controls": [
"team-config-sync"
]
},
"resolutionOrder": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"requestedModules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"selectedModules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"modules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
"workflow-pack"
],
"managedFiles": [
".claude/skills/everything-claude-code/SKILL.md",
@@ -144,13 +88,8 @@
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
".claude/rules/everything-claude-code-guardrails.md",
".claude/research/everything-claude-code-research-playbook.md",
".claude/team/everything-claude-code-team-config.json",
".claude/enterprise/controls.md",
".claude/commands/feature-development.md",
".claude/commands/add-new-install-target.md",
".claude/commands/add-new-skill-or-agent.md"
".claude/commands/add-ecc-bundle-component.md"
],
"packageFiles": {
"runtime-core": [
@@ -165,22 +104,9 @@
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/feature-development.md",
".claude/commands/add-new-install-target.md",
".claude/commands/add-new-skill-or-agent.md"
".claude/commands/add-ecc-bundle-component.md"
]
},
"moduleFiles": {
@@ -196,22 +122,9 @@
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/feature-development.md",
".claude/commands/add-new-install-target.md",
".claude/commands/add-new-skill-or-agent.md"
".claude/commands/add-ecc-bundle-component.md"
]
},
"files": [
@@ -265,26 +178,6 @@
"path": ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
"description": "Continuous-learning instincts derived from repository patterns."
},
{
"moduleId": "agentshield-pack",
"path": ".claude/rules/everything-claude-code-guardrails.md",
"description": "Repository guardrails distilled from analysis for security and workflow review."
},
{
"moduleId": "research-pack",
"path": ".claude/research/everything-claude-code-research-playbook.md",
"description": "Research workflow playbook for source attribution and long-context tasks."
},
{
"moduleId": "team-config-sync",
"path": ".claude/team/everything-claude-code-team-config.json",
"description": "Team config scaffold that points collaborators at the shared ECC bundle."
},
{
"moduleId": "enterprise-controls",
"path": ".claude/enterprise/controls.md",
"description": "Enterprise governance scaffold for approvals, audit posture, and escalation."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/feature-development.md",
@@ -292,13 +185,8 @@
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/add-new-install-target.md",
"description": "Workflow command scaffold for add-new-install-target."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/add-new-skill-or-agent.md",
"description": "Workflow command scaffold for add-new-skill-or-agent."
"path": ".claude/commands/add-ecc-bundle-component.md",
"description": "Workflow command scaffold for add-ecc-bundle-component."
}
],
"workflows": [
@@ -307,12 +195,8 @@
"path": ".claude/commands/feature-development.md"
},
{
"command": "add-new-install-target",
"path": ".claude/commands/add-new-install-target.md"
},
{
"command": "add-new-skill-or-agent",
"path": ".claude/commands/add-new-skill-or-agent.md"
"command": "add-ecc-bundle-component",
"path": ".claude/commands/add-ecc-bundle-component.md"
}
],
"adapters": {
@@ -321,8 +205,7 @@
"identityPath": ".claude/identity.json",
"commandPaths": [
".claude/commands/feature-development.md",
".claude/commands/add-new-install-target.md",
".claude/commands/add-new-skill-or-agent.md"
".claude/commands/add-ecc-bundle-component.md"
]
},
"codex": {

View File

@@ -2,13 +2,13 @@
"version": "2.0",
"technicalLevel": "technical",
"preferredStyle": {
"verbosity": "minimal",
"verbosity": "moderate",
"codeComments": true,
"explanations": true
},
"domains": [
"javascript"
"typescript"
],
"suggestedBy": "ecc-tools-repo-analysis",
"createdAt": "2026-04-01T00:55:29.418Z"
"createdAt": "2026-04-03T12:33:26.494Z"
}

View File

@@ -18,4 +18,4 @@ Use this when the task is documentation-heavy, source-sensitive, or requires bro
- Primary language: JavaScript
- Framework: Not detected
- Workflows detected: 7
- Workflows detected: 10

View File

@@ -4,7 +4,7 @@ Generated by ECC Tools from repository history. Review before treating it as a h
## Commit Workflow
- Prefer `conventional` commit messaging with prefixes such as fix, feat, docs, chore.
- Prefer `conventional` commit messaging with prefixes such as fix, test, feat, docs.
- Keep new changes aligned with the existing pull-request and review flow already present in the repo.
## Architecture
@@ -24,9 +24,9 @@ Generated by ECC Tools from repository history. Review before treating it as a h
## Detected Workflows
- database-migration: Database schema changes with migration files
- feature-development: Standard feature implementation workflow
- add-new-install-target: Adds support for a new install target (e.g., CodeBuddy, Gemini) to the system, including scripts, manifests, schemas, and tests.
- add-new-skill-or-agent: Adds a new skill or agent to the system, including documentation and registration in manifests.
- add-language-rules: Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
## Review Reminder

View File

@@ -1,11 +1,11 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
description: Development conventions and patterns for everything-claude-code. TypeScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-01
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-03
## Overview
@@ -13,7 +13,7 @@ This skill teaches Claude the development patterns and conventions used in every
## Tech Stack
- **Primary Language**: JavaScript
- **Primary Language**: TypeScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
@@ -27,20 +27,17 @@ Activate this skill when:
## Commit Conventions
Follow these commit message conventions based on 500 analyzed commits.
Follow these commit message conventions based on 10 analyzed commits.
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `feat`
- `docs`
- `chore`
### Message Guidelines
- Average message length: ~57 characters
- Average message length: ~94 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
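The guidelines above can be sketched as a quick lint check. This is a hypothetical helper, not a script shipped in the repo; the prefix list and 100-character cap are assumptions drawn from the conventions described here.

```javascript
// Hypothetical conventional-commit check sketching the guidelines above.
const PREFIXES = ['feat', 'fix', 'docs', 'chore', 'refactor', 'test', 'perf', 'ci'];

function checkCommitMessage(message) {
  const firstLine = message.split('\n')[0];
  // <type>(<optional scope>): <subject>
  const match = firstLine.match(/^([a-z]+)(\([\w-]+\))?!?: (.+)$/);
  if (!match) return { ok: false, reason: 'not conventional-commit format' };
  const [, type, , subject] = match;
  if (!PREFIXES.includes(type)) return { ok: false, reason: `unknown type "${type}"` };
  // Keep the first line concise and descriptive.
  if (firstLine.length > 100) return { ok: false, reason: 'first line too long' };
  // Rough imperative-mood heuristic: reject past-tense openers.
  if (/^(added|fixed|updated|changed)\b/i.test(subject)) {
    return { ok: false, reason: 'use imperative mood ("Add", not "Added")' };
  }
  return { ok: true };
}
```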
@@ -48,49 +45,37 @@ Follow these commit message conventions based on 500 analyzed commits.
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/commands/add-new-agent-or-skill.md)
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
```
*Commit message example*
```text
docs: add Claude Code troubleshooting workarounds
feat: add everything-claude-code-conventions ECC bundle (.claude/identity.json)
```
*Commit message example*
```text
refactor: fold social graph ranking into lead intelligence
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/explorer.toml)
```
*Commit message example*
```text
chore: ignore local orchestration artifacts
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/reviewer.toml)
```
*Commit message example*
```text
fix(security): remove evalview-agent-testing skill — external dependency
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/docs-researcher.toml)
```
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/commands/add-new-install-target.md)
```
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/commands/feature-development.md)
```
*Commit message example*
```text
feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)
feat: add everything-claude-code-conventions ECC bundle (.claude/commands/feature-development.md)
```
## Architecture
@@ -99,21 +84,6 @@ feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
@@ -121,7 +91,7 @@ This project uses **hybrid** module organization.
## Code Style
### Language: JavaScript
### Language: TypeScript
### Naming Conventions
@@ -134,7 +104,7 @@ This project uses **hybrid** module organization.
### Import Style: Relative Imports
### Export Style: Mixed Style
### Export Style: Named Exports
*Preferred import style*
@@ -145,39 +115,13 @@ import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
*Preferred export style*
```typescript
try {
const result = await riskyOperation()
return result
} catch (error) {
console.error('Operation failed:', error)
throw new Error('User-friendly message')
}
// Use named exports
export function calculateTotal() { ... }
export const TAX_RATE = 0.1
export interface Order { ... }
```
## Common Workflows
@@ -188,7 +132,7 @@ These workflows were detected from analyzing commit patterns.
Standard feature implementation workflow
**Frequency**: ~20 times per month
**Frequency**: ~30 times per month
**Steps**:
1. Add feature implementation
@@ -196,185 +140,37 @@ Standard feature implementation workflow
3. Update documentation
**Files typically involved**:
- `skills/remotion-video-creation/rules/assets/*`
- `.opencode/*`
- `.opencode/plugins/*`
- `**/*.test.*`
- `.claude/commands/*`
**Example commit sequence**:
```
fix: CI fixes, security audit, remotion skill, lead-intelligence, npm audit (#1039)
chore(deps-dev): bump globals in the minor-and-patch group (#1062)
chore(deps): bump actions/github-script from 7.1.0 to 8.0.0 (#1059)
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
feat: add everything-claude-code-conventions ECC bundle (.claude/skills/everything-claude-code/SKILL.md)
feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/SKILL.md)
```
### Add New Install Target
### Add ECC Bundle Component
Adds support for a new install target (e.g., CodeBuddy, Gemini) to the system, including scripts, manifests, schemas, and tests.
Adds a new component to the everything-claude-code-conventions ECC bundle, such as a tool definition, skill file, identity, or command documentation.
**Frequency**: ~2 times per month
**Frequency**: ~5 times per month
**Steps**:
1. Create new install scripts (install.js, install.sh, uninstall.js, uninstall.sh) in a dedicated directory (e.g., .codebuddy/ or .gemini/).
2. Add or update a README or documentation file for the new target.
3. Update manifests/install-modules.json to register the new target.
4. Update schemas/ecc-install-config.schema.json and schemas/install-modules.schema.json as needed.
5. Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>-project.js.
6. Update or add tests in tests/lib/install-targets.test.js.
7. Update registry or related files if necessary.
1. Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
2. Commit the file with a message indicating addition to the ECC bundle
**Files typically involved**:
- `manifests/install-modules.json`
- `schemas/ecc-install-config.schema.json`
- `schemas/install-modules.schema.json`
- `scripts/lib/install-manifests.js`
- `scripts/lib/install-targets/*.js`
- `tests/lib/install-targets.test.js`
- `.*/README.md`
- `.*/install.js`
- `.*/install.sh`
- `.*/uninstall.js`
- `.*/uninstall.sh`
- `.claude/ecc-tools.json`
- `.claude/skills/everything-claude-code/SKILL.md`
- `.agents/skills/everything-claude-code/SKILL.md`
- `.claude/identity.json`
- `.claude/commands/feature-development.md`
- `.claude/commands/add-ecc-bundle-component.md`
**Example commit sequence**:
```
Create new install scripts (install.js, install.sh, uninstall.js, uninstall.sh) in a dedicated directory (e.g., .codebuddy/ or .gemini/).
Add or update a README or documentation file for the new target.
Update manifests/install-modules.json to register the new target.
Update schemas/ecc-install-config.schema.json and schemas/install-modules.schema.json as needed.
Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>-project.js.
Update or add tests in tests/lib/install-targets.test.js.
Update registry or related files if necessary.
```
### Add New Skill Or Agent
Adds a new skill or agent to the system, including documentation and registration in manifests.
**Frequency**: ~3 times per month
**Steps**:
1. Create SKILL.md in skills/<skill-name>/ or .claude/skills/<skill-name>/ or .agents/skills/<skill-name>/.
2. Create agent definition(s) in agents/<agent-name>.md if needed.
3. Update manifests/install-modules.json or other relevant manifest files.
4. Add or update documentation (README.md, AGENTS.md, etc.).
**Files typically involved**:
- `skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `agents/*.md`
- `manifests/install-modules.json`
- `README.md`
- `AGENTS.md`
**Example commit sequence**:
```
Create SKILL.md in skills/<skill-name>/ or .claude/skills/<skill-name>/ or .agents/skills/<skill-name>/.
Create agent definition(s) in agents/<agent-name>.md if needed.
Update manifests/install-modules.json or other relevant manifest files.
Add or update documentation (README.md, AGENTS.md, etc.).
```
### Add Or Update Command Workflow
Adds or extends user-facing commands (e.g., PRP, santa-loop, GAN harness), often with review/feedback cycles.
**Frequency**: ~2 times per month
**Steps**:
1. Create or update command markdown files in commands/ or .claude/commands/.
2. Add YAML frontmatter, usage, and output sections as per convention.
3. Address review feedback with follow-up fixes (e.g., parameter quoting, API usage, documentation).
4. If needed, update related files (e.g., hooks, tests, AGENTS.md).
**Files typically involved**:
- `commands/*.md`
- `.claude/commands/*.md`
**Example commit sequence**:
```
Create or update command markdown files in commands/ or .claude/commands/.
Add YAML frontmatter, usage, and output sections as per convention.
Address review feedback with follow-up fixes (e.g., parameter quoting, API usage, documentation).
If needed, update related files (e.g., hooks, tests, AGENTS.md).
```
### Add Or Update Opencode Agent
Adds new OpenCode agent prompt files and updates agent registration in opencode.json.
**Frequency**: ~2 times per month
**Steps**:
1. Add new prompt files in .opencode/prompts/agents/*.txt.
2. Update .opencode/opencode.json to register new agents.
3. Update AGENTS.md to reflect new agent count or details.
4. Address review feedback (e.g., remove agents, update documentation).
**Files typically involved**:
- `.opencode/prompts/agents/*.txt`
- `.opencode/opencode.json`
- `AGENTS.md`
**Example commit sequence**:
```
Add new prompt files in .opencode/prompts/agents/*.txt.
Update .opencode/opencode.json to register new agents.
Update AGENTS.md to reflect new agent count or details.
Address review feedback (e.g., remove agents, update documentation).
```
### Add Or Update Hook Or CI Script
Implements or fixes hooks and related scripts for formatting, typechecking, session management, or CI integration.
**Frequency**: ~2 times per month
**Steps**:
1. Edit or add scripts in scripts/hooks/ or hooks/hooks.json.
2. Update or add tests in tests/hooks/.
3. Address review feedback (e.g., race conditions, timeouts, path normalization, security).
4. If needed, update related files (e.g., .cursor/hooks/after-file-edit.js, .github/workflows/*).
**Files typically involved**:
- `scripts/hooks/*.js`
- `scripts/hooks/*.sh`
- `hooks/hooks.json`
- `tests/hooks/*.js`
- `.cursor/hooks/*.js`
- `.github/workflows/*.yml`
**Example commit sequence**:
```
Edit or add scripts in scripts/hooks/ or hooks/hooks.json.
Update or add tests in tests/hooks/.
Address review feedback (e.g., race conditions, timeouts, path normalization, security).
If needed, update related files (e.g., .cursor/hooks/after-file-edit.js, .github/workflows/*).
```
### Dependency Update Via Dependabot
Automated or manual update of dependencies (npm packages or GitHub Actions) across multiple workflow files.
**Frequency**: ~4 times per month
**Steps**:
1. Update dependency version in package.json, yarn.lock, or relevant workflow YAML.
2. Update .github/workflows/*.yml to use new action versions.
3. Commit with standardized message referencing the dependency and version bump.
**Files typically involved**:
- `package.json`
- `yarn.lock`
- `package-lock.json`
- `.github/workflows/*.yml`
**Example commit sequence**:
```
Update dependency version in package.json, yarn.lock, or relevant workflow YAML.
Update .github/workflows/*.yml to use new action versions.
Commit with standardized message referencing the dependency and version bump.
Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
Commit the file with a message indicating addition to the ECC bundle
```
@@ -385,14 +181,12 @@ Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
- Prefer named exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---

View File

@@ -7,9 +7,9 @@
".agents/skills/everything-claude-code/SKILL.md"
],
"commandFiles": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-new-install-target.md",
".claude/commands/add-new-skill-or-agent.md"
".claude/commands/add-language-rules.md"
],
"updatedAt": "2026-03-31T23:03:58.867Z"
"updatedAt": "2026-03-20T12:07:36.496Z"
}

View File

@@ -12,7 +12,7 @@ This directory contains the **Codex plugin manifest** for Everything Claude Code
## What This Provides
- **125 skills** from `./skills/` — reusable Codex workflows for TDD, security,
- **142 skills** from `./skills/` — reusable Codex workflows for TDD, security,
code review, architecture, and more
- **6 MCP servers** — GitHub, Context7, Exa, Memory, Playwright, Sequential Thinking
@@ -45,5 +45,7 @@ Run this from the repository root so `./` points to the repo root and `.mcp.json
- The `skills/` directory at the repo root is shared between Claude Code (`.claude-plugin/`)
and Codex (`.codex-plugin/`) — same source of truth, no duplication
- ECC is moving to a skills-first workflow surface. Legacy `commands/` remain for
compatibility on harnesses that still expect slash-entry shims.
- MCP server credentials are inherited from the launching environment (env vars)
- This manifest does **not** override `~/.codex/config.toml` settings

View File

@@ -37,7 +37,7 @@
{
"command": "node .cursor/hooks/after-file-edit.js",
"event": "afterFileEdit",
"description": "Auto-format, TypeScript check, console.log warning"
"description": "Auto-format, TypeScript check, console.log warning, and frontend design-quality reminder"
}
],
"beforeMCPExecution": [

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env node
const { readStdin, runExistingHook, transformToClaude } = require('./adapter');
const { hookEnabled, readStdin, runExistingHook, transformToClaude } = require('./adapter');
readStdin().then(raw => {
try {
const input = JSON.parse(raw);
@@ -11,6 +11,9 @@ readStdin().then(raw => {
// Accumulate edited paths for batch format+typecheck at stop time
runExistingHook('post-edit-accumulator.js', claudeStr);
runExistingHook('post-edit-console-warn.js', claudeStr);
if (hookEnabled('post:edit:design-quality-check', ['standard', 'strict'])) {
runExistingHook('design-quality-check.js', claudeStr);
}
} catch {}
process.stdout.write(raw);
}).catch(() => process.exit(0));
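The new `hookEnabled` gate imported from the adapter lets the design-quality check run only in selected hook profiles. A minimal sketch of how such a gate might work (the `ECC_HOOK_PROFILE` and `ECC_DISABLED_HOOKS` variable names are illustrative assumptions, not the adapter's actual configuration surface):

```javascript
// Hypothetical sketch of a profile-based hook gate like hookEnabled().
// Assumes ECC_HOOK_PROFILE ('minimal' | 'standard' | 'strict') and an
// ECC_DISABLED_HOOKS comma-separated override list; both names are
// illustrative only.
function hookEnabled(hookId, allowedProfiles) {
  const profile = process.env.ECC_HOOK_PROFILE || 'standard';
  const disabled = (process.env.ECC_DISABLED_HOOKS || '')
    .split(',')
    .map(s => s.trim())
    .filter(Boolean);
  if (disabled.includes(hookId)) return false; // explicit per-hook opt-out wins
  return allowedProfiles.includes(profile);
}
```

With this shape, `hookEnabled('post:edit:design-quality-check', ['standard', 'strict'])` runs the check under the standard and strict profiles but skips it in a minimal one.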

View File

@@ -28,8 +28,10 @@ jobs:
fetch-depth: 0
- name: Validate version tag
env:
INPUT_TAG: ${{ inputs.tag }}
run: |
if ! [[ "${{ inputs.tag }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
if ! [[ "$INPUT_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Invalid version tag format. Expected vX.Y.Z"
exit 1
fi
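The fix here routes `inputs.tag` through the `INPUT_TAG` environment variable instead of interpolating `${{ inputs.tag }}` directly into the shell script, so a crafted tag value cannot inject shell commands. The tag format check itself is equivalent to this small sketch:

```javascript
// vX.Y.Z tag validation, equivalent to the workflow's bash regex
// ^v[0-9]+\.[0-9]+\.[0-9]+$ (no prerelease or build suffixes allowed).
const TAG_RE = /^v\d+\.\d+\.\d+$/;

function isValidVersionTag(tag) {
  return TAG_RE.test(tag);
}
```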

View File

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 36 specialized agents, 142 skills, 68 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 38 specialized agents, 156 skills, 72 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -116,6 +116,12 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
- If there is no obvious project doc location, ask before creating a new top-level file
5. **Commit** — Conventional commits format, comprehensive PR summaries
## Workflow Surface Policy
- `skills/` is the canonical workflow surface.
- New workflow contributions should land in `skills/` first.
- `commands/` is a legacy slash-entry compatibility surface and should only be added or updated when a shim is still required for migration or cross-harness parity.
## Git Workflow
**Commit format:** `<type>: <description>` — Types: feat, fix, refactor, docs, test, chore, perf, ci
@@ -139,9 +145,9 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
## Project Structure
```
agents/ — 36 specialized subagents
skills/ — 142 workflow skills and domain knowledge
commands/ — 68 slash commands
agents/ — 38 specialized subagents
skills/ — 156 workflow skills and domain knowledge
commands/ — 72 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
scripts/ — Cross-platform Node.js utilities
@@ -149,6 +155,8 @@ mcp-configs/ — 14 MCP server configurations
tests/ — Test suite
```
`commands/` remains in the repo for compatibility, but the long-term direction is skills-first.
## Success Metrics
- All tests pass with 80%+ coverage

View File

@@ -34,7 +34,7 @@
**The performance optimization system for AI agent harnesses. From an Anthropic hackathon winner.**
Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, hooks, commands, rules, and MCP configurations evolved over 10+ months of intensive daily use building real products.
Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, skills, hooks, rules, MCP configurations, and legacy command shims evolved over 10+ months of intensive daily use building real products.
Works across **Claude Code**, **Codex**, **Cowork**, and other AI agent harnesses.
@@ -212,17 +212,20 @@ For manual install instructions see the README in the `rules/` folder. When copy
### Step 3: Start Using
```bash
# Try a command (plugin install uses namespaced form)
# Skills are the primary workflow surface.
# Existing slash-style command names still work while ECC migrates off commands/.
# Plugin install uses the namespaced form
/everything-claude-code:plan "Add user authentication"
# Manual install (Option 2) uses the shorter form:
# Manual install keeps the shorter slash form:
# /plan "Add user authentication"
# Check available commands
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 36 agents, 142 skills, and 68 commands.
**That's it!** You now have access to 38 agents, 156 skills, and 72 legacy command shims.
### Multi-model commands require additional setup
@@ -392,7 +395,7 @@ everything-claude-code/
| |-- autonomous-loops/ # Autonomous loop patterns: sequential pipelines, PR loops, DAG orchestration (NEW)
| |-- plankton-code-quality/ # Write-time code quality enforcement with Plankton hooks (NEW)
|
|-- commands/ # Slash commands for quick execution
|-- commands/ # Legacy slash-entry shims; prefer skills/
| |-- tdd.md # /tdd - Test-driven development
| |-- plan.md # /plan - Implementation planning
| |-- e2e.md # /e2e - E2E test generation
@@ -671,10 +674,7 @@ cp -r everything-claude-code/rules/python ~/.claude/rules/
cp -r everything-claude-code/rules/golang ~/.claude/rules/
cp -r everything-claude-code/rules/php ~/.claude/rules/
# Copy commands
cp everything-claude-code/commands/*.md ~/.claude/commands/
# Copy skills (core vs niche)
# Copy skills first (primary workflow surface)
# Recommended (new users): core/general skills only
cp -r everything-claude-code/.agents/skills/* ~/.claude/skills/
cp -r everything-claude-code/skills/search-first ~/.claude/skills/
@@ -683,6 +683,10 @@ cp -r everything-claude-code/skills/search-first ~/.claude/skills/
# for s in django-patterns django-tdd laravel-patterns springboot-patterns; do
# cp -r everything-claude-code/skills/$s ~/.claude/skills/
# done
# Optional: keep legacy slash-command compatibility during migration
mkdir -p ~/.claude/commands
cp everything-claude-code/commands/*.md ~/.claude/commands/
```
#### Add hooks to settings.json
@@ -691,7 +695,7 @@ Copy the hooks from `hooks/hooks.json` to your `~/.claude/settings.json`.
#### Configure MCPs
Copy desired MCP servers from `mcp-configs/mcp-servers.json` to your `~/.claude.json`.
Copy desired MCP server definitions from `mcp-configs/mcp-servers.json` into your official Claude Code config in `~/.claude/settings.json`, or into a project-scoped `.mcp.json` if you want repo-local MCP access.
**Important:** Replace `YOUR_*_HERE` placeholders with your actual API keys.
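A copied server entry ends up looking roughly like the following. The server name, package, and env key shown are illustrative; the exact pinned definitions live in `mcp-configs/mcp-servers.json`.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_GITHUB_TOKEN_HERE"
      }
    }
  }
}
```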
@@ -716,7 +720,7 @@ You are a senior code reviewer...
### Skills
Skills are workflow definitions invoked by commands or agents:
Skills are the primary workflow surface. They can be invoked directly, suggested automatically, and reused by agents. ECC still ships `commands/` during migration, but new workflow development should land in `skills/` first.
```markdown
# TDD Workflow
@@ -762,7 +766,7 @@ See [`rules/README.md`](rules/README.md) for installation and structure details.
## Which Agent Should I Use?
Not sure where to start? Use this quick reference:
Not sure where to start? Use this quick reference. Skills are the canonical workflow surface; slash entries below are the compatibility form most users already know.
| I want to... | Use this command | Agent used |
|--------------|-----------------|------------|
@@ -782,6 +786,8 @@ Not sure where to start? Use this quick reference:
### Common Workflows
Slash forms below are shown because they are still the fastest familiar entrypoint. Under the hood, ECC is shifting these workflows toward skills-first definitions.
**Starting a new feature:**
```
/everything-claude-code:plan "Add user authentication with OAuth"
@@ -937,7 +943,7 @@ Please contribute! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Ideas for Contributions
- Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included
- Framework-specific configs (Rails, FastAPI, NestJS) — Django, Spring Boot, Laravel already included
- Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (different frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile)
@@ -1112,9 +1118,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 142 skills | PASS: 37 skills | **Claude Code leads** |
| Agents | PASS: 38 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 72 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 156 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1134,7 +1140,7 @@ OpenCode's plugin system is MORE sophisticated than Claude Code with 20+ event t
**Additional OpenCode events**: `file.edited`, `file.watcher.updated`, `message.updated`, `lsp.client.diagnostics`, `tui.toast.show`, and more.
### Available Commands (31+)
### Available Slash Entry Shims (31+)
| Command | Description |
|---------|-------------|
@@ -1221,9 +1227,9 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
| **Agents** | 21 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 52 | Shared | Instruction-based | 31 |
| **Skills** | 102 | Shared | 10 (native format) | 37 |
| **Agents** | 38 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 72 | Shared | Instruction-based | 31 |
| **Skills** | 156 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

View File

@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**Done!** You can now use 13 agents, 43 skills, and 31 commands.
**Done!** You can now use 38 agents, 156 skills, and 72 commands.
### multi-* 命令需要额外配置

WORKING-CONTEXT.md Normal file
View File

@@ -0,0 +1,131 @@
# Working Context
Last updated: 2026-04-02
## Purpose
Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfaces, and ECC 2.0 platform buildout.
## Current Truth
- Default branch: `main`
- Immediate blocker addressed: CI lockfile drift and hook validation breakage fixed in `a273c62`
- Local full suite status after fix: `1723/1723` passing
- Main active operational work:
- keep default branch green
- continue issue-driven fixes from `main` now that the public PR backlog is at zero
- continue ECC 2.0 control-plane and operator-surface buildout
## Current Constraints
- No merge by title or commit summary alone.
- No arbitrary external runtime installs in shipped ECC surfaces.
- Overlapping skills, hooks, or agents should be consolidated when overlap is material and runtime separation is not required.
## Active Queues
- PR backlog: currently cleared on the public queue; new work should land through direct mainline fixes or fresh narrowly scoped PRs
- Product:
- selective install cleanup
- control plane primitives
- operator surface
- self-improving skills
- Skill quality:
- rewrite content-facing skills to use source-backed voice modeling
- remove generic LLM rhetoric, canned CTA patterns, and forced platform stereotypes
- continue one-by-one audit of overlapping or low-signal skill content
- move repo guidance and contribution flow to skills-first, leaving commands only as explicit compatibility shims
- add operator skills that wrap connected surfaces instead of exposing only raw APIs or disconnected primitives
- land the canonical voice system, network-optimization lane, and reusable Manim explainer lane
- Security:
- keep dependency posture clean
- preserve self-contained hook and MCP behavior
## Open PR Classification
- Closed on 2026-04-01 under backlog hygiene / merge policy:
- `#1069` `feat: add everything-claude-code ECC bundle`
- `#1068` `feat: add everything-claude-code-conventions ECC bundle`
- `#1080` `feat: add everything-claude-code ECC bundle`
- `#1079` `feat: add everything-claude-code-conventions ECC bundle`
- `#1064` `chore(deps-dev): bump @eslint/js from 9.39.2 to 10.0.1`
- `#1063` `chore(deps-dev): bump eslint from 9.39.2 to 10.1.0`
- Closed on 2026-04-01 because the content is sourced from external ecosystems and should only land via manual ECC-native re-port:
- `#852` openclaw-user-profiler
- `#851` openclaw-soul-forge
- `#640` harper skills
- Native-support candidates to fully diff-audit next:
- `#1055` Dart / Flutter support
- `#1043` C# reviewer and .NET skills
- Direct-port candidates landed after audit:
- `#1078` hook-id dedupe for managed Claude hook reinstalls
- `#844` ui-demo skill
- `#1110` install-time Claude hook root resolution
- `#1106` portable Codex Context7 key extraction
- `#1107` Codex baseline merge and sample agent-role sync
- `#1119` stale CI/lint cleanup that still contained safe low-risk fixes
- Port or rebuild inside ECC after full audit:
- `#894` Jira integration
- `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces
## Interfaces
- Public truth: GitHub issues and PRs
- Internal execution truth: linked Linear work items under the ECC program
- Current linked Linear items:
- `ECC-206` ecosystem CI baseline
- `ECC-207` PR backlog audit and merge-policy enforcement
- `ECC-208` context hygiene
- `ECC-210` skills-first workflow migration and command compatibility retirement
## Update Rule
Keep this file detailed only for the current sprint, blockers, and next actions. Summarize completed work into archive or repo docs once it is no longer actively shaping execution.
## Latest Execution Notes
- 2026-04-02: `ECC-Tools/main` shipped `9566637` (`fix: prefer commit lookup over git ref resolution`). The PR-analysis fire is now fixed in the app repo by preferring explicit commit resolution before `git.getRef`, with regression coverage for pull refs and plain branch refs. Mirrored public tracking issue `#1184` in this repo was closed as resolved upstream.
- 2026-04-02: Direct-ported the clean native-support core of `#1043` into `main`: `agents/csharp-reviewer.md`, `skills/dotnet-patterns/SKILL.md`, and `skills/csharp-testing/SKILL.md`. This fills the gap between existing C# rule/docs mentions and actual shipped C# review/testing guidance.
- 2026-04-02: Direct-ported the clean native-support core of `#1055` into `main`: `agents/dart-build-resolver.md`, `commands/flutter-build.md`, `commands/flutter-review.md`, `commands/flutter-test.md`, `rules/dart/*`, and `skills/dart-flutter-patterns/SKILL.md`. The skill paths were wired into the current `framework-language` module instead of replaying the older PR's separate `flutter-dart` module layout.
- 2026-04-02: Closed `#1081` after diff audit. The PR only added vendor-marketing docs for an external X/Twitter backend (`Xquik` / `x-twitter-scraper`) to the canonical `x-api` skill instead of contributing an ECC-native capability.
- 2026-04-02: Direct-ported the useful Jira lane from `#894`, but sanitized it to match current supply-chain policy. `commands/jira.md`, `skills/jira-integration/SKILL.md`, and the pinned `jira` MCP template in `mcp-configs/mcp-servers.json` are in-tree, while the skill no longer tells users to install `uv` via `curl | bash`. `jira-integration` is classified under `operator-workflows` for selective installs.
- 2026-04-02: Closed `#1125` after full diff audit. The bundle/skill-router lane hardcoded many non-existent or non-canonical surfaces and created a second routing abstraction instead of a small ECC-native index layer.
- 2026-04-02: Closed `#1124` after full diff audit. The added agent roster was thoughtfully written, but it duplicated the existing ECC agent surface with a second competing catalog (`dispatch`, `explore`, `verifier`, `executor`, etc.) instead of strengthening canonical agents already in-tree.
- 2026-04-02: Closed the full Argus cluster `#1098`, `#1099`, `#1100`, `#1101`, and `#1102` after full diff audit. The common failure mode was the same across all five PRs: external multi-CLI dispatch was treated as a first-class runtime dependency of shipped ECC surfaces. Any useful protocol ideas should be re-ported later into ECC-native orchestration, review, or reflection lanes without external CLI fan-out assumptions.
- 2026-04-02: The previously open native-support / integration queue (`#1081`, `#1055`, `#1043`, `#894`) has now been fully resolved by direct-port or closure policy. The active public PR queue is currently zero; next focus stays on issue-driven mainline fixes and CI health, not backlog PR intake.
- 2026-04-01: `main` CI was restored locally with `1723/1723` tests passing after lockfile and hook validation fixes.
- 2026-04-01: Auto-generated ECC bundle PRs `#1068` and `#1069` were closed instead of merged; useful ideas must be ported manually after explicit diff audit.
- 2026-04-01: Major-version ESLint bump PRs `#1063` and `#1064` were closed; revisit only inside a planned ESLint 10 migration lane.
- 2026-04-01: Notification PRs `#808` and `#814` were identified as overlapping and should be rebuilt as one unified feature instead of landing as parallel branches.
- 2026-04-01: External-source skill PRs `#640`, `#851`, and `#852` were closed under the new ingestion policy; copy ideas from audited source later rather than merging branded/source-import PRs directly.
- 2026-04-01: The remaining low GitHub advisory on `ecc2/Cargo.lock` was addressed by moving `ratatui` to `0.30` with `crossterm_0_28`, which updated transitive `lru` from `0.12.5` to `0.16.3`. `cargo build --manifest-path ecc2/Cargo.toml` still passes.
- 2026-04-01: Safe core of `#834` was ported directly into `main` instead of merging the PR wholesale. This included stricter install-plan validation, antigravity target filtering that skips unsupported module trees, tracked catalog sync for English plus zh-CN docs, and a dedicated `catalog:sync` write mode.
- 2026-04-01: Repo catalog truth is now synced at `36` agents, `68` commands, and `142` skills across the tracked English and zh-CN docs.
- 2026-04-01: Legacy emoji and non-essential symbol usage in docs, scripts, and tests was normalized to keep the unicode-safety lane green without weakening the check itself.
- 2026-04-01: The remaining self-contained piece of `#834`, `docs/zh-CN/skills/browser-qa/SKILL.md`, was ported directly into the repo. After commit, `#834` should be closed as superseded-by-direct-port.
- 2026-04-01: Content skill cleanup started with `content-engine`, `crosspost`, `article-writing`, and `investor-outreach`. The new direction is source-first voice capture, explicit anti-trope bans, and no forced platform persona shifts.
- 2026-04-01: `node scripts/ci/check-unicode-safety.js --write` sanitized the remaining emoji-bearing Markdown files, including several `remotion-video-creation` rule docs and an old local plan note.
- 2026-04-01: Core English repo surfaces were shifted to a skills-first posture. README, AGENTS, plugin metadata, and contributor instructions now treat `skills/` as canonical and `commands/` as legacy slash-entry compatibility during migration.
- 2026-04-01: Follow-up bundle cleanup closed `#1080` and `#1079`, which were generated `.claude/` bundle PRs duplicating command-first scaffolding instead of shipping canonical ECC source changes.
- 2026-04-01: Ported the useful core of `#1078` directly into `main`, but tightened the implementation so legacy no-id hook installs deduplicate cleanly on the first reinstall instead of the second. Added stable hook ids to `hooks/hooks.json`, semantic fallback aliases in `mergeHookEntries()`, and a regression test covering upgrade from pre-id settings.
- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now holds the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification.
- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale.
- 2026-04-01: Added the first connected-workflow operator lane as ECC-native skills instead of leaving the surface as raw plugins or APIs: `workspace-surface-audit`, `customer-billing-ops`, `project-flow-ops`, and `google-workspace-ops`. These are tracked under the new `operator-workflows` install module.
- 2026-04-01: Direct-ported the real fix from the unresolved hook-path PR lane into the active installer. Claude installs now replace `${CLAUDE_PLUGIN_ROOT}` with the concrete install root in both `settings.json` and the copied `hooks/hooks.json`, which keeps PreToolUse/PostToolUse hooks working outside plugin-managed env injection.
- 2026-04-01: Replaced the GNU-only `grep -P` parser in `scripts/sync-ecc-to-codex.sh` with a portable Node parser for Context7 key extraction. Added source-level regression coverage so BSD/macOS syncs do not drift back to non-portable parsing.
- 2026-04-01: Targeted regression suite after the direct ports is green: `tests/scripts/install-apply.test.js`, `tests/scripts/sync-ecc-to-codex.test.js`, and `tests/scripts/codex-hooks.test.js`.
- 2026-04-01: Ported the useful core of `#1107` directly into `main` as an add-only Codex baseline merge. `scripts/sync-ecc-to-codex.sh` now fills missing non-MCP defaults from `.codex/config.toml`, syncs sample agent role files into `~/.codex/agents`, and preserves user config instead of replacing it. Added regression coverage for sparse configs and implicit parent tables.
- 2026-04-01: Ported the safe low-risk cleanup from `#1119` directly into `main` instead of keeping an obsolete CI PR open. This included `.mjs` eslint handling, stricter null checks, Windows home-dir coverage in bash-log tests, and longer Trae shell-test timeouts.
- 2026-04-01: Added `brand-voice` as the canonical source-derived writing-style system and wired the content lane to treat it as the shared voice source of truth instead of duplicating partial style heuristics across skills.
- 2026-04-01: Added `connections-optimizer` as the review-first social-graph reorganization workflow for X and LinkedIn, with explicit pruning modes, browser fallback expectations, and Apple Mail drafting guidance.
- 2026-04-01: Added `manim-video` as the reusable technical explainer lane and seeded it with a starter network-graph scene so launch and systems animations do not depend on one-off scratch scripts.
- 2026-04-02: Re-extracted `social-graph-ranker` as a standalone primitive because the weighted bridge-decay model is reusable outside the full lead workflow. `lead-intelligence` now points to it for canonical graph ranking instead of carrying the full algorithm explanation inline, while `connections-optimizer` stays the broader operator layer for pruning, adds, and outbound review packs.
- 2026-04-02: Applied the same consolidation rule to the writing lane. `brand-voice` remains the canonical voice system, while `content-engine`, `crosspost`, `article-writing`, and `investor-outreach` now keep only workflow-specific guidance instead of duplicating a second Affaan/ECC voice model or repeating the full ban list in multiple places.
- 2026-04-02: Closed fresh auto-generated bundle PRs `#1182` and `#1183` under the existing policy. Useful ideas from generator output must be ported manually into canonical repo surfaces instead of merging `.claude`/bundle PRs wholesale.
- 2026-04-02: Ported the safe one-file macOS observer fix from `#1164` directly into `main` as a POSIX `mkdir` fallback for `continuous-learning-v2` lazy-start locking, then closed the PR as superseded by direct port.
- 2026-04-02: Ported the safe core of `#1153` directly into `main`: markdownlint cleanup for orchestration/docs surfaces plus the Windows `USERPROFILE` and path-normalization fixes in `install-apply` / `repair` tests. Local validation after installing repo deps: `node tests/scripts/install-apply.test.js`, `node tests/scripts/repair.test.js`, and targeted `yarn markdownlint` all passed.
- 2026-04-02: Direct-ported the safe web/frontend rules lane from `#1122` into `rules/web/`, but adapted `rules/web/hooks.md` to prefer project-local tooling and avoid remote one-off package execution examples.
- 2026-04-02: Adapted the design-quality reminder from `#1127` into the current ECC hook architecture with a local `scripts/hooks/design-quality-check.js`, Claude `hooks/hooks.json` wiring, Cursor `after-file-edit.js` wiring, and dedicated hook coverage in `tests/hooks/design-quality-check.test.js`.
- 2026-04-02: Fixed `#1141` on `main` in `16e9b17`. The observer lifecycle is now session-aware instead of purely detached: `SessionStart` writes a project-scoped lease, `SessionEnd` removes that lease and stops the observer when the final lease disappears, `observe.sh` records project activity, and `observer-loop.sh` now exits on idle when no leases remain. Targeted validation passed with `bash -n`, `node tests/hooks/observer-memory.test.js`, `node tests/integration/hooks.test.js`, `node scripts/ci/validate-hooks.js hooks/hooks.json`, and `node scripts/ci/check-unicode-safety.js`.
- 2026-04-02: Fixed the remaining Windows-only hook regression behind `#1070` by making `scripts/lib/utils.js#getHomeDir()` honor explicit `HOME` / `USERPROFILE` overrides before falling back to `os.homedir()`. This restores test-isolated observer state paths for hook integration runs on Windows. Added regression coverage in `tests/lib/utils.test.js`. Targeted validation passed with `node tests/lib/utils.test.js`, `node tests/integration/hooks.test.js`, `node tests/hooks/observer-memory.test.js`, and `node scripts/ci/check-unicode-safety.js`.
- 2026-04-02: Direct-ported NestJS support for `#1022` into `main` as `skills/nestjs-patterns/SKILL.md` and wired it into the `framework-language` install module. Synced the repo catalog afterward (`38` agents, `72` commands, `156` skills) and updated the docs so NestJS is no longer listed as an unfilled framework gap.

agents/csharp-reviewer.md

@@ -0,0 +1,101 @@
---
name: csharp-reviewer
description: Expert C# code reviewer specializing in .NET conventions, async patterns, security, nullable reference types, and performance. Use for all C# code changes. MUST BE USED for C# projects.
tools: ["Read", "Grep", "Glob", "Bash"]
model: sonnet
---
You are a senior C# code reviewer ensuring high standards of idiomatic .NET code and best practices.
When invoked:
1. Run `git diff -- '*.cs'` to see recent C# file changes
2. Run `dotnet build` and `dotnet format --verify-no-changes` if available
3. Focus on modified `.cs` files
4. Begin review immediately
## Review Priorities
### CRITICAL — Security
- **SQL Injection**: String concatenation/interpolation in queries — use parameterized queries or EF Core
- **Command Injection**: Unvalidated input in `Process.Start` — validate and sanitize
- **Path Traversal**: User-controlled file paths — use `Path.GetFullPath` + prefix check
- **Insecure Deserialization**: `BinaryFormatter`, `JsonSerializer` with `TypeNameHandling.All`
- **Hardcoded secrets**: API keys, connection strings in source — use configuration/secret manager
- **CSRF/XSS**: Missing `[ValidateAntiForgeryToken]`, unencoded output in Razor
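The SQL-injection item above is the most common of these in practice. A minimal illustrative fragment (the `Users` table, `name` variable, and `conn` connection are hypothetical, and this is a sketch rather than a compilable program):

```csharp
// BAD — user input interpolated straight into the SQL text
var risky = new SqlCommand(
    $"SELECT * FROM Users WHERE Name = '{name}'", conn);

// GOOD — parameterized query; the provider handles escaping
var safe = new SqlCommand(
    "SELECT * FROM Users WHERE Name = @name", conn);
safe.Parameters.AddWithValue("@name", name);
```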
### CRITICAL — Error Handling
- **Empty catch blocks**: `catch { }` or `catch (Exception) { }` — handle or rethrow
- **Swallowed exceptions**: `catch { return null; }` — log context, throw specific
- **Missing `using`/`await using`**: Manual disposal of `IDisposable`/`IAsyncDisposable`
- **Blocking async**: `.Result`, `.Wait()`, `.GetAwaiter().GetResult()` — use `await`
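The blocking-async item deserves a concrete before/after, since `.Result` deadlocks are easy to miss in review. A sketch with a hypothetical `GetUserAsync`:

```csharp
// BAD — blocks the calling thread and can deadlock under a sync context
var user = GetUserAsync(id).Result;

// GOOD — stay asynchronous end to end
var user = await GetUserAsync(id);
```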
### HIGH — Async Patterns
- **Missing CancellationToken**: Public async APIs without cancellation support
- **Fire-and-forget**: `async void` except event handlers — return `Task`
- **ConfigureAwait misuse**: Library code missing `ConfigureAwait(false)`
- **Sync-over-async**: Blocking calls in async context causing deadlocks
### HIGH — Type Safety
- **Nullable reference types**: Nullable warnings ignored or suppressed with `!`
- **Unsafe casts**: `(T)obj` without type check — use `obj is T t` or `obj as T`
- **Raw strings as identifiers**: Magic strings for config keys, routes — use constants or `nameof`
- **`dynamic` usage**: Avoid `dynamic` in application code — use generics or explicit models
### HIGH — Code Quality
- **Large methods**: Over 50 lines — extract helper methods
- **Deep nesting**: More than 4 levels — use early returns, guard clauses
- **God classes**: Classes with too many responsibilities — apply SRP
- **Mutable shared state**: Static mutable fields — use `ConcurrentDictionary`, `Interlocked`, or DI scoping
### MEDIUM — Performance
- **String concatenation in loops**: Use `StringBuilder` or `string.Join`
- **LINQ in hot paths**: Excessive allocations — consider `for` loops with pre-allocated buffers
- **N+1 queries**: EF Core lazy loading in loops — use `Include`/`ThenInclude`
- **Missing `AsNoTracking`**: Read-only queries tracking entities unnecessarily
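The last two items above often combine in one query. An illustrative EF Core fragment (entity, context, and token names are hypothetical):

```csharp
// Read-only query: eager-load children to avoid N+1, skip change tracking
var orders = await db.Orders
    .AsNoTracking()
    .Include(o => o.Items)
    .ToListAsync(ct);
```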
### MEDIUM — Best Practices
- **Naming conventions**: PascalCase for public members, `_camelCase` for private fields
- **Record vs class**: Value-like immutable models should be `record` or `record struct`
- **Dependency injection**: `new`-ing services instead of injecting — use constructor injection
- **`IEnumerable` multiple enumeration**: Materialize with `.ToList()` when enumerated more than once
- **Missing `sealed`**: Non-inherited classes should be `sealed` for clarity and performance
## Diagnostic Commands
```bash
dotnet build # Compilation check
dotnet format --verify-no-changes # Format check
dotnet test --no-build # Run tests
dotnet test --collect:"XPlat Code Coverage" # Coverage
```
## Review Output Format
```text
[SEVERITY] Issue title
File: path/to/File.cs:42
Issue: Description
Fix: What to change
```
## Approval Criteria
- **Approve**: No CRITICAL or HIGH issues
- **Warning**: MEDIUM issues only (can merge with caution)
- **Block**: CRITICAL or HIGH issues found
## Framework Checks
- **ASP.NET Core**: Model validation, auth policies, middleware order, `IOptions<T>` pattern
- **EF Core**: Migration safety, `Include` for eager loading, `AsNoTracking` for reads
- **Minimal APIs**: Route grouping, endpoint filters, proper `TypedResults`
- **Blazor**: Component lifecycle, `StateHasChanged` usage, JS interop disposal
## Reference
For detailed C# patterns, see skill: `dotnet-patterns`.
For testing guidelines, see skill: `csharp-testing`.
---
Review with the mindset: "Would this code pass review at a top .NET shop or open-source project?"


@@ -0,0 +1,201 @@
---
name: dart-build-resolver
description: Dart/Flutter build, analysis, and dependency error resolution specialist. Fixes `dart analyze` errors, Flutter compilation failures, pub dependency conflicts, and build_runner issues with minimal, surgical changes. Use when Dart/Flutter builds fail.
tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
model: sonnet
---
# Dart/Flutter Build Error Resolver
You are an expert Dart/Flutter build error resolution specialist. Your mission is to fix Dart analyzer errors, Flutter compilation issues, pub dependency conflicts, and build_runner failures with **minimal, surgical changes**.
## Core Responsibilities
1. Diagnose `dart analyze` and `flutter analyze` errors
2. Fix Dart type errors, null safety violations, and missing imports
3. Resolve `pubspec.yaml` dependency conflicts and version constraints
4. Fix `build_runner` code generation failures
5. Handle Flutter-specific build errors (Android Gradle, iOS CocoaPods, web)
## Diagnostic Commands
Run these in order:
```bash
# Check Dart/Flutter analysis errors
flutter analyze 2>&1
# or for pure Dart projects
dart analyze 2>&1
# Check pub dependency resolution
flutter pub get 2>&1
# Check if code generation is stale
dart run build_runner build --delete-conflicting-outputs 2>&1
# Flutter build for target platform
flutter build apk 2>&1 # Android
flutter build ipa --no-codesign 2>&1 # iOS (CI without signing)
flutter build web 2>&1 # Web
```
## Resolution Workflow
```text
1. flutter analyze -> Parse error messages
2. Read affected file -> Understand context
3. Apply minimal fix -> Only what's needed
4. flutter analyze -> Verify fix
5. flutter test -> Ensure nothing broke
```
## Common Fix Patterns
| Error | Cause | Fix |
|-------|-------|-----|
| `The name 'X' isn't defined` | Missing import or typo | Add correct `import` or fix name |
| `A value of type 'X?' can't be assigned to type 'X'` | Null safety — nullable not handled | Add `!`, `?? default`, or null check |
| `The argument type 'X' can't be assigned to 'Y'` | Type mismatch | Fix type, add explicit cast, or correct API call |
| `Non-nullable instance field 'x' must be initialized` | Missing initializer | Add initializer, mark `late`, or make nullable |
| `The method 'X' isn't defined for type 'Y'` | Wrong type or wrong import | Check type and imports |
| `'await' applied to non-Future` | Awaiting a non-async value | Remove `await` or make function async |
| `Missing concrete implementation of 'X'` | Abstract interface not fully implemented | Add missing method implementations |
| `The class 'X' doesn't implement 'Y'` | Missing `implements` or missing method | Add method or fix class signature |
| `Because X depends on Y >=A and Z depends on Y <B, version solving failed` | Pub version conflict | Adjust version constraints or add `dependency_overrides` |
| `Could not find a file named "pubspec.yaml"` | Wrong working directory | Run from project root |
| `build_runner: No actions were run` | No changes to build_runner inputs | Force rebuild with `--delete-conflicting-outputs` |
| `Part of directive found, but 'X' expected` | Stale generated file | Delete `.g.dart` file and re-run build_runner |
## Pub Dependency Troubleshooting
```bash
# Show full dependency tree
flutter pub deps
# Check why a specific package version was chosen
flutter pub deps --style=compact | grep <package>
# Upgrade packages to latest compatible versions
flutter pub upgrade
# Upgrade specific package
flutter pub upgrade <package_name>
# Clear pub cache if metadata is corrupted
flutter pub cache repair
# Verify pubspec.lock is consistent
flutter pub get --enforce-lockfile
```
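When the version-solving conflict from the fix-pattern table cannot be resolved by adjusting constraints, `pubspec.yaml` supports a temporary override. A last-resort sketch — package name and version are placeholders, and overrides hide real incompatibilities, so prefer fixing the constraints:

```yaml
# Temporary unblock only; remove once upstream constraints are fixed
dependency_overrides:
  http: ^1.2.0
```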
## Null Safety Fix Patterns
```dart
// Error: A value of type 'String?' can't be assigned to type 'String'
// BAD — force unwrap
final name = user.name!;
// GOOD — provide fallback
final name = user.name ?? 'Unknown';
// GOOD — guard and return early
if (user.name == null) return;
final name = user.name!; // safe after null check
// GOOD — Dart 3 pattern matching
final name = switch (user.name) {
final n? => n,
null => 'Unknown',
};
```
## Type Error Fix Patterns
```dart
// Error: The argument type 'List<dynamic>' can't be assigned to 'List<String>'
// BAD
final ids = jsonList; // inferred as List<dynamic>
// GOOD
final ids = List<String>.from(jsonList);
// or
final ids = (jsonList as List).cast<String>();
```
## build_runner Troubleshooting
```bash
# Clean and regenerate all files
dart run build_runner clean
dart run build_runner build --delete-conflicting-outputs
# Watch mode for development
dart run build_runner watch --delete-conflicting-outputs
# Check for missing build_runner dependencies in pubspec.yaml
# Required: build_runner, json_serializable / freezed / riverpod_generator (as dev_dependencies)
```
## Android Build Troubleshooting
```bash
# Clean Android build cache
cd android && ./gradlew clean && cd ..
# Invalidate Flutter tool cache
flutter clean
# Rebuild
flutter pub get && flutter build apk
# Check Gradle/JDK version compatibility
cd android && ./gradlew --version
```
## iOS Build Troubleshooting
```bash
# Update CocoaPods
cd ios && pod install --repo-update && cd ..
# Clean iOS build
flutter clean && cd ios && pod deintegrate && pod install && cd ..
# Check for platform version mismatches in Podfile
# Ensure ios platform version >= minimum required by all pods
```
## Key Principles
- **Surgical fixes only** — don't refactor, just fix the error
- **Never** add `// ignore:` suppressions without approval
- **Never** use `dynamic` to silence type errors
- **Always** run `flutter analyze` after each fix to verify
- Fix root cause over suppressing symptoms
- Prefer null-safe patterns over bang operators (`!`)
## Stop Conditions
Stop and report if:
- Same error persists after 3 fix attempts
- Fix introduces more errors than it resolves
- Requires architectural changes or package upgrades that change behavior
- Conflicting platform constraints need user decision
## Output Format
```text
[FIXED] lib/features/cart/data/cart_repository_impl.dart:42
Error: A value of type 'String?' can't be assigned to type 'String'
Fix: Changed `final id = response.id` to `final id = response.id ?? ''`
Remaining errors: 2
[FIXED] pubspec.yaml
Error: Version solving failed — http >=0.13.0 required by dio and <0.13.0 required by retrofit
Fix: Upgraded dio to ^5.3.0 which allows http >=0.13.0
Remaining errors: 0
```
Final: `Build Status: SUCCESS/FAILED | Errors Fixed: N | Files Modified: list`
For detailed Dart patterns and code examples, see skill: `flutter-dart-code-review`.


@@ -98,21 +98,21 @@ Write to `gan-harness/generator-state.md` after each iteration:
The Evaluator will specifically penalize these patterns. **Avoid them:**
- ❌ Generic gradient backgrounds (#667eea #764ba2 is an instant tell)
- ❌ Excessive rounded corners on everything
- ❌ Stock hero sections with "Welcome to [App Name]"
- ❌ Default Material UI / Shadcn themes without customization
- ❌ Placeholder images from unsplash/placeholder services
- ❌ Generic card grids with identical layouts
- ❌ "AI-generated" decorative SVG patterns
- Avoid generic gradient backgrounds (#667eea -> #764ba2 is an instant tell)
- Avoid excessive rounded corners on everything
- Avoid stock hero sections with "Welcome to [App Name]"
- Avoid default Material UI / Shadcn themes without customization
- Avoid placeholder images from unsplash/placeholder services
- Avoid generic card grids with identical layouts
- Avoid "AI-generated" decorative SVG patterns
**Instead, aim for:**
- ✅ A specific, opinionated color palette (follow the spec)
- ✅ Thoughtful typography hierarchy (different weights, sizes for different content)
- ✅ Custom layouts that match the content (not generic grids)
- ✅ Meaningful animations tied to user actions (not decoration)
- ✅ Real empty states with personality
- ✅ Error states that help the user (not just "Something went wrong")
- Use a specific, opinionated color palette (follow the spec)
- Use thoughtful typography hierarchy (different weights, sizes for different content)
- Use custom layouts that match the content (not generic grids)
- Use meaningful animations tied to user actions (not decoration)
- Use real empty states with personality
- Use error states that help the user (not just "Something went wrong")
## Interaction with Evaluator


@@ -1,51 +1,23 @@
---
description: Start NanoClaw v2 — ECC's persistent, zero-dependency REPL with model routing, skill hot-load, branching, compaction, export, and metrics.
description: Legacy slash-entry shim for the nanoclaw-repl skill. Prefer the skill directly.
---
# Claw Command
# Claw Command (Legacy Shim)
Start an interactive AI agent session with persistent markdown history and operational controls.
Use this only if you still reach for `/claw` from muscle memory. The maintained implementation lives in `skills/nanoclaw-repl/SKILL.md`.
## Usage
## Canonical Surface
```bash
node scripts/claw.js
```
- Prefer the `nanoclaw-repl` skill directly.
- Keep this file only as a compatibility entry point while command-first usage is retired.
Or via npm:
## Arguments
```bash
npm run claw
```
`$ARGUMENTS`
## Environment Variables
## Delegation
| Variable | Default | Description |
|----------|---------|-------------|
| `CLAW_SESSION` | `default` | Session name (alphanumeric + hyphens) |
| `CLAW_SKILLS` | *(empty)* | Comma-separated skills loaded at startup |
| `CLAW_MODEL` | `sonnet` | Default model for the session |
## REPL Commands
```text
/help Show help
/clear Clear current session history
/history Print full conversation history
/sessions List saved sessions
/model [name] Show/set model
/load <skill-name> Hot-load a skill into context
/branch <session-name> Branch current session
/search <query> Search query across sessions
/compact Compact old turns, keep recent context
/export <md|json|txt> [path] Export session
/metrics Show session metrics
exit Quit
```
## Notes
- NanoClaw remains zero-dependency.
- Sessions are stored at `~/.claude/claw/<session>.md`.
- Compaction keeps the most recent turns and writes a compaction header.
- Export supports markdown, JSON turns, and plain text.
Apply the `nanoclaw-repl` skill and keep the response focused on operating or extending `scripts/claw.js`.
- If the user wants to run it, use `node scripts/claw.js` or `npm run claw`.
- If the user wants to extend it, preserve the zero-dependency and markdown-backed session model.
- If the request is really about long-running orchestration rather than NanoClaw itself, redirect to `dmux-workflows` or `autonomous-agent-harness`.


@@ -219,10 +219,10 @@ Create review artifact at `.claude/PRPs/reviews/pr-<NUMBER>-review.md`:
| Check | Result |
|---|---|
| Type check | ✅ / ❌ / ⏭️ |
| Lint | ✅ / ❌ / ⏭️ |
| Tests | ✅ / ❌ / ⏭️ |
| Build | ✅ / ❌ / ⏭️ |
| Type check | Pass / Fail / Skipped |
| Lint | Pass / Fail / Skipped |
| Tests | Pass / Fail / Skipped |
| Build | Pass / Fail / Skipped |
## Files Reviewed
<list of files with change type: Added/Modified/Deleted>


@@ -1,29 +1,23 @@
---
description: Analyze context window usage across agents, skills, MCP servers, and rules to find optimization opportunities. Helps reduce token overhead and avoid performance warnings.
description: Legacy slash-entry shim for the context-budget skill. Prefer the skill directly.
---
# Context Budget Optimizer
# Context Budget Optimizer (Legacy Shim)
Analyze your Claude Code setup's context window consumption and produce actionable recommendations to reduce token overhead.
Use this only if you still invoke `/context-budget`. The maintained workflow lives in `skills/context-budget/SKILL.md`.
## Usage
## Canonical Surface
```
/context-budget [--verbose]
```
- Prefer the `context-budget` skill directly.
- Keep this file only as a compatibility entry point.
- Default: summary with top recommendations
- `--verbose`: full breakdown per component
## Arguments
$ARGUMENTS
## What to Do
## Delegation
Run the **context-budget** skill (`skills/context-budget/SKILL.md`) with the following inputs:
1. Pass `--verbose` flag if present in `$ARGUMENTS`
2. Assume a 200K context window (Claude Sonnet default) unless the user specifies otherwise
3. Follow the skill's four phases: Inventory → Classify → Detect Issues → Report
4. Output the formatted Context Budget Report to the user
The skill handles all scanning logic, token estimation, issue detection, and report formatting.
Apply the `context-budget` skill.
- Pass through `--verbose` if the user supplied it.
- Assume a 200K context window unless the user specified otherwise.
- Return the skill's inventory, issue detection, and prioritized savings report without re-implementing the scan here.


@@ -1,92 +1,23 @@
---
description: Orchestrate parallel Claude Code agents via Claude DevFleet — plan projects from natural language, dispatch agents in isolated worktrees, monitor progress, and read structured reports.
description: Legacy slash-entry shim for the claude-devfleet skill. Prefer the skill directly.
---
# DevFleet — Multi-Agent Orchestration
# DevFleet (Legacy Shim)
Orchestrate parallel Claude Code agents via Claude DevFleet. Each agent runs in an isolated git worktree with full tooling.
Use this only if you still call `/devfleet`. The maintained workflow lives in `skills/claude-devfleet/SKILL.md`.
Requires the DevFleet MCP server: `claude mcp add devfleet --transport http http://localhost:18801/mcp`
## Canonical Surface
## Flow
- Prefer the `claude-devfleet` skill directly.
- Keep this file only as a compatibility entry point while command-first usage is retired.
```
User describes project
→ plan_project(prompt) → mission DAG with dependencies
→ Show plan, get approval
→ dispatch_mission(M1) → Agent spawns in worktree
→ M1 completes → auto-merge → M2 auto-dispatches (depends_on M1)
→ M2 completes → auto-merge
→ get_report(M2) → files_changed, what_done, errors, next_steps
→ Report summary to user
```
## Arguments
## Workflow
`$ARGUMENTS`
1. **Plan the project** from the user's description:
## Delegation
```
mcp__devfleet__plan_project(prompt="<user's description>")
```
This returns a project with chained missions. Show the user:
- Project name and ID
- Each mission: title, type, dependencies
- The dependency DAG (which missions block which)
2. **Wait for user approval** before dispatching. Show the plan clearly.
3. **Dispatch the first mission** (the one with empty `depends_on`):
```
mcp__devfleet__dispatch_mission(mission_id="<first_mission_id>")
```
The remaining missions auto-dispatch as their dependencies complete (because `plan_project` creates them with `auto_dispatch=true`). When manually creating missions with `create_mission`, you must explicitly set `auto_dispatch=true` for this behavior.
4. **Monitor progress** — check what's running:
```
mcp__devfleet__get_dashboard()
```
Or check a specific mission:
```
mcp__devfleet__get_mission_status(mission_id="<id>")
```
Prefer polling with `get_mission_status` over `wait_for_mission` for long-running missions, so the user sees progress updates.
5. **Read the report** for each completed mission:
```
mcp__devfleet__get_report(mission_id="<mission_id>")
```
Call this for every mission that reached a terminal state. Reports contain: files_changed, what_done, what_open, what_tested, what_untested, next_steps, errors_encountered.
## All Available Tools
| Tool | Purpose |
|------|---------|
| `plan_project(prompt)` | AI breaks description into chained missions with `auto_dispatch=true` |
| `create_project(name, path?, description?)` | Create a project manually, returns `project_id` |
| `create_mission(project_id, title, prompt, depends_on?, auto_dispatch?)` | Add a mission. `depends_on` is a list of mission ID strings. |
| `dispatch_mission(mission_id, model?, max_turns?)` | Start an agent |
| `cancel_mission(mission_id)` | Stop a running agent |
| `wait_for_mission(mission_id, timeout_seconds?)` | Block until done (prefer polling for long tasks) |
| `get_mission_status(mission_id)` | Check progress without blocking |
| `get_report(mission_id)` | Read structured report |
| `get_dashboard()` | System overview |
| `list_projects()` | Browse projects |
| `list_missions(project_id, status?)` | List missions |
## Guidelines
- Always confirm the plan before dispatching unless the user said "go ahead"
- Include mission titles and IDs when reporting status
- If a mission fails, read its report to understand errors before retrying
- Agent concurrency is configurable (default: 3). Excess missions queue and auto-dispatch as slots free up. Check `get_dashboard()` for slot availability.
- Dependencies form a DAG — never create circular dependencies
- Each agent auto-merges its worktree on completion. If a merge conflict occurs, the changes remain on the worktree branch for manual resolution.
Apply the `claude-devfleet` skill.
- Plan from the user's description, show the DAG, and get approval before dispatch unless the user already said to proceed.
- Prefer polling status over blocking waits for long missions.
- Report mission IDs, files changed, failures, and next steps from structured mission reports.


@@ -1,31 +1,23 @@
---
description: Look up current documentation for a library or topic via Context7.
description: Legacy slash-entry shim for the documentation-lookup skill. Prefer the skill directly.
---
# /docs
# Docs Command (Legacy Shim)
## Purpose
Use this only if you still reach for `/docs`. The maintained workflow lives in `skills/documentation-lookup/SKILL.md`.
Look up up-to-date documentation for a library, framework, or API and return a summarized answer with relevant code snippets. Uses the Context7 MCP (resolve-library-id and query-docs) so answers reflect current docs, not training data.
## Canonical Surface
## Usage
- Prefer the `documentation-lookup` skill directly.
- Keep this file only as a compatibility entry point.
```
/docs [library name] [question]
```
## Arguments
Use quotes for multi-word arguments so they are parsed as a single token. Example: `/docs "Next.js" "How do I configure middleware?"`
`$ARGUMENTS`
If library or question is omitted, prompt the user for:
1. The library or product name (e.g. Next.js, Prisma, Supabase).
2. The specific question or task (e.g. "How do I set up middleware?", "Auth methods").
## Delegation
## Workflow
1. **Resolve library ID** — Call the Context7 tool `resolve-library-id` with the library name and the user's question to get a Context7-compatible library ID (e.g. `/vercel/next.js`).
2. **Query docs** — Call `query-docs` with that library ID and the user's question.
3. **Summarize** — Return a concise answer and include relevant code examples from the fetched documentation. Mention the library (and version if relevant).
## Output
The user receives a short, accurate answer backed by current docs, plus any code snippets that help. If Context7 is not available, say so and answer from training data with a note that docs may be outdated.
Apply the `documentation-lookup` skill.
- If the library or the question is missing, ask for the missing part.
- Use live documentation through Context7 instead of training data.
- Return only the current answer and the minimum code/example surface needed.


@@ -1,123 +1,26 @@
---
description: Generate and run end-to-end tests with Playwright. Creates test journeys, runs tests, captures screenshots/videos/traces, and uploads artifacts.
description: Legacy slash-entry shim for the e2e-testing skill. Prefer the skill directly.
---
# E2E Command
# E2E Command (Legacy Shim)
This command invokes the **e2e-runner** agent to generate, maintain, and execute end-to-end tests using Playwright.
Use this only if you still invoke `/e2e`. The maintained workflow lives in `skills/e2e-testing/SKILL.md`.
## What This Command Does
## Canonical Surface
1. **Generate Test Journeys** - Create Playwright tests for user flows
2. **Run E2E Tests** - Execute tests across browsers
3. **Capture Artifacts** - Screenshots, videos, traces on failures
4. **Upload Results** - HTML reports and JUnit XML
5. **Identify Flaky Tests** - Quarantine unstable tests
- Prefer the `e2e-testing` skill directly.
- Keep this file only as a compatibility entry point.
## When to Use
## Arguments
Use `/e2e` when:
- Testing critical user journeys (login, trading, payments)
- Verifying multi-step flows work end-to-end
- Testing UI interactions and navigation
- Validating integration between frontend and backend
- Preparing for production deployment
`$ARGUMENTS`
## How It Works
## Delegation
The e2e-runner agent will:
1. **Analyze user flow** and identify test scenarios
2. **Generate Playwright test** using Page Object Model pattern
3. **Run tests** across multiple browsers (Chrome, Firefox, Safari)
4. **Capture failures** with screenshots, videos, and traces
5. **Generate report** with results and artifacts
6. **Identify flaky tests** and recommend fixes
## Example Usage
```
User: /e2e Test the market search and view flow
Agent (e2e-runner):
# E2E Test Generation: Market Search and View Flow
## Test Scenario Identified
**User Journey:** Market Search → View Results → Click Market → View Details
**Test Coverage:**
1. Navigate to markets page
2. Perform semantic search
3. Verify search results
4. Click on first result
5. Verify market details page loads
6. Verify chart renders
## Generated Test Code
```typescript
// tests/e2e/markets/search-and-view.spec.ts
import { test, expect } from '@playwright/test'
import { MarketsPage } from '../../pages/MarketsPage'
import { MarketDetailsPage } from '../../pages/MarketDetailsPage'
test.describe('Market Search and View Flow', () => {
test('user can search markets and view details', async ({ page }) => {
// 1. Navigate to markets page
const marketsPage = new MarketsPage(page)
await marketsPage.goto()
// Verify page loaded
await expect(page).toHaveTitle(/Markets/)
await expect(page.locator('h1')).toContainText('Markets')
// 2. Perform semantic search
await marketsPage.searchMarkets('election')
// Wait for API response
await page.waitForResponse(resp =>
resp.url().includes('/api/markets/search') && resp.status() === 200
)
// 3. Verify search results
const marketCards = marketsPage.marketCards
await expect(marketCards.first()).toBeVisible()
const resultCount = await marketCards.count()
expect(resultCount).toBeGreaterThan(0)
// Take screenshot of search results
await page.screenshot({ path: 'artifacts/search-results.png' })
// 4. Click on first result
const firstMarketTitle = await marketCards.first().textContent()
await marketCards.first().click()
// 5. Verify market details page loads
await expect(page).toHaveURL(/\/markets\/[a-z0-9-]+/)
const detailsPage = new MarketDetailsPage(page)
await expect(detailsPage.marketName).toBeVisible()
await expect(detailsPage.marketDescription).toBeVisible()
// 6. Verify chart renders
await expect(detailsPage.priceChart).toBeVisible()
// Verify market name matches
const detailsTitle = await detailsPage.marketName.textContent()
expect(detailsTitle?.toLowerCase()).toContain(
firstMarketTitle?.toLowerCase().substring(0, 20) || ''
)
// Take screenshot of market details
await page.screenshot({ path: 'artifacts/market-details.png' })
})
test('search with no results shows empty state', async ({ page }) => {
const marketsPage = new MarketsPage(page)
await marketsPage.goto()
// Search for non-existent market
Apply the `e2e-testing` skill.
- Generate or update Playwright coverage for the requested user flow.
- Run only the relevant tests unless the user explicitly asked for the entire suite.
- Capture the usual artifacts and report failures, flake risk, and next fixes without duplicating the full skill body here.
await marketsPage.searchMarkets('xyznonexistentmarket123456')
// Verify empty state


@@ -1,120 +1,23 @@
# Eval Command
---
description: Legacy slash-entry shim for the eval-harness skill. Prefer the skill directly.
---
Manage eval-driven development workflow.
# Eval Command (Legacy Shim)
## Usage
Use this only if you still invoke `/eval`. The maintained workflow lives in `skills/eval-harness/SKILL.md`.
`/eval [define|check|report|list] [feature-name]`
## Canonical Surface
## Define Evals
`/eval define feature-name`
Create a new eval definition:
1. Create `.claude/evals/feature-name.md` with template:
```markdown
## EVAL: feature-name
Created: $(date)
### Capability Evals
- [ ] [Description of capability 1]
- [ ] [Description of capability 2]
### Regression Evals
- [ ] [Existing behavior 1 still works]
- [ ] [Existing behavior 2 still works]
### Success Criteria
- pass@3 > 90% for capability evals
- pass^3 = 100% for regression evals
```
2. Prompt user to fill in specific criteria
## Check Evals
`/eval check feature-name`
Run evals for a feature:
1. Read eval definition from `.claude/evals/feature-name.md`
2. For each capability eval:
- Attempt to verify criterion
- Record PASS/FAIL
- Log attempt in `.claude/evals/feature-name.log`
3. For each regression eval:
- Run relevant tests
- Compare against baseline
- Record PASS/FAIL
4. Report current status:
```
EVAL CHECK: feature-name
========================
Capability: X/Y passing
Regression: X/Y passing
Status: IN PROGRESS / READY
```
## Report Evals
`/eval report feature-name`
Generate comprehensive eval report:
```
EVAL REPORT: feature-name
=========================
Generated: $(date)
CAPABILITY EVALS
----------------
[eval-1]: PASS (pass@1)
[eval-2]: PASS (pass@2) - required retry
[eval-3]: FAIL - see notes
REGRESSION EVALS
----------------
[test-1]: PASS
[test-2]: PASS
[test-3]: PASS
METRICS
-------
Capability pass@1: 67%
Capability pass@3: 100%
Regression pass^3: 100%
NOTES
-----
[Any issues, edge cases, or observations]
RECOMMENDATION
--------------
[SHIP / NEEDS WORK / BLOCKED]
```
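The pass@k and pass^k figures in the report reduce to two small helpers (a hypothetical sketch; in practice the per-attempt results would come from the eval `.log` files):

```typescript
// attempts[i] holds the per-retry pass/fail results for eval i.

// pass@k: fraction of evals that pass at least once in their first k attempts.
function passAtK(attempts: boolean[][], k: number): number {
  const ok = attempts.filter(a => a.slice(0, k).some(Boolean)).length;
  return ok / attempts.length;
}

// pass^k: fraction of evals that pass on every one of their first k attempts.
function passAllK(attempts: boolean[][], k: number): number {
  const ok = attempts.filter(a => a.slice(0, k).every(Boolean)).length;
  return ok / attempts.length;
}
```

pass@k is forgiving (retries count), while pass^k demands stability, which is why regressions are held to pass^3 = 100%.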
## List Evals
`/eval list`
Show all eval definitions:
```
EVAL DEFINITIONS
================
feature-auth [3/5 passing] IN PROGRESS
feature-search [5/5 passing] READY
feature-export [0/4 passing] NOT STARTED
```
- Prefer the `eval-harness` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
$ARGUMENTS:
- `define <name>` - Create new eval definition
- `check <name>` - Run and check evals
- `report <name>` - Generate full report
- `list` - Show all evals
- `clean` - Remove old eval logs (keeps last 10 runs)
`$ARGUMENTS`
## Delegation
Apply the `eval-harness` skill.
- Support the same user intents as before: define, check, report, list, and cleanup.
- Keep evals capability-first, regression-backed, and evidence-based.
- Use the skill as the canonical evaluator instead of maintaining a separate command-specific playbook.

commands/flutter-build.md (new file, 164 lines)

@@ -0,0 +1,164 @@
---
description: Fix Dart analyzer errors and Flutter build failures incrementally. Invokes the dart-build-resolver agent for minimal, surgical fixes.
---
# Flutter Build and Fix
This command invokes the **dart-build-resolver** agent to incrementally fix Dart/Flutter build errors with minimal changes.
## What This Command Does
1. **Run Diagnostics**: Execute `flutter analyze`, `flutter pub get`
2. **Parse Errors**: Group by file and sort by severity
3. **Fix Incrementally**: One error at a time
4. **Verify Each Fix**: Re-run analysis after each change
5. **Report Summary**: Show what was fixed and what remains
## When to Use
Use `/flutter-build` when:
- `flutter analyze` reports errors
- `flutter build` fails for any platform
- `dart pub get` / `flutter pub get` fails with version conflicts
- `build_runner` fails to generate code
- After pulling changes that break the build
## Diagnostic Commands Run
```bash
# Analysis
flutter analyze 2>&1
# Dependencies
flutter pub get 2>&1
# Code generation (if project uses build_runner)
dart run build_runner build --delete-conflicting-outputs 2>&1
# Platform builds
flutter build apk 2>&1
flutter build web 2>&1
```
## Example Session
````text
User: /flutter-build
Agent:
# Flutter Build Resolution
## Initial Diagnostics
```
$ flutter analyze
Analyzing project...
error • lib/features/user/data/user_repository_impl.dart:34:16 •
A value of type 'String?' can't be assigned to a variable of type 'String'. •
invalid_assignment
error • lib/features/cart/presentation/cart_page.dart:58:5 •
The method 'add' isn't defined for the type 'List<Item>'.
Try correcting the name to the name of an existing method, or defining a method named 'add'. •
undefined_method
2 errors found.
```
## Fix 1: Null Safety
File: lib/features/user/data/user_repository_impl.dart:34
Error: A value of type 'String?' can't be assigned to type 'String'
Changed:
```dart
final id = response.id;
```
To:
```dart
final id = response.id ?? '';
```
```
$ flutter analyze
1 error found.
```
## Fix 2: Immutable List
File: lib/features/cart/presentation/cart_page.dart:58
Error: The method 'add' isn't defined for the type 'List<Item>'
Cause: State holds an unmodifiable list; mutation goes through Cubit
Changed:
```dart
state.items.add(item);
```
To:
```dart
context.read<CartCubit>().addItem(item);
// Note: Cubit exposes named methods (addItem, removeItem);
// .add(event) is the BLoC event API — don't mix them.
```
```
$ flutter analyze
No issues found!
```
## Final Verification
```
$ flutter test
All tests passed.
```
## Summary
| Metric | Count |
|--------|-------|
| Analysis errors fixed | 2 |
| Files modified | 2 |
| Remaining issues | 0 |
Build Status: PASS ✓
````
## Common Errors Fixed
| Error | Typical Fix |
|-------|-------------|
| `A value of type 'X?' can't be assigned to 'X'` | Add `?? default` or null guard |
| `The name 'X' isn't defined` | Add import or fix typo |
| `Non-nullable instance field must be initialized` | Add initializer or `late` |
| `Version solving failed` | Adjust version constraints in pubspec.yaml |
| `Missing concrete implementation of 'X'` | Implement missing interface method |
| `build_runner: Part of X expected` | Delete stale `.g.dart` and rebuild |
## Fix Strategy
1. **Analysis errors first** — code must be error-free
2. **Warning triage second** — fix warnings that could cause runtime bugs
3. **pub conflicts third** — fix dependency resolution
4. **One fix at a time** — verify each change
5. **Minimal changes** — don't refactor, just fix
## Stop Conditions
The agent will stop and report if:
- Same error persists after 3 attempts
- Fix introduces more errors
- Requires architectural changes
- Package upgrade conflicts need user decision
## Related Commands
- `/flutter-test` — Run tests after build succeeds
- `/flutter-review` — Review code quality
- `/verify` — Full verification loop
## Related
- Agent: `agents/dart-build-resolver.md`
- Skill: `skills/flutter-dart-code-review/`

commands/flutter-review.md (new file, 116 lines)

@@ -0,0 +1,116 @@
---
description: Review Flutter/Dart code for idiomatic patterns, widget best practices, state management, performance, accessibility, and security. Invokes the flutter-reviewer agent.
---
# Flutter Code Review
This command invokes the **flutter-reviewer** agent to review Flutter/Dart code changes.
## What This Command Does
1. **Gather Context**: Review `git diff --staged` and `git diff`
2. **Inspect Project**: Check `pubspec.yaml`, `analysis_options.yaml`, state management solution
3. **Security Pre-scan**: Check for hardcoded secrets and critical security issues
4. **Full Review**: Apply the complete review checklist
5. **Report Findings**: Output issues grouped by severity with fix guidance
## Prerequisites
Before running `/flutter-review`, ensure:
1. **Build passes** — run `/flutter-build` first; a review on broken code is incomplete
2. **Tests pass** — run `/flutter-test` to confirm no regressions
3. **No merge conflicts** — resolve all conflicts so the diff reflects only intentional changes
4. **`flutter analyze` is clean** — fix analyzer warnings before review
## When to Use
Use `/flutter-review` when:
- Before submitting a PR with Flutter/Dart changes (after build and tests pass)
- After implementing a new feature to catch issues early
- When reviewing someone else's Flutter code
- To audit a widget, state management component, or service class
- Before a production release
## Review Areas
| Area | Severity |
|------|----------|
| Hardcoded secrets, cleartext HTTP | CRITICAL |
| Architecture violations, state management anti-patterns | CRITICAL |
| Widget rebuild issues, resource leaks | HIGH |
| Missing `dispose()`, `BuildContext` after await | HIGH |
| Dart null safety, missing error/loading states | HIGH |
| Const propagation, widget composition | HIGH |
| Performance: expensive work in `build()` | HIGH |
| Accessibility, semantic labels | MEDIUM |
| Missing tests for state transitions | HIGH |
| Hardcoded strings (l10n) | MEDIUM |
| Pub dependency hygiene | LOW |
## Example Session
````text
User: /flutter-review
Agent:
# Flutter Code Review
## Context
Changed files:
- lib/features/auth/presentation/login_page.dart
- lib/features/auth/data/auth_repository_impl.dart
State management: Riverpod (detected from pubspec.yaml)
Architecture: feature-first
## Security Pre-scan
✓ No hardcoded secrets detected
✓ No cleartext HTTP calls
## Review Findings
[HIGH] BuildContext used after async gap without mounted check
File: lib/features/auth/presentation/login_page.dart:67
Issue: `context.go('/home')` called after `await auth.login(...)` with no `mounted` check.
Fix: Add `if (!context.mounted) return;` before any navigation after awaits (Flutter 3.7+).
[HIGH] AsyncValue error state not handled
File: lib/features/auth/presentation/login_page.dart:42
Issue: `ref.watch(authProvider)` switches on loading/data but has no `error` branch.
Fix: Add error case to the switch expression or `when()` call to show a user-facing error message.
[MEDIUM] Hardcoded string not localized
File: lib/features/auth/presentation/login_page.dart:89
Issue: `Text('Login')` — user-visible string not using localization system.
Fix: Use the project's l10n accessor: `Text(context.l10n.loginButton)`.
## Review Summary
| Severity | Count | Status |
|----------|-------|--------|
| CRITICAL | 0 | pass |
| HIGH | 2 | block |
| MEDIUM | 1 | info |
| LOW | 0 | note |
Verdict: BLOCK — HIGH issues must be fixed before merge.
````
## Approval Criteria
- **Approve**: No CRITICAL or HIGH issues
- **Block**: Any CRITICAL or HIGH issues must be fixed before merge
## Related Commands
- `/flutter-build` — Fix build errors first
- `/flutter-test` — Run tests before reviewing
- `/code-review` — General code review (language-agnostic)
## Related
- Agent: `agents/flutter-reviewer.md`
- Skill: `skills/flutter-dart-code-review/`
- Rules: `rules/dart/`

commands/flutter-test.md (new file, 144 lines)

@@ -0,0 +1,144 @@
---
description: Run Flutter/Dart tests, report failures, and incrementally fix test issues. Covers unit, widget, golden, and integration tests.
---
# Flutter Test
This command runs the Flutter test suite and reports results. When failures occur, it diagnoses and fixes issues incrementally.
## What This Command Does
1. **Run Tests**: Execute `flutter test` (or scope to changed files)
2. **Parse Failures**: Identify failing tests by type and cause
3. **Fix Incrementally**: One failure at a time where possible
4. **Verify**: Re-run after each fix
5. **Report**: Show coverage summary and remaining failures
## When to Use
Use `/flutter-test` when:
- After implementing a feature to verify nothing broke
- After running `/flutter-build` to ensure tests pass
- To check test coverage on new code
- When a specific test file is failing
- Before submitting a PR
## Commands Run
```bash
# Run all tests
flutter test 2>&1
# Run with coverage
flutter test --coverage 2>&1
# Run specific test file
flutter test test/unit/domain/usecases/get_user_test.dart 2>&1
# Run tests matching a name pattern
flutter test --name "CartBloc" 2>&1
# Run integration tests (requires device/emulator)
flutter test integration_test/ 2>&1
# Update golden files when intentional visual changes are made
flutter test --update-goldens 2>&1
```
## Example Session
````text
User: /flutter-test
Agent:
# Flutter Test Run
## Test Execution
```
$ flutter test
00:05 +42 -2: Some tests failed.
FAILED: test/unit/data/user_repository_test.dart
UserRepository getById
returns null when user not found
Expected: null
Actual: User(id: 'test-id', name: 'Test User')
FAILED: test/widget/cart_page_test.dart
CartPage
shows empty state when cart is empty
Expected: exactly one widget with text 'Your cart is empty'
Found: no widget with text 'Your cart is empty'
```
## Fix 1: Repository Test
File: test/unit/data/user_repository_test.dart
Failure: `getById` returns a user when it should return null for missing ID
Root cause: Test setup adds a user with ID 'test-id' and the test then queries with that same ID, so a user is found instead of null.
Fix: Updated the test to query with 'missing-id' — setup was correct, the query was wrong.
```
$ flutter test test/unit/data/user_repository_test.dart
1 test passed.
```
## Fix 2: Widget Test
File: test/widget/cart_page_test.dart
Failure: Empty state text widget not found
Root cause: Empty state message was renamed from 'Your cart is empty' to 'Cart is empty' in the widget.
Fix: Updated test string to match current widget copy.
```
$ flutter test test/widget/cart_page_test.dart
1 test passed.
```
## Final Run
```
$ flutter test --coverage
All 44 tests passed.
Coverage: 84.2% (target: 80%)
```
## Summary
| Metric | Value |
|--------|-------|
| Total tests | 44 |
| Passed | 44 |
| Failed | 0 |
| Coverage | 84.2% |
Test Status: PASS ✓
````
## Common Test Failures
| Failure | Typical Fix |
|---------|-------------|
| `Expected: <X> Actual: <Y>` | Update assertion or fix implementation |
| `Widget not found` | Fix finder selector or update test after widget rename |
| `Golden file not found` | Run `flutter test --update-goldens` to generate |
| `Golden mismatch` | Inspect diff; run `--update-goldens` if change was intentional |
| `MissingPluginException` | Mock platform channel in test setup |
| `LateInitializationError` | Initialize `late` fields in `setUp()` |
| `pumpAndSettle timed out` | Replace with explicit `pump(Duration)` calls |
## Related Commands
- `/flutter-build` — Fix build errors before running tests
- `/flutter-review` — Review code after tests pass
- `/tdd` — Test-driven development workflow
## Related
- Agent: `agents/flutter-reviewer.md`
- Agent: `agents/dart-build-resolver.md`
- Skill: `skills/flutter-dart-code-review/`
- Rules: `rules/dart/testing.md`

commands/jira.md (new file, 106 lines)

@@ -0,0 +1,106 @@
---
description: Retrieve a Jira ticket, analyze requirements, update status, or add comments. Uses the jira-integration skill and MCP or REST API.
---
# Jira Command
Interact with Jira tickets directly from your workflow — fetch tickets, analyze requirements, add comments, and transition status.
## Usage
```
/jira get <TICKET-KEY> # Fetch and analyze a ticket
/jira comment <TICKET-KEY> # Add a progress comment
/jira transition <TICKET-KEY> # Change ticket status
/jira search <JQL> # Search issues with JQL
```
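The routing between these subcommands can be sketched as follows (a hypothetical helper; the real dispatch lives in the jira-integration skill):

```typescript
// Illustrative router for the /jira subcommands shown above.
type JiraAction =
  | { kind: "get" | "comment" | "transition"; ticket: string }
  | { kind: "search"; jql: string };

function parseJira(args: string): JiraAction {
  const [verb, ...rest] = args.trim().split(/\s+/);
  const tail = rest.join(" ");
  if (verb === "search") return { kind: "search", jql: tail };
  if (verb === "get" || verb === "comment" || verb === "transition") {
    return { kind: verb, ticket: tail };
  }
  throw new Error(`Unknown /jira subcommand: ${verb}`);
}
```

Note that `search` keeps the whole tail as a JQL string, since JQL queries routinely contain spaces.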
## What This Command Does
1. **Get & Analyze** — Fetch a Jira ticket and extract requirements, acceptance criteria, test scenarios, and dependencies
2. **Comment** — Add structured progress updates to a ticket
3. **Transition** — Move a ticket through workflow states (To Do → In Progress → Done)
4. **Search** — Find issues using JQL queries
## How It Works
### `/jira get <TICKET-KEY>`
1. Fetch the ticket from Jira (via MCP `jira_get_issue` or REST API)
2. Extract all fields: summary, description, acceptance criteria, priority, labels, linked issues
3. Optionally fetch comments for additional context
4. Produce a structured analysis:
```
Ticket: PROJ-1234
Summary: [title]
Status: [status]
Priority: [priority]
Type: [Story/Bug/Task]
Requirements:
1. [extracted requirement]
2. [extracted requirement]
Acceptance Criteria:
- [ ] [criterion from ticket]
Test Scenarios:
- Happy Path: [description]
- Error Case: [description]
- Edge Case: [description]
Dependencies:
- [linked issues, APIs, services]
Recommended Next Steps:
- /plan to create implementation plan
- /tdd to implement with tests first
```
### `/jira comment <TICKET-KEY>`
1. Summarize current session progress (what was built, tested, committed)
2. Format as a structured comment
3. Post to the Jira ticket
### `/jira transition <TICKET-KEY>`
1. Fetch available transitions for the ticket
2. Show options to user
3. Execute the selected transition
### `/jira search <JQL>`
1. Execute the JQL query against Jira
2. Return a summary table of matching issues
## Prerequisites
This command requires Jira credentials. Choose one:
**Option A — MCP Server (recommended):**
Add `jira` to your `mcpServers` config (see `mcp-configs/mcp-servers.json` for the template).
**Option B — Environment variables:**
```bash
export JIRA_URL="https://yourorg.atlassian.net"
export JIRA_EMAIL="your.email@example.com"
export JIRA_API_TOKEN="your-api-token"
```
If credentials are missing, stop and direct the user to set them up.
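For Option B, Jira Cloud's REST API uses Basic auth with `email:api-token` base64-encoded, against the documented `/rest/api/3/issue/{key}` endpoint. A minimal sketch (the helper itself is illustrative):

```typescript
// Build a request description for fetching an issue via Jira Cloud REST.
// Jira Cloud expects Basic auth with "email:api-token", base64-encoded.
function jiraIssueRequest(
  key: string,
  env: { JIRA_URL: string; JIRA_EMAIL: string; JIRA_API_TOKEN: string },
): { url: string; headers: Record<string, string> } {
  const credentials = Buffer.from(
    `${env.JIRA_EMAIL}:${env.JIRA_API_TOKEN}`,
  ).toString("base64");
  return {
    url: `${env.JIRA_URL}/rest/api/3/issue/${key}`,
    headers: {
      Authorization: `Basic ${credentials}`,
      Accept: "application/json",
    },
  };
}
```

The returned description can be passed straight to `fetch`; keeping it as plain data also makes the credential handling easy to test without network access.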
## Integration with Other Commands
After analyzing a ticket:
- Use `/plan` to create an implementation plan from the requirements
- Use `/tdd` to implement with test-driven development
- Use `/code-review` after implementation
- Use `/jira comment` to post progress back to the ticket
- Use `/jira transition` to move the ticket when work is complete
## Related
- **Skill:** `skills/jira-integration/`
- **MCP config:** `mcp-configs/mcp-servers.json` (`jira` entry)


@@ -1,139 +1,43 @@
---
description: Sequential and tmux/worktree orchestration guidance for multi-agent workflows.
description: Legacy slash-entry shim for dmux-workflows and autonomous-agent-harness. Prefer the skills directly.
---
# Orchestrate Command
# Orchestrate Command (Legacy Shim)
Sequential agent workflow for complex tasks.
Use this only if you still invoke `/orchestrate`. The maintained orchestration guidance lives in `skills/dmux-workflows/SKILL.md` and `skills/autonomous-agent-harness/SKILL.md`.
## Usage
## Canonical Surface
`/orchestrate [workflow-type] [task-description]`
- Prefer `dmux-workflows` for parallel panes, worktrees, and multi-agent splits.
- Prefer `autonomous-agent-harness` for longer-running loops, governance, scheduling, and control-plane style execution.
- Keep this file only as a compatibility entry point.
## Workflow Types
## Arguments
### feature
Full feature implementation workflow:
```
planner -> tdd-guide -> code-reviewer -> security-reviewer
```
`$ARGUMENTS`
### bugfix
Bug investigation and fix workflow:
```
planner -> tdd-guide -> code-reviewer
```
## Delegation
### refactor
Safe refactoring workflow:
```
architect -> code-reviewer -> tdd-guide
```
### security
Security-focused review:
```
security-reviewer -> code-reviewer -> architect
```
## Execution Pattern
For each agent in the workflow:
1. **Invoke agent** with context from previous agent
2. **Collect output** as structured handoff document
3. **Pass to next agent** in chain
4. **Aggregate results** into final report
## Handoff Document Format
Between agents, create handoff document:
```markdown
## HANDOFF: [previous-agent] -> [next-agent]
### Context
[Summary of what was done]
### Findings
[Key discoveries or decisions]
### Files Modified
[List of files touched]
### Open Questions
[Unresolved items for next agent]
### Recommendations
[Suggested next steps]
```
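The template above is mechanical enough to render from structured fields, e.g. (an illustrative sketch, not part of the command itself):

```typescript
// Illustrative renderer for the handoff document format above.
interface Handoff {
  from: string;
  to: string;
  context: string;
  findings: string[];
  filesModified: string[];
  openQuestions: string[];
  recommendations: string[];
}

function renderHandoff(h: Handoff): string {
  const bullets = (items: string[]) => items.map(i => `- ${i}`).join("\n");
  return [
    `## HANDOFF: ${h.from} -> ${h.to}`,
    `### Context\n${h.context}`,
    `### Findings\n${bullets(h.findings)}`,
    `### Files Modified\n${bullets(h.filesModified)}`,
    `### Open Questions\n${bullets(h.openQuestions)}`,
    `### Recommendations\n${bullets(h.recommendations)}`,
  ].join("\n\n");
}
```

Keeping handoffs structured this way lets each downstream agent parse the sections it needs instead of re-reading free-form notes.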
## Example: Feature Workflow
```
/orchestrate feature "Add user authentication"
```
Executes:
1. **Planner Agent**
- Analyzes requirements
- Creates implementation plan
- Identifies dependencies
- Output: `HANDOFF: planner -> tdd-guide`
2. **TDD Guide Agent**
- Reads planner handoff
- Writes tests first
- Implements to pass tests
- Output: `HANDOFF: tdd-guide -> code-reviewer`
3. **Code Reviewer Agent**
- Reviews implementation
- Checks for issues
- Suggests improvements
- Output: `HANDOFF: code-reviewer -> security-reviewer`
4. **Security Reviewer Agent**
- Security audit
- Vulnerability check
- Final approval
- Output: Final Report
## Final Report Format
```
ORCHESTRATION REPORT
====================
Workflow: feature
Task: Add user authentication
Agents: planner -> tdd-guide -> code-reviewer -> security-reviewer
SUMMARY
-------
[One paragraph summary]
AGENT OUTPUTS
-------------
Planner: [summary]
TDD Guide: [summary]
Code Reviewer: [summary]
Apply the orchestration skills instead of maintaining a second workflow spec here.
- Start with `dmux-workflows` for split/parallel execution.
- Pull in `autonomous-agent-harness` when the user is really asking for persistent loops, governance, or operator-layer behavior.
- Keep handoffs structured, but let the skills define the maintained sequencing rules.
Security Reviewer: [summary]
FILES CHANGED
-------------
### FILES CHANGED
[List all files modified]
TEST RESULTS
------------
### TEST RESULTS
[Test pass/fail summary]
SECURITY STATUS
---------------
### SECURITY STATUS
[Security findings]
RECOMMENDATION
--------------
### RECOMMENDATION
[SHIP / NEEDS WORK / BLOCKED]
```
@@ -207,7 +111,7 @@ Telemetry:
This keeps planner, implementer, reviewer, and loop workers legible from the operator surface.
## Arguments
## Workflow Arguments
$ARGUMENTS:
- `feature <description>` - Full feature workflow


@@ -1,38 +1,23 @@
---
description: Analyze a draft prompt and output an optimized, ECC-enriched version ready to paste and run. Does NOT execute the task — outputs advisory analysis only.
description: Legacy slash-entry shim for the prompt-optimizer skill. Prefer the skill directly.
---
# /prompt-optimize
# Prompt Optimize (Legacy Shim)
Analyze and optimize the following prompt for maximum ECC leverage.
Use this only if you still invoke `/prompt-optimize`. The maintained workflow lives in `skills/prompt-optimizer/SKILL.md`.
## Your Task
## Canonical Surface
Apply the **prompt-optimizer** skill to the user's input below. Follow the 6-phase analysis pipeline:
- Prefer the `prompt-optimizer` skill directly.
- Keep this file only as a compatibility entry point.
0. **Project Detection** — Read CLAUDE.md, detect tech stack from project files (package.json, go.mod, pyproject.toml, etc.)
1. **Intent Detection** — Classify the task type (new feature, bug fix, refactor, research, testing, review, documentation, infrastructure, design)
2. **Scope Assessment** — Evaluate complexity (TRIVIAL / LOW / MEDIUM / HIGH / EPIC), using codebase size as signal if detected
3. **ECC Component Matching** — Map to specific skills, commands, agents, and model tier
4. **Missing Context Detection** — Identify gaps. If 3+ critical items missing, ask the user to clarify before generating
5. **Workflow & Model** — Determine lifecycle position, recommend model tier, and split into multiple prompts if HIGH/EPIC
## Arguments
## Output Requirements
`$ARGUMENTS`
- Present diagnosis, recommended ECC components, and an optimized prompt using the Output Format from the prompt-optimizer skill
- Provide both **Full Version** (detailed) and **Quick Version** (compact, varied by intent type)
- Respond in the same language as the user's input
- The optimized prompt must be complete and ready to copy-paste into a new session
- End with a footer offering adjustment or a clear next step for starting a separate execution request
## Delegation
## CRITICAL
Do NOT execute the user's task. Output ONLY the analysis and optimized prompt.
If the user asks for direct execution, explain that `/prompt-optimize` only produces advisory output and tell them to start a normal task request instead.
Note: `blueprint` is a **skill**, not a slash command. Write "Use the blueprint skill"
instead of presenting it as a `/...` command.
## User Input
$ARGUMENTS
Apply the `prompt-optimizer` skill.
- Keep it advisory-only: optimize the prompt, do not execute the task.
- Return the recommended ECC components plus a ready-to-run prompt.
- If the user actually wants direct execution, say so and tell them to make a normal task request instead of staying inside the shim.


@@ -115,7 +115,7 @@ For each task in **Step-by-Step Tasks**:
```
If type-check fails → fix the error before moving to the next file.
4. **Track progress** — Log: ` Task N: [task name] — complete`
4. **Track progress** — Log: `[done] Task N: [task name] — complete`
### Handling Deviations
@@ -234,18 +234,18 @@ Write report to `.claude/PRPs/reports/{plan-name}-report.md`:
| # | Task | Status | Notes |
|---|---|---|---|
| 1 | [task name] | Complete | |
| 2 | [task name] | Complete | Deviated — [reason] |
| 1 | [task name] | [done] Complete | |
| 2 | [task name] | [done] Complete | Deviated — [reason] |
## Validation Results
| Level | Status | Notes |
|---|---|---|
| Static Analysis | Pass | |
| Unit Tests | Pass | N tests written |
| Build | Pass | |
| Integration | Pass | or N/A |
| Edge Cases | Pass | |
| Static Analysis | [done] Pass | |
| Unit Tests | [done] Pass | N tests written |
| Build | [done] Pass | |
| Integration | [done] Pass | or N/A |
| Edge Cases | [done] Pass | |
## Files Changed
@@ -297,17 +297,17 @@ Report to user:
- **Plan**: [plan file path] → archived to completed/
- **Branch**: [current branch name]
- **Status**: All tasks complete
- **Status**: [done] All tasks complete
### Validation Summary
| Check | Status |
|---|---|
| Type Check | |
| Lint | |
| Tests | (N written) |
| Build | |
| Integration | or N/A |
| Type Check | [done] |
| Lint | [done] |
| Tests | [done] (N written) |
| Build | [done] |
| Integration | [done] or N/A |
### Files Changed
- [N] files created, [M] files updated
@@ -322,8 +322,8 @@ Report to user:
### PRD Progress (if applicable)
| Phase | Status |
|---|---|
| Phase 1 | Complete |
| Phase 2 | ⏳ Next |
| Phase 1 | [done] Complete |
| Phase 2 | [next] |
| ... | ... |
> Next step: Run `/prp-pr` to create a pull request, or `/code-review` to review changes first.


@@ -177,7 +177,7 @@ Next steps:
## Edge Cases
- **No `gh` CLI**: Stop with: "GitHub CLI (`gh`) is required. Install: https://cli.github.com/"
- **No `gh` CLI**: Stop with: "GitHub CLI (`gh`) is required. Install: <https://cli.github.com/>"
- **Not authenticated**: Stop with: "Run `gh auth login` first."
- **Force push needed**: If remote has diverged and rebase was done, use `git push --force-with-lease` (never `--force`).
- **Multiple PR templates**: If `.github/PULL_REQUEST_TEMPLATE/` has multiple files, list them and ask user to choose.


@@ -1,11 +1,20 @@
---
description: "Scan skills to extract cross-cutting principles and distill them into rules"
description: Legacy slash-entry shim for the rules-distill skill. Prefer the skill directly.
---
# /rules-distill — Distill Principles from Skills into Rules
# Rules Distill (Legacy Shim)
Scan installed skills, extract cross-cutting principles, and distill them into rules.
Use this only if you still invoke `/rules-distill`. The maintained workflow lives in `skills/rules-distill/SKILL.md`.
## Process
## Canonical Surface
Follow the full workflow defined in the `rules-distill` skill.
- Prefer the `rules-distill` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
`$ARGUMENTS`
## Delegation
Apply the `rules-distill` skill and follow its inventory, cross-read, and verdict workflow instead of duplicating that logic here.


@@ -1,123 +1,26 @@
---
description: Enforce test-driven development workflow. Scaffold interfaces, generate tests FIRST, then implement minimal code to pass. Ensure 80%+ coverage.
description: Legacy slash-entry shim for the tdd-workflow skill. Prefer the skill directly.
---
# TDD Command
# TDD Command (Legacy Shim)
This command invokes the **tdd-guide** agent to enforce test-driven development methodology.
Use this only if you still invoke `/tdd`. The maintained workflow lives in `skills/tdd-workflow/SKILL.md`.
## What This Command Does
## Canonical Surface
1. **Scaffold Interfaces** - Define types/interfaces first
2. **Generate Tests First** - Write failing tests (RED)
3. **Implement Minimal Code** - Write just enough to pass (GREEN)
4. **Refactor** - Improve code while keeping tests green (REFACTOR)
5. **Verify Coverage** - Ensure 80%+ test coverage
- Prefer the `tdd-workflow` skill directly.
- Keep this file only as a compatibility entry point.
## When to Use
## Arguments
Use `/tdd` when:
- Implementing new features
- Adding new functions/components
- Fixing bugs (write test that reproduces bug first)
- Refactoring existing code
- Building critical business logic
`$ARGUMENTS`
## How It Works
## Delegation
The tdd-guide agent will:
1. **Define interfaces** for inputs/outputs
2. **Write tests that will FAIL** (because code doesn't exist yet)
3. **Run tests** and verify they fail for the right reason
4. **Write minimal implementation** to make tests pass
5. **Run tests** and verify they pass
6. **Refactor** code while keeping tests green
7. **Check coverage** and add more tests if below 80%
## TDD Cycle
```
RED → GREEN → REFACTOR → REPEAT
RED: Write a failing test
GREEN: Write minimal code to pass
REFACTOR: Improve code, keep tests passing
REPEAT: Next feature/scenario
```
## Example Usage
```
User: /tdd I need a function to calculate market liquidity score
Agent (tdd-guide):
# TDD Session: Market Liquidity Score Calculator
## Step 1: Define Interface (SCAFFOLD)
```typescript
// lib/liquidity.ts
export interface MarketData {
totalVolume: number
bidAskSpread: number
activeTraders: number
lastTradeTime: Date
}
export function calculateLiquidityScore(market: MarketData): number {
// TODO: Implementation
throw new Error('Not implemented')
}
```
## Step 2: Write Failing Test (RED)
```typescript
// lib/liquidity.test.ts
import { calculateLiquidityScore } from './liquidity'
describe('calculateLiquidityScore', () => {
it('should return high score for liquid market', () => {
const market = {
totalVolume: 100000,
bidAskSpread: 0.01,
activeTraders: 500,
lastTradeTime: new Date()
}
const score = calculateLiquidityScore(market)
expect(score).toBeGreaterThan(80)
expect(score).toBeLessThanOrEqual(100)
})
it('should return low score for illiquid market', () => {
const market = {
totalVolume: 100,
bidAskSpread: 0.5,
activeTraders: 2,
lastTradeTime: new Date(Date.now() - 86400000) // 1 day ago
}
const score = calculateLiquidityScore(market)
expect(score).toBeLessThan(30)
expect(score).toBeGreaterThanOrEqual(0)
})
it('should handle edge case: zero volume', () => {
const market = {
totalVolume: 0,
bidAskSpread: 0,
activeTraders: 0,
lastTradeTime: new Date()
}
const score = calculateLiquidityScore(market)
expect(score).toBe(0)
})
Apply the `tdd-workflow` skill.
- Stay strict on RED -> GREEN -> REFACTOR.
- Keep tests first, coverage explicit, and checkpoint evidence clear.
- Use the skill as the maintained TDD body instead of duplicating the playbook here.
})
```


@@ -1,59 +1,23 @@
# Verification Command
---
description: Legacy slash-entry shim for the verification-loop skill. Prefer the skill directly.
---
Run comprehensive verification on current codebase state.
# Verification Command (Legacy Shim)
## Instructions
Use this only if you still invoke `/verify`. The maintained workflow lives in `skills/verification-loop/SKILL.md`.
Execute verification in this exact order:
## Canonical Surface
1. **Build Check**
- Run the build command for this project
- If it fails, report errors and STOP
2. **Type Check**
- Run TypeScript/type checker
- Report all errors with file:line
3. **Lint Check**
- Run linter
- Report warnings and errors
4. **Test Suite**
- Run all tests
- Report pass/fail count
- Report coverage percentage
5. **Console.log Audit**
- Search for console.log in source files
- Report locations
6. **Git Status**
- Show uncommitted changes
- Show files modified since last commit
## Output
Produce a concise verification report:
```
VERIFICATION: [PASS/FAIL]
Build: [OK/FAIL]
Types: [OK/X errors]
Lint: [OK/X issues]
Tests: [X/Y passed, Z% coverage]
Secrets: [OK/X found]
Logs: [OK/X console.logs]
Ready for PR: [YES/NO]
```
If any critical issues, list them with fix suggestions.
- Prefer the `verification-loop` skill directly.
- Keep this file only as a compatibility entry point.
## Arguments
$ARGUMENTS can be:
- `quick` - Only build + types
- `full` - All checks (default)
- `pre-commit` - Checks relevant for commits
- `pre-pr` - Full checks plus security scan
`$ARGUMENTS`
## Delegation
Apply the `verification-loop` skill.
- Choose the right verification depth for the user's requested mode.
- Run build, types, lint, tests, security/log checks, and diff review in the right order for the current repo.
- Report only the verdicts and blockers instead of maintaining a second verification checklist here.
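The ordering and stop-on-failure behavior described above can be sketched as follows. The check names mirror the verification list, and `run` is an injected command runner (a hypothetical stand-in for shelling out to the project's actual build/typecheck/lint/test commands), so the sketch stays decoupled from any one project's tooling:

```javascript
// Run verification checks in order. A build failure stops the run
// (matching "if it fails, report errors and STOP"); later failures
// are collected so the report stays complete.
// `run` is an injected function: (cmd) => ({ ok, detail }).
function verify(run) {
  const checks = [
    { name: 'Build', cmd: 'build', critical: true },
    { name: 'Types', cmd: 'typecheck', critical: false },
    { name: 'Lint', cmd: 'lint', critical: false },
    { name: 'Tests', cmd: 'test', critical: false },
  ];
  const report = [];
  for (const check of checks) {
    const { ok, detail } = run(check.cmd);
    report.push(`${check.name}: ${ok ? 'OK' : 'FAIL' + (detail ? ' ' + detail : '')}`);
    if (!ok && check.critical) break; // critical failure: report and stop
  }
  const pass = report.every((line) => !line.includes('FAIL'));
  return [`VERIFICATION: ${pass ? 'PASS' : 'FAIL'}`, ...report].join('\n');
}
```

A real runner would wrap `child_process.spawnSync` per command; injecting it keeps the ordering logic testable on its own.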


@@ -600,7 +600,7 @@ node tests/hooks/hooks.test.js
### Contribution Ideas
- Language-specific skills (Rust, C#, Swift, Kotlin) — Go, Python, and Java are already included
- Framework-specific configs (Rails, Laravel, FastAPI, NestJS) — Django and Spring Boot are already included
- Framework-specific configs (Rails, Laravel, FastAPI) — Django, NestJS, and Spring Boot are already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (different frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile development)


@@ -631,7 +631,7 @@ node tests/hooks/hooks.test.js
### Contribution Ideas
- Language-specific skills (Rust, C#, Swift, Kotlin) — Go, Python, and Java already included
- Framework-specific configs (Rails, Laravel, FastAPI, NestJS) — Django and Spring Boot already included
- Framework-specific configs (Rails, Laravel, FastAPI) — Django, NestJS, and Spring Boot already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (various frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile)


@@ -441,7 +441,7 @@ Lütfen katkıda bulunun! Rehber için [CONTRIBUTING.md](../../CONTRIBUTING.md)'
### Contribution Ideas
- Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included
- Framework-specific configs (Rails, FastAPI, NestJS) — Django, Spring Boot, and Laravel already included
- Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (different frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile)


@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** that ships 28 specialized agents, 116 skills, 59 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** that ships 38 specialized agents, 156 skills, 72 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -146,9 +146,9 @@
## Project Structure
```
agents/ — 28 specialized subagents
skills/ — 115 workflow skills and domain knowledge
commands/ — 59 slash commands
agents/ — 38 specialized subagents
skills/ — 156 workflow skills and domain knowledge
commands/ — 72 slash commands
hooks/ — trigger-based automation
rules/ — always-on guidelines (common + per-language)
scripts/ — cross-platform Node.js utilities


@@ -209,7 +209,7 @@ npx ecc-install typescript
/plugin list everything-claude-code@everything-claude-code
```
**Done!** You can now use 28 agents, 116 skills, and 59 commands.
**Done!** You can now use 38 agents, 156 skills, and 72 commands.
***
@@ -927,7 +927,7 @@ node tests/hooks/hooks.test.js
### Contribution Ideas
* Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included
* Framework-specific configs (Rails, FastAPI, NestJS) — Django, Spring Boot, and Laravel already included
* Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included
* DevOps agents (Kubernetes, Terraform, AWS, Docker)
* Testing strategies (different frameworks, visual regression)
* Domain-specific knowledge (ML, data engineering, mobile)
@@ -1094,9 +1094,9 @@ opencode
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | PASS: 28 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 59 | PASS: 31 | **Claude Code leads** |
| Skills | PASS: 116 | PASS: 37 | **Claude Code leads** |
| Agents | PASS: 38 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 72 | PASS: 31 | **Claude Code leads** |
| Skills | PASS: 156 | PASS: 37 | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 event types | **OpenCode has more!** |
| Rules | PASS: 29 | PASS: 13 directives | **Claude Code leads** |
| MCP servers | PASS: 14 | PASS: full | **Full parity** |
@@ -1206,9 +1206,9 @@ ECC 是**第一个最大化利用每个主要 AI 编码工具的插件**。以
| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
| **Agents** | 21 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 52 | Shared | Instruction-based | 31 |
| **Skills** | 102 | Shared | 10 (native format) | 37 |
| **Agents** | 38 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 72 | Shared | Instruction-based | 31 |
| **Skills** | 156 | Shared | 10 (native format) | 37 |
| **Hook events** | 8 types | 15 types | None yet | 11 types |
| **Hook scripts** | 20+ scripts | 16 scripts (DRY adapters) | N/A | Plugin hooks |
| **Rules** | 34 (common + per-language) | 34 (YAML frontmatter) | Instruction-based | 13 directives |


@@ -0,0 +1,81 @@
# Browser QA — Automated Visual Testing and Interaction Verification
## When to use
- After a feature is deployed to staging / preview
- When cross-page UI behavior needs verification
- Before release, to confirm layout, forms, and interactions actually work
- When reviewing PRs that touch the frontend
- When running accessibility audits and responsive testing
## How it works
Use a browser-automation MCP (claude-in-chrome, Playwright, or Puppeteer) to interact with live pages the way a real user would.
### Phase 1: Smoke tests
```
1. Open the target URL
2. Check console errors (filter out noise: analytics scripts, third-party libraries)
3. Verify no 4xx / 5xx network requests
4. Screenshot above-the-fold content at desktop and mobile viewports
5. Check Core Web Vitals (LCP < 2.5s, CLS < 0.1, INP < 200ms)
```
### Phase 2: Interaction tests
```
1. Click every navigation link and verify there are no dead links
2. Submit forms with valid data and verify the success state
3. Submit forms with invalid data and verify the error state
4. Test the auth flow: login → protected page → logout
5. Test critical user paths (checkout, onboarding, search)
```
### Phase 3: Visual regression
```
1. Screenshot key pages at 3 breakpoints (375px, 768px, 1440px)
2. Compare against baseline screenshots (if any are saved)
3. Flag layout shifts > 5px, missing elements, and content overflow
4. Check dark mode where applicable
```
### Phase 4: Accessibility
```
1. Run axe-core (or an equivalent) on every page
2. Flag WCAG AA violations (contrast, labels, focus order)
3. Verify keyboard navigation works end to end
4. Check screen-reader landmarks
```
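The phase-1 pass/fail rules can be sketched as small helpers. The Core Web Vitals thresholds come straight from the checklist above (LCP < 2.5s, CLS < 0.1, INP < 200ms); the console-noise patterns are illustrative assumptions, not a fixed ECC list:

```javascript
// Noise patterns for console-error filtering — illustrative examples only.
const NOISE_PATTERNS = [/google-analytics/i, /gtag/i, /third-party/i];

function isNoise(consoleMessage) {
  return NOISE_PATTERNS.some((re) => re.test(consoleMessage));
}

// metrics: { lcpMs, cls, inpMs } — LCP and INP in milliseconds, CLS unitless.
function checkWebVitals(metrics) {
  const failures = [];
  if (metrics.lcpMs >= 2500) failures.push(`LCP ${metrics.lcpMs}ms >= 2500ms`);
  if (metrics.cls >= 0.1) failures.push(`CLS ${metrics.cls} >= 0.1`);
  if (metrics.inpMs >= 200) failures.push(`INP ${metrics.inpMs}ms >= 200ms`);
  return { pass: failures.length === 0, failures };
}
```

In a real run, the metric values would come from the browser MCP or a Lighthouse trace; only the thresholding is shown here.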
## Examples
```markdown
## QA Report — [URL] — [timestamp]
### Smoke tests
- Console errors: 0 critical, 2 warnings (analytics noise)
- Network: all 200/304, no failed requests
- Core Web Vitals: LCP 1.2s, CLS 0.02, INP 89ms
### Interaction
- [done] Navigation links: 12/12 working
- [issue] Contact form: no error state for an invalid email
- [done] Auth flow: login / logout working
### Visual
- [issue] Hero section overflows at the 375px viewport
- [done] Dark mode: consistent across all pages
### Accessibility
- 2 AA violations: hero image missing alt text, footer link contrast too low
### Verdict: releasable after fixes (2 issues, 0 blockers)
```
## Integration
Works with any browser MCP:
- `mcp__claude-in-chrome__*` tools (recommended; drives your real Chrome directly)
- Playwright via `mcp__browserbase__*`
- Puppeteer scripts run directly
Pair with `/canary-watch` for continuous post-release monitoring.

ecc2/Cargo.lock (generated, 957 lines) — diff suppressed because it is too large


@@ -9,7 +9,7 @@ repository = "https://github.com/affaan-m/everything-claude-code"
[dependencies]
# TUI
ratatui = "0.29"
ratatui = { version = "0.30", features = ["crossterm_0_28"] }
crossterm = "0.28"
# Async runtime


@@ -24,5 +24,11 @@ module.exports = [
'no-undef': 'error',
'eqeqeq': 'warn'
}
},
{
files: ['**/*.mjs'],
languageOptions: {
sourceType: 'module'
}
}
];


@@ -35,6 +35,7 @@ User request → Claude picks a tool → PreToolUse hook runs → Tool executes
| **PR logger** | `Bash` | Logs PR URL and review command after `gh pr create` |
| **Build analysis** | `Bash` | Background analysis after build commands (async, non-blocking) |
| **Quality gate** | `Edit\|Write\|MultiEdit` | Runs fast quality checks after edits |
| **Design quality check** | `Edit\|Write\|MultiEdit` | Warns when frontend edits drift toward generic template-looking UI |
| **Prettier format** | `Edit` | Auto-formats JS/TS files with Prettier after edits |
| **TypeScript check** | `Edit` | Runs `tsc --noEmit` after editing `.ts`/`.tsx` files |
| **console.log warning** | `Edit` | Warns about `console.log` statements in edited files |


@@ -10,7 +10,8 @@
"command": "npx block-no-verify@1.1.2"
}
],
"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped"
"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped",
"id": "pre:bash:block-no-verify"
},
{
"matcher": "Bash",
@@ -20,7 +21,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/auto-tmux-dev.js\""
}
],
"description": "Auto-start dev servers in tmux with directory-based session names"
"description": "Auto-start dev servers in tmux with directory-based session names",
"id": "pre:bash:auto-tmux-dev"
},
{
"matcher": "Bash",
@@ -30,7 +32,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:tmux-reminder\" \"scripts/hooks/pre-bash-tmux-reminder.js\" \"strict\""
}
],
"description": "Reminder to use tmux for long-running commands"
"description": "Reminder to use tmux for long-running commands",
"id": "pre:bash:tmux-reminder"
},
{
"matcher": "Bash",
@@ -40,7 +43,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:git-push-reminder\" \"scripts/hooks/pre-bash-git-push-reminder.js\" \"strict\""
}
],
"description": "Reminder before git push to review changes"
"description": "Reminder before git push to review changes",
"id": "pre:bash:git-push-reminder"
},
{
"matcher": "Bash",
@@ -50,7 +54,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:bash:commit-quality\" \"scripts/hooks/pre-bash-commit-quality.js\" \"strict\""
}
],
"description": "Pre-commit quality check: lint staged files, validate commit message format, detect console.log/debugger/secrets before committing"
"description": "Pre-commit quality check: lint staged files, validate commit message format, detect console.log/debugger/secrets before committing",
"id": "pre:bash:commit-quality"
},
{
"matcher": "Write",
@@ -60,7 +65,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:write:doc-file-warning\" \"scripts/hooks/doc-file-warning.js\" \"standard,strict\""
}
],
"description": "Doc file warning: warn about non-standard documentation files (exit code 0; warns only)"
"description": "Doc file warning: warn about non-standard documentation files (exit code 0; warns only)",
"id": "pre:write:doc-file-warning"
},
{
"matcher": "Edit|Write",
@@ -70,7 +76,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:edit-write:suggest-compact\" \"scripts/hooks/suggest-compact.js\" \"standard,strict\""
}
],
"description": "Suggest manual compaction at logical intervals"
"description": "Suggest manual compaction at logical intervals",
"id": "pre:edit-write:suggest-compact"
},
{
"matcher": "*",
@@ -82,7 +89,8 @@
"timeout": 10
}
],
"description": "Capture tool use observations for continuous learning"
"description": "Capture tool use observations for continuous learning",
"id": "pre:observe:continuous-learning"
},
{
"matcher": "Bash|Write|Edit|MultiEdit",
@@ -93,7 +101,8 @@
"timeout": 15
}
],
"description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its"
"description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its",
"id": "pre:insaits:security"
},
{
"matcher": "Bash|Write|Edit|MultiEdit",
@@ -104,7 +113,8 @@
"timeout": 10
}
],
"description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1"
"description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1",
"id": "pre:governance-capture"
},
{
"matcher": "Write|Edit|MultiEdit",
@@ -115,7 +125,8 @@
"timeout": 5
}
],
"description": "Block modifications to linter/formatter config files. Steers agent to fix code instead of weakening configs."
"description": "Block modifications to linter/formatter config files. Steers agent to fix code instead of weakening configs.",
"id": "pre:config-protection"
},
{
"matcher": "*",
@@ -125,7 +136,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
}
],
"description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls"
"description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls",
"id": "pre:mcp-health-check"
}
],
"PreCompact": [
@@ -137,7 +149,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:compact\" \"scripts/hooks/pre-compact.js\" \"standard,strict\""
}
],
"description": "Save state before context compaction"
"description": "Save state before context compaction",
"id": "pre:compact"
}
],
"SessionStart": [
@@ -149,7 +162,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/session-start-bootstrap.js\""
}
],
"description": "Load previous context and detect package manager on new session"
"description": "Load previous context and detect package manager on new session",
"id": "session:start"
}
],
"PostToolUse": [
@@ -158,20 +172,22 @@
"hooks": [
{
"type": "command",
"command": "#!/bin/bash\nmkdir -p ~/.claude; INPUT=$(cat);\necho \"$INPUT\" | jq -r '\"[\" + (now | todate) + \"] \" + ((.tool_input.command // \"?\") | gsub(\"\n\"; \" \") | gsub(\"--token[= ][^ ]*\"; \"--token=<REDACTED>\") | gsub(\"Authorization:[: ]*[^ ]*[: ]*[^ ]*\"; \"Authorization:<REDACTED>\") | gsub(\"AKIA[A-Z0-9]{16}\"; \"<REDACTED>\") | gsub(\"ASIA[A-Z0-9]{16}\"; \"<REDACTED>\") | gsub(\"password[= ][^ ]*\"; \"password=<REDACTED>\") | gsub(\"ghp_[A-Za-z0-9_]+\"; \"<REDACTED>\") | gsub(\"gho_[A-Za-z0-9_]+\"; \"<REDACTED>\") | gsub(\"ghs_[A-Za-z0-9_]+\"; \"<REDACTED>\") | gsub(\"github_pat_[A-Za-z0-9_]+\"; \"<REDACTED>\"))' >> ~/.claude/bash-commands.log 2>/dev/null || true;\nprintf '%s\n' \"$INPUT\""
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/post-bash-command-log.js\" audit"
}
],
"description": "Audit log all bash commands to ~/.claude/bash-commands.log"
"description": "Audit log all bash commands to ~/.claude/bash-commands.log",
"id": "post:bash:command-log-audit"
},
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "#!/bin/bash\nmkdir -p ~/.claude; INPUT=$(cat);\necho \"$INPUT\" | jq -r '\"[\" + (now | todate) + \"] tool=Bash command=\" + ((.tool_input.command // \"?\") | gsub(\"\n\"; \" \") | gsub(\"--token[= ][^ ]*\"; \"--token=<REDACTED>\") | gsub(\"Authorization:[: ]*[^ ]*[: ]*[^ ]*\"; \"Authorization:<REDACTED>\") | gsub(\"AKIA[A-Z0-9]{16}\"; \"<REDACTED>\") | gsub(\"ASIA[A-Z0-9]{16}\"; \"<REDACTED>\") | gsub(\"password[= ][^ ]*\"; \"password=<REDACTED>\") | gsub(\"ghp_[A-Za-z0-9_]+\"; \"<REDACTED>\") | gsub(\"gho_[A-Za-z0-9_]+\"; \"<REDACTED>\") | gsub(\"ghs_[A-Za-z0-9_]+\"; \"<REDACTED>\") | gsub(\"github_pat_[A-Za-z0-9_]+\"; \"<REDACTED>\"))' >> ~/.claude/cost-tracker.log 2>/dev/null || true;\nprintf '%s\n' \"$INPUT\""
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/post-bash-command-log.js\" cost"
}
],
"description": "Cost tracker - log bash tool usage with timestamps"
"description": "Cost tracker - log bash tool usage with timestamps",
"id": "post:bash:command-log-cost"
},
{
"matcher": "Bash",
@@ -181,7 +197,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:bash:pr-created\" \"scripts/hooks/post-bash-pr-created.js\" \"standard,strict\""
}
],
"description": "Log PR URL and provide review command after PR creation"
"description": "Log PR URL and provide review command after PR creation",
"id": "post:bash:pr-created"
},
{
"matcher": "Bash",
@@ -193,7 +210,8 @@
"timeout": 30
}
],
"description": "Example: async hook for build analysis (runs in background without blocking)"
"description": "Example: async hook for build analysis (runs in background without blocking)",
"id": "post:bash:build-complete"
},
{
"matcher": "Edit|Write|MultiEdit",
@@ -205,7 +223,20 @@
"timeout": 30
}
],
"description": "Run quality gate checks after file edits"
"description": "Run quality gate checks after file edits",
"id": "post:quality-gate"
},
{
"matcher": "Edit|Write|MultiEdit",
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:design-quality-check\" \"scripts/hooks/design-quality-check.js\" \"standard,strict\"",
"timeout": 10
}
],
"description": "Warn when frontend edits drift toward generic template-looking UI",
"id": "post:edit:design-quality-check"
},
{
"matcher": "Edit|Write|MultiEdit",
@@ -215,7 +246,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:accumulate\" \"scripts/hooks/post-edit-accumulator.js\" \"standard,strict\""
}
],
"description": "Record edited JS/TS file paths for batch format+typecheck at Stop time"
"description": "Record edited JS/TS file paths for batch format+typecheck at Stop time",
"id": "post:edit:accumulator"
},
{
"matcher": "Edit",
@@ -225,7 +257,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:console-warn\" \"scripts/hooks/post-edit-console-warn.js\" \"standard,strict\""
}
],
"description": "Warn about console.log statements after edits"
"description": "Warn about console.log statements after edits",
"id": "post:edit:console-warn"
},
{
"matcher": "Bash|Write|Edit|MultiEdit",
@@ -236,7 +269,8 @@
"timeout": 10
}
],
"description": "Capture governance events from tool outputs. Enable with ECC_GOVERNANCE_CAPTURE=1"
"description": "Capture governance events from tool outputs. Enable with ECC_GOVERNANCE_CAPTURE=1",
"id": "post:governance-capture"
},
{
"matcher": "*",
@@ -248,7 +282,8 @@
"timeout": 10
}
],
"description": "Capture tool use results for continuous learning"
"description": "Capture tool use results for continuous learning",
"id": "post:observe:continuous-learning"
}
],
"PostToolUseFailure": [
@@ -260,7 +295,8 @@
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
}
],
"description": "Track failed MCP tool calls, mark unhealthy servers, and attempt reconnect"
"description": "Track failed MCP tool calls, mark unhealthy servers, and attempt reconnect",
"id": "post:mcp-health-check"
}
],
"Stop": [
@@ -269,11 +305,12 @@
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"stop:format-typecheck\" \"scripts/hooks/stop-format-typecheck.js\" \"standard,strict\"",
"command": "node -e \"const fs=require('fs');const path=require('path');const {spawnSync}=require('child_process');const raw=fs.readFileSync(0,'utf8');const rel=path.join('scripts','hooks','run-with-flags.js');const hasRunnerRoot=candidate=>{const value=typeof candidate==='string'?candidate.trim():'';return value.length>0&&fs.existsSync(path.join(path.resolve(value),rel));};const root=(()=>{const envRoot=process.env.CLAUDE_PLUGIN_ROOT||'';if(hasRunnerRoot(envRoot))return path.resolve(envRoot.trim());const home=require('os').homedir();const claudeDir=path.join(home,'.claude');if(hasRunnerRoot(claudeDir))return claudeDir;for(const candidate of [path.join(claudeDir,'plugins','everything-claude-code'),path.join(claudeDir,'plugins','everything-claude-code@everything-claude-code'),path.join(claudeDir,'plugins','marketplace','everything-claude-code')]){if(hasRunnerRoot(candidate))return candidate;}try{const cacheBase=path.join(claudeDir,'plugins','cache','everything-claude-code');for(const org of fs.readdirSync(cacheBase,{withFileTypes:true})){if(!org.isDirectory())continue;for(const version of fs.readdirSync(path.join(cacheBase,org.name),{withFileTypes:true})){if(!version.isDirectory())continue;const candidate=path.join(cacheBase,org.name,version.name);if(hasRunnerRoot(candidate))return candidate;}}}catch{}return claudeDir;})();const script=path.join(root,rel);if(fs.existsSync(script)){const result=spawnSync(process.execPath,[script,'stop:format-typecheck','scripts/hooks/stop-format-typecheck.js','standard,strict'],{input:raw,encoding:'utf8',env:process.env,cwd:process.cwd(),timeout:300000});const stdout=typeof result.stdout==='string'?result.stdout:'';if(stdout)process.stdout.write(stdout);else process.stdout.write(raw);if(result.stderr)process.stderr.write(result.stderr);if(result.error||result.status===null||result.signal){const reason=result.error?result.error.message:(result.signal?'signal '+result.signal:'missing exit status');process.stderr.write('[Stop] ERROR: 
hook runner failed: '+reason+String.fromCharCode(10));process.exit(1);}process.exit(Number.isInteger(result.status)?result.status:0);}process.stderr.write('[Stop] WARNING: could not resolve ECC plugin root; skipping hook'+String.fromCharCode(10));process.stdout.write(raw);\"",
"timeout": 300
}
],
"description": "Batch format (Biome/Prettier) and typecheck (tsc) all JS/TS files edited this response — runs once at Stop instead of after every Edit"
"description": "Batch format (Biome/Prettier) and typecheck (tsc) all JS/TS files edited this response — runs once at Stop instead of after every Edit",
"id": "stop:format-typecheck"
},
{
"matcher": "*",
@@ -283,7 +320,8 @@
"command": "node -e \"const fs=require('fs');const path=require('path');const {spawnSync}=require('child_process');const raw=fs.readFileSync(0,'utf8');const rel=path.join('scripts','hooks','run-with-flags.js');const hasRunnerRoot=candidate=>{const value=typeof candidate==='string'?candidate.trim():'';return value.length>0&&fs.existsSync(path.join(path.resolve(value),rel));};const root=(()=>{const envRoot=process.env.CLAUDE_PLUGIN_ROOT||'';if(hasRunnerRoot(envRoot))return path.resolve(envRoot.trim());const home=require('os').homedir();const claudeDir=path.join(home,'.claude');if(hasRunnerRoot(claudeDir))return claudeDir;for(const candidate of [path.join(claudeDir,'plugins','everything-claude-code'),path.join(claudeDir,'plugins','everything-claude-code@everything-claude-code'),path.join(claudeDir,'plugins','marketplace','everything-claude-code')]){if(hasRunnerRoot(candidate))return candidate;}try{const cacheBase=path.join(claudeDir,'plugins','cache','everything-claude-code');for(const org of fs.readdirSync(cacheBase,{withFileTypes:true})){if(!org.isDirectory())continue;for(const version of fs.readdirSync(path.join(cacheBase,org.name),{withFileTypes:true})){if(!version.isDirectory())continue;const candidate=path.join(cacheBase,org.name,version.name);if(hasRunnerRoot(candidate))return candidate;}}}catch{}return claudeDir;})();const script=path.join(root,rel);if(fs.existsSync(script)){const result=spawnSync(process.execPath,[script,'stop:check-console-log','scripts/hooks/check-console-log.js','standard,strict'],{input:raw,encoding:'utf8',env:process.env,cwd:process.cwd(),timeout:30000});const stdout=typeof result.stdout==='string'?result.stdout:'';if(stdout)process.stdout.write(stdout);else process.stdout.write(raw);if(result.stderr)process.stderr.write(result.stderr);if(result.error||result.status===null||result.signal){const reason=result.error?result.error.message:(result.signal?'signal '+result.signal:'missing exit status');process.stderr.write('[Stop] ERROR: hook 
runner failed: '+reason+String.fromCharCode(10));process.exit(1);}process.exit(Number.isInteger(result.status)?result.status:0);}process.stderr.write('[Stop] WARNING: could not resolve ECC plugin root; skipping hook'+String.fromCharCode(10));process.stdout.write(raw);\""
}
],
"description": "Check for console.log in modified files after each response"
"description": "Check for console.log in modified files after each response",
"id": "stop:check-console-log"
},
{
"matcher": "*",
@@ -295,7 +333,8 @@
"timeout": 10
}
],
"description": "Persist session state after each response (Stop carries transcript_path)"
"description": "Persist session state after each response (Stop carries transcript_path)",
"id": "stop:session-end"
},
{
"matcher": "*",
@@ -307,7 +346,8 @@
"timeout": 10
}
],
"description": "Evaluate session for extractable patterns"
"description": "Evaluate session for extractable patterns",
"id": "stop:evaluate-session"
},
{
"matcher": "*",
@@ -319,7 +359,8 @@
"timeout": 10
}
],
"description": "Track token and cost metrics per session"
"description": "Track token and cost metrics per session",
"id": "stop:cost-tracker"
},
{
"matcher": "*",
@@ -331,7 +372,8 @@
"timeout": 10
}
],
"description": "Send desktop notification (macOS/WSL) with task summary when Claude responds"
"description": "Send desktop notification (macOS/WSL) with task summary when Claude responds",
"id": "stop:desktop-notify"
}
],
"SessionEnd": [
@@ -345,7 +387,8 @@
"timeout": 10
}
],
"description": "Session end lifecycle marker (non-blocking)"
"description": "Session end lifecycle marker (non-blocking)",
"id": "session:end:marker"
}
]
}


@@ -140,11 +140,19 @@
{
"id": "capability:content",
"family": "capability",
"description": "Business, writing, market, and investor communication skills.",
"description": "Business, writing, market, investor communication, and reusable voice-system skills.",
"modules": [
"business-content"
]
},
{
"id": "capability:operators",
"family": "capability",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
"modules": [
"operator-workflows"
]
},
{
"id": "capability:social",
"family": "capability",
@@ -156,7 +164,7 @@
{
"id": "capability:media",
"family": "capability",
"description": "Media generation and AI-assisted editing skills.",
"description": "Media generation, technical explainers, and AI-assisted editing skills.",
"modules": [
"media-generation"
]


@@ -117,11 +117,14 @@
"skills/backend-patterns",
"skills/coding-standards",
"skills/compose-multiplatform-patterns",
"skills/csharp-testing",
"skills/cpp-coding-standards",
"skills/cpp-testing",
"skills/dart-flutter-patterns",
"skills/django-patterns",
"skills/django-tdd",
"skills/django-verification",
"skills/dotnet-patterns",
"skills/frontend-patterns",
"skills/frontend-slides",
"skills/golang-patterns",
@@ -137,6 +140,7 @@
"skills/laravel-tdd",
"skills/laravel-verification",
"skills/mcp-server-patterns",
"skills/nestjs-patterns",
"skills/perl-patterns",
"skills/perl-testing",
"skills/python-patterns",
@@ -282,10 +286,12 @@
"description": "Business, writing, market, and investor communication skills.",
"paths": [
"skills/article-writing",
"skills/brand-voice",
"skills/content-engine",
"skills/investor-materials",
"skills/investor-outreach",
"skills/lead-intelligence",
"skills/social-graph-ranker",
"skills/market-research"
],
"targets": [
@@ -303,6 +309,33 @@
"cost": "heavy",
"stability": "stable"
},
{
"id": "operator-workflows",
"kind": "skills",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
"paths": [
"skills/connections-optimizer",
"skills/customer-billing-ops",
"skills/google-workspace-ops",
"skills/jira-integration",
"skills/project-flow-ops",
"skills/workspace-surface-audit"
],
"targets": [
"claude",
"cursor",
"antigravity",
"codex",
"opencode",
"codebuddy"
],
"dependencies": [
"platform-configs"
],
"defaultInstall": false,
"cost": "medium",
"stability": "beta"
},
{
"id": "social-distribution",
"kind": "skills",
@@ -329,10 +362,12 @@
{
"id": "media-generation",
"kind": "skills",
"description": "Media generation and AI-assisted editing skills.",
"description": "Media generation, technical explainers, and AI-assisted editing skills.",
"paths": [
"skills/fal-ai-media",
"skills/manim-video",
"skills/remotion-video-creation",
"skills/ui-demo",
"skills/video-editing",
"skills/videodb"
],

View File

@@ -66,6 +66,7 @@
"security",
"research-apis",
"business-content",
"operator-workflows",
"social-distribution",
"media-generation",
"orchestration",

View File

@@ -1,5 +1,15 @@
{
"mcpServers": {
"jira": {
"command": "uvx",
"args": ["mcp-atlassian==0.21.0"],
"env": {
"JIRA_URL": "YOUR_JIRA_URL_HERE",
"JIRA_EMAIL": "YOUR_JIRA_EMAIL_HERE",
"JIRA_API_TOKEN": "YOUR_JIRA_API_TOKEN_HERE"
},
"description": "Jira issue tracking — search, create, update, comment, transition issues"
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],

package-lock.json generated
View File

@@ -20,7 +20,7 @@
},
"devDependencies": {
"@eslint/js": "^9.39.2",
"c8": "^10.1.2",
"c8": "^11.0.0",
"eslint": "^9.39.2",
"globals": "^17.1.0",
"markdownlint-cli": "^0.48.0"
@@ -278,49 +278,6 @@
"integrity": "sha512-trnsAYxU3xnS1gPHPyU961coFyLkh4gAD/0zQ5mymY4yOZ+CYvsPqUbOFSw0aDM4y0tV7tiFxL/1XfXPNC6IPg==",
"license": "ISC"
},
"node_modules/@isaacs/cliui": {
"version": "8.0.2",
"resolved": "https://registry.npmjs.org/@isaacs/cliui/-/cliui-8.0.2.tgz",
"integrity": "sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==",
"dev": true,
"license": "ISC",
"dependencies": {
"string-width": "^5.1.2",
"string-width-cjs": "npm:string-width@^4.2.0",
"strip-ansi": "^7.0.1",
"strip-ansi-cjs": "npm:strip-ansi@^6.0.1",
"wrap-ansi": "^8.1.0",
"wrap-ansi-cjs": "npm:wrap-ansi@^7.0.0"
},
"engines": {
"node": ">=12"
}
},
"node_modules/@isaacs/cliui/node_modules/emoji-regex": {
"version": "9.2.2",
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz",
"integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==",
"dev": true,
"license": "MIT"
},
"node_modules/@isaacs/cliui/node_modules/string-width": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-5.1.2.tgz",
"integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==",
"dev": true,
"license": "MIT",
"dependencies": {
"eastasianwidth": "^0.2.0",
"emoji-regex": "^9.2.2",
"strip-ansi": "^7.0.1"
},
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/@istanbuljs/schema": {
"version": "0.1.3",
"resolved": "https://registry.npmjs.org/@istanbuljs/schema/-/schema-0.1.3.tgz",
@@ -359,17 +316,6 @@
"@jridgewell/sourcemap-codec": "^1.4.14"
}
},
"node_modules/@pkgjs/parseargs": {
"version": "0.11.0",
"resolved": "https://registry.npmjs.org/@pkgjs/parseargs/-/parseargs-0.11.0.tgz",
"integrity": "sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==",
"dev": true,
"license": "MIT",
"optional": true,
"engines": {
"node": ">=14"
}
},
"node_modules/@types/debug": {
"version": "4.1.12",
"resolved": "https://registry.npmjs.org/@types/debug/-/debug-4.1.12.tgz",
@@ -516,9 +462,9 @@
}
},
"node_modules/c8": {
"version": "10.1.3",
"resolved": "https://registry.npmjs.org/c8/-/c8-10.1.3.tgz",
"integrity": "sha512-LvcyrOAaOnrrlMpW22n690PUvxiq4Uf9WMhQwNJ9vgagkL/ph1+D4uvjvDA5XCbykrc0sx+ay6pVi9YZ1GnhyA==",
"version": "11.0.0",
"resolved": "https://registry.npmjs.org/c8/-/c8-11.0.0.tgz",
"integrity": "sha512-e/uRViGHSVIJv7zsaDKM7VRn2390TgHXqUSvYwPHBQaU6L7E9L0n9JbdkwdYPvshDT0KymBmmlwSpms3yBaMNg==",
"dev": true,
"license": "ISC",
"dependencies": {
@@ -529,7 +475,7 @@
"istanbul-lib-coverage": "^3.2.0",
"istanbul-lib-report": "^3.0.1",
"istanbul-reports": "^3.1.6",
"test-exclude": "^7.0.1",
"test-exclude": "^8.0.0",
"v8-to-istanbul": "^9.0.0",
"yargs": "^17.7.2",
"yargs-parser": "^21.1.1"
@@ -538,7 +484,7 @@
"c8": "bin/c8.js"
},
"engines": {
"node": ">=18"
"node": "20 || >=22"
},
"peerDependencies": {
"monocart-coverage-reports": "^2"
@@ -811,13 +757,6 @@
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/eastasianwidth": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz",
"integrity": "sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==",
"dev": true,
"license": "MIT"
},
"node_modules/emoji-regex": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
@@ -1184,22 +1123,18 @@
}
},
"node_modules/glob": {
"version": "10.5.0",
"resolved": "https://registry.npmjs.org/glob/-/glob-10.5.0.tgz",
"integrity": "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==",
"deprecated": "Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me",
"version": "13.0.6",
"resolved": "https://registry.npmjs.org/glob/-/glob-13.0.6.tgz",
"integrity": "sha512-Wjlyrolmm8uDpm/ogGyXZXb1Z+Ca2B8NbJwqBVg0axK9GbBeoS7yGV6vjXnYdGm6X53iehEuxxbyiKp8QmN4Vw==",
"dev": true,
"license": "ISC",
"license": "BlueOak-1.0.0",
"dependencies": {
"foreground-child": "^3.1.0",
"jackspeak": "^3.1.2",
"minimatch": "^9.0.4",
"minipass": "^7.1.2",
"package-json-from-dist": "^1.0.0",
"path-scurry": "^1.11.1"
"minimatch": "^10.2.2",
"minipass": "^7.1.3",
"path-scurry": "^2.0.2"
},
"bin": {
"glob": "dist/esm/bin.mjs"
"engines": {
"node": "18 || 20 || >=22"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
@@ -1218,27 +1153,40 @@
"node": ">=10.13.0"
}
},
"node_modules/glob/node_modules/balanced-match": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-4.0.4.tgz",
"integrity": "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==",
"dev": true,
"license": "MIT",
"engines": {
"node": "18 || 20 || >=22"
}
},
"node_modules/glob/node_modules/brace-expansion": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.3.tgz",
"integrity": "sha512-MCV/fYJEbqx68aE58kv2cA/kiky1G8vux3OR6/jbS+jIMe/6fJWa0DTzJU7dqijOWYwHi1t29FlfYI9uytqlpA==",
"version": "5.0.5",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-5.0.5.tgz",
"integrity": "sha512-VZznLgtwhn+Mact9tfiwx64fA9erHH/MCXEUfB/0bX/6Fz6ny5EGTXYltMocqg4xFAQZtnO3DHWWXi8RiuN7cQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"balanced-match": "^1.0.0"
"balanced-match": "^4.0.2"
},
"engines": {
"node": "18 || 20 || >=22"
}
},
"node_modules/glob/node_modules/minimatch": {
"version": "9.0.9",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.9.tgz",
"integrity": "sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg==",
"version": "10.2.5",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.5.tgz",
"integrity": "sha512-MULkVLfKGYDFYejP07QOurDLLQpcjk7Fw+7jXS2R2czRQzR56yHRveU5NDJEOviH+hETZKSkIk5c+T23GjFUMg==",
"dev": true,
"license": "ISC",
"license": "BlueOak-1.0.0",
"dependencies": {
"brace-expansion": "^2.0.2"
"brace-expansion": "^5.0.5"
},
"engines": {
"node": ">=16 || 14 >=14.17"
"node": "18 || 20 || >=22"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
@@ -1448,22 +1396,6 @@
"node": ">=8"
}
},
"node_modules/jackspeak": {
"version": "3.4.3",
"resolved": "https://registry.npmjs.org/jackspeak/-/jackspeak-3.4.3.tgz",
"integrity": "sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==",
"dev": true,
"license": "BlueOak-1.0.0",
"dependencies": {
"@isaacs/cliui": "^8.0.2"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
},
"optionalDependencies": {
"@pkgjs/parseargs": "^0.11.0"
}
},
"node_modules/js-yaml": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
@@ -1598,11 +1530,14 @@
"license": "MIT"
},
"node_modules/lru-cache": {
"version": "10.4.3",
"resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-10.4.3.tgz",
"integrity": "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==",
"version": "11.2.7",
"resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-11.2.7.tgz",
"integrity": "sha512-aY/R+aEsRelme17KGQa/1ZSIpLpNYYrhcrepKTZgE+W3WM16YMCaPwOHLHsmopZHELU0Ojin1lPVxKR0MihncA==",
"dev": true,
"license": "ISC"
"license": "BlueOak-1.0.0",
"engines": {
"node": "20 || >=22"
}
},
"node_modules/make-dir": {
"version": "4.0.0",
@@ -2375,13 +2310,6 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/package-json-from-dist": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz",
"integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==",
"dev": true,
"license": "BlueOak-1.0.0"
},
"node_modules/parent-module": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/parent-module/-/parent-module-1.0.1.tgz",
@@ -2436,17 +2364,17 @@
}
},
"node_modules/path-scurry": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-1.11.1.tgz",
"integrity": "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==",
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-2.0.2.tgz",
"integrity": "sha512-3O/iVVsJAPsOnpwWIeD+d6z/7PmqApyQePUtCndjatj/9I5LylHvt5qluFaBT3I5h3r1ejfR056c+FCv+NnNXg==",
"dev": true,
"license": "BlueOak-1.0.0",
"dependencies": {
"lru-cache": "^10.2.0",
"minipass": "^5.0.0 || ^6.0.2 || ^7.0.0"
"lru-cache": "^11.0.0",
"minipass": "^7.1.2"
},
"engines": {
"node": ">=16 || 14 >=14.18"
"node": "18 || 20 || >=22"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
@@ -2624,45 +2552,6 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/string-width-cjs": {
"name": "string-width",
"version": "4.2.3",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
"dev": true,
"license": "MIT",
"dependencies": {
"emoji-regex": "^8.0.0",
"is-fullwidth-code-point": "^3.0.0",
"strip-ansi": "^6.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/string-width-cjs/node_modules/ansi-regex": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/string-width-cjs/node_modules/strip-ansi": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
"dev": true,
"license": "MIT",
"dependencies": {
"ansi-regex": "^5.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/strip-ansi": {
"version": "7.1.2",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.1.2.tgz",
@@ -2679,30 +2568,6 @@
"url": "https://github.com/chalk/strip-ansi?sponsor=1"
}
},
"node_modules/strip-ansi-cjs": {
"name": "strip-ansi",
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
"dev": true,
"license": "MIT",
"dependencies": {
"ansi-regex": "^5.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/strip-ansi-cjs/node_modules/ansi-regex": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/strip-json-comments": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz",
@@ -2730,18 +2595,18 @@
}
},
"node_modules/test-exclude": {
"version": "7.0.2",
"resolved": "https://registry.npmjs.org/test-exclude/-/test-exclude-7.0.2.tgz",
"integrity": "sha512-u9E6A+ZDYdp7a4WnarkXPZOx8Ilz46+kby6p1yZ8zsGTz9gYa6FIS7lj2oezzNKmtdyyJNNmmXDppga5GB7kSw==",
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/test-exclude/-/test-exclude-8.0.0.tgz",
"integrity": "sha512-ZOffsNrXYggvU1mDGHk54I96r26P8SyMjO5slMKSc7+IWmtB/MQKnEC2fP51imB3/pT6YK5cT5E8f+Dd9KdyOQ==",
"dev": true,
"license": "ISC",
"dependencies": {
"@istanbuljs/schema": "^0.1.2",
"glob": "^10.4.1",
"glob": "^13.0.6",
"minimatch": "^10.2.2"
},
"engines": {
"node": ">=18"
"node": "20 || >=22"
}
},
"node_modules/test-exclude/node_modules/balanced-match": {
@@ -2768,13 +2633,13 @@
}
},
"node_modules/test-exclude/node_modules/minimatch": {
"version": "10.2.4",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.4.tgz",
"integrity": "sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==",
"version": "10.2.5",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.5.tgz",
"integrity": "sha512-MULkVLfKGYDFYejP07QOurDLLQpcjk7Fw+7jXS2R2czRQzR56yHRveU5NDJEOviH+hETZKSkIk5c+T23GjFUMg==",
"dev": true,
"license": "BlueOak-1.0.0",
"dependencies": {
"brace-expansion": "^5.0.2"
"brace-expansion": "^5.0.5"
},
"engines": {
"node": "18 || 20 || >=22"
@@ -2870,119 +2735,6 @@
"node": ">=0.10.0"
}
},
"node_modules/wrap-ansi": {
"version": "8.1.0",
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-8.1.0.tgz",
"integrity": "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"ansi-styles": "^6.1.0",
"string-width": "^5.0.1",
"strip-ansi": "^7.0.1"
},
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/chalk/wrap-ansi?sponsor=1"
}
},
"node_modules/wrap-ansi-cjs": {
"name": "wrap-ansi",
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz",
"integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==",
"dev": true,
"license": "MIT",
"dependencies": {
"ansi-styles": "^4.0.0",
"string-width": "^4.1.0",
"strip-ansi": "^6.0.0"
},
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/chalk/wrap-ansi?sponsor=1"
}
},
"node_modules/wrap-ansi-cjs/node_modules/ansi-regex": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/wrap-ansi-cjs/node_modules/string-width": {
"version": "4.2.3",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
"dev": true,
"license": "MIT",
"dependencies": {
"emoji-regex": "^8.0.0",
"is-fullwidth-code-point": "^3.0.0",
"strip-ansi": "^6.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/wrap-ansi-cjs/node_modules/strip-ansi": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
"dev": true,
"license": "MIT",
"dependencies": {
"ansi-regex": "^5.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/wrap-ansi/node_modules/ansi-styles": {
"version": "6.2.3",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.3.tgz",
"integrity": "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/chalk/ansi-styles?sponsor=1"
}
},
"node_modules/wrap-ansi/node_modules/emoji-regex": {
"version": "9.2.2",
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz",
"integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==",
"dev": true,
"license": "MIT"
},
"node_modules/wrap-ansi/node_modules/string-width": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-5.1.2.tgz",
"integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==",
"dev": true,
"license": "MIT",
"dependencies": {
"eastasianwidth": "^0.2.0",
"emoji-regex": "^9.2.2",
"strip-ansi": "^7.0.1"
},
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/y18n": {
"version": "5.0.8",
"resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz",

View File

@@ -1,7 +1,7 @@
{
"name": "ecc-universal",
"version": "1.9.0",
"description": "Complete collection of battle-tested Claude Code configs — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use by an Anthropic hackathon winner",
"description": "Complete collection of battle-tested Claude Code configs — agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use by an Anthropic hackathon winner",
"keywords": [
"claude-code",
"ai",
@@ -80,6 +80,7 @@
"scripts/orchestrate-worktrees.js",
"scripts/setup-package-manager.js",
"scripts/skill-create-output.js",
"scripts/codex/merge-codex-config.js",
"scripts/codex/merge-mcp-config.js",
"scripts/repair.js",
"scripts/harness-audit.js",
@@ -102,13 +103,15 @@
},
"scripts": {
"postinstall": "echo '\\n ecc-universal installed!\\n Run: npx ecc typescript\\n Compat: npx ecc-install typescript\\n Docs: https://github.com/affaan-m/everything-claude-code\\n'",
"catalog:check": "node scripts/ci/catalog.js --text",
"catalog:sync": "node scripts/ci/catalog.js --write --text",
"lint": "eslint . && markdownlint '**/*.md' --ignore node_modules",
"harness:audit": "node scripts/harness-audit.js",
"claw": "node scripts/claw.js",
"orchestrate:status": "node scripts/orchestration-status.js",
"orchestrate:worker": "bash scripts/orchestrate-codex-worker.sh",
"orchestrate:tmux": "node scripts/orchestrate-worktrees.js",
"test": "node scripts/ci/check-unicode-safety.js && node scripts/ci/validate-agents.js && node scripts/ci/validate-commands.js && node scripts/ci/validate-rules.js && node scripts/ci/validate-skills.js && node scripts/ci/validate-hooks.js && node scripts/ci/validate-install-manifests.js && node scripts/ci/validate-no-personal-paths.js && node scripts/ci/catalog.js --text && node tests/run-all.js",
"test": "node scripts/ci/check-unicode-safety.js && node scripts/ci/validate-agents.js && node scripts/ci/validate-commands.js && node scripts/ci/validate-rules.js && node scripts/ci/validate-skills.js && node scripts/ci/validate-hooks.js && node scripts/ci/validate-install-manifests.js && node scripts/ci/validate-no-personal-paths.js && npm run catalog:check && node tests/run-all.js",
"coverage": "c8 --all --include=\"scripts/**/*.js\" --check-coverage --lines 80 --functions 80 --branches 80 --statements 80 --reporter=text --reporter=lcov node tests/run-all.js"
},
"dependencies": {

View File

@@ -17,6 +17,7 @@ rules/
├── typescript/ # TypeScript/JavaScript specific
├── python/ # Python specific
├── golang/ # Go specific
├── web/ # Web and frontend specific
├── swift/ # Swift specific
└── php/ # PHP specific
```
@@ -33,6 +34,7 @@ rules/
./install.sh typescript
./install.sh python
./install.sh golang
./install.sh web
./install.sh swift
./install.sh php
@@ -56,6 +58,7 @@ cp -r rules/common ~/.claude/rules/common
cp -r rules/typescript ~/.claude/rules/typescript
cp -r rules/python ~/.claude/rules/python
cp -r rules/golang ~/.claude/rules/golang
cp -r rules/web ~/.claude/rules/web
cp -r rules/swift ~/.claude/rules/swift
cp -r rules/php ~/.claude/rules/php
@@ -86,6 +89,8 @@ To add support for a new language (e.g., `rust/`):
```
4. Reference existing skills if available, or create new ones under `skills/`.
For non-language domains like `web/`, follow the same layered pattern when there is enough reusable domain-specific guidance to justify a standalone ruleset.
## Rule Priority
When language-specific rules and common rules conflict, **language-specific rules take precedence** (specific overrides general). This follows the standard layered configuration pattern (similar to CSS specificity or `.gitignore` precedence).

rules/dart/coding-style.md Normal file
View File

@@ -0,0 +1,159 @@
---
paths:
- "**/*.dart"
- "**/pubspec.yaml"
- "**/analysis_options.yaml"
---
# Dart/Flutter Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with Dart and Flutter-specific content.
## Formatting
- **dart format** for all `.dart` files — enforced in CI (`dart format --set-exit-if-changed .`)
- Line length: 80 characters (dart format default)
- Trailing commas on multi-line argument/parameter lists to improve diffs and formatting
## Immutability
- Prefer `final` for local variables and `const` for compile-time constants
- Use `const` constructors wherever all fields are `final`
- Return unmodifiable collections from public APIs (`List.unmodifiable`, `Map.unmodifiable`)
- Use `copyWith()` for state mutations in immutable state classes
```dart
// BAD
var count = 0;
List<String> items = ['a', 'b'];
// GOOD
final count = 0;
const items = ['a', 'b'];
```
## Naming
Follow Dart conventions:
- `camelCase` for variables, parameters, and named constructors
- `PascalCase` for classes, enums, typedefs, and extensions
- `snake_case` for file names and library names
- `lowerCamelCase` for constants, including top-level `const` values (Effective Dart discourages `SCREAMING_SNAKE_CASE`)
- Prefix private members with `_`
- Extension names describe the type they extend: `StringExtensions`, not `MyHelpers`
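A minimal sketch of these conventions together (all names here are hypothetical):
```dart
// file: user_repository.dart — snake_case file name
class UserRepository { // PascalCase class
  final String _cacheKey = 'users'; // _ prefix for private members

  Future<void> fetchAll({required int pageSize}) async {} // camelCase members
}

extension StringExtensions on String { // extension named after the type it extends
  String get capitalized =>
      isEmpty ? this : '${this[0].toUpperCase()}${substring(1)}';
}
```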
## Null Safety
- Avoid `!` (bang operator) — prefer `?.`, `??`, `if (x != null)`, or Dart 3 pattern matching; reserve `!` only where a null value is a programming error and crashing is the right behaviour
- Avoid `late` unless initialization is guaranteed before first use (prefer nullable or constructor init)
- Use `required` for constructor parameters that must always be provided
```dart
// BAD — crashes at runtime if user is null
final name = user!.name;
// GOOD — null-aware operators
final name = user?.name ?? 'Unknown';
// GOOD — Dart 3 pattern matching (exhaustive, compiler-checked)
final name = switch (user) {
User(:final name) => name,
null => 'Unknown',
};
// GOOD — early-return null guard
String getUserName(User? user) {
if (user == null) return 'Unknown';
return user.name; // promoted to non-null after the guard
}
```
## Sealed Types and Pattern Matching (Dart 3+)
Use sealed classes to model closed state hierarchies:
```dart
sealed class AsyncState<T> {
const AsyncState();
}
final class Loading<T> extends AsyncState<T> {
const Loading();
}
final class Success<T> extends AsyncState<T> {
const Success(this.data);
final T data;
}
final class Failure<T> extends AsyncState<T> {
const Failure(this.error);
final Object error;
}
```
Always use exhaustive `switch` with sealed types — no default/wildcard:
```dart
// BAD
if (state is Loading) { ... }
// GOOD
return switch (state) {
Loading() => const CircularProgressIndicator(),
Success(:final data) => DataWidget(data),
Failure(:final error) => ErrorWidget(error.toString()),
};
```
## Error Handling
- Specify exception types in `on` clauses — never use bare `catch (e)`
- Never catch `Error` subtypes — they indicate programming bugs
- Use `Result`-style types or sealed classes for recoverable errors
- Avoid using exceptions for control flow
```dart
// BAD
try {
await fetchUser();
} catch (e) {
log(e.toString());
}
// GOOD
try {
await fetchUser();
} on NetworkException catch (e) {
log('Network error: ${e.message}');
} on NotFoundException {
handleNotFound();
}
```
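One way to sketch a `Result`-style type with sealed classes — a hypothetical minimal version, assuming `User`, `api`, and `NetworkException` exist in your project (or use a package such as `result_dart` instead):
```dart
sealed class Result<T> {
  const Result();
}
final class Ok<T> extends Result<T> {
  const Ok(this.value);
  final T value;
}
final class Err<T> extends Result<T> {
  const Err(this.error);
  final Object error;
}

// Callers pattern-match on the result instead of catching exceptions.
Future<Result<User>> fetchUser(String id) async {
  try {
    return Ok(await api.getUser(id));
  } on NetworkException catch (e) {
    return Err(e);
  }
}
```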
## Async / Futures
- Always `await` Futures or explicitly call `unawaited()` to signal intentional fire-and-forget
- Never mark a function `async` if it never `await`s anything
- Use `Future.wait` / `Future.any` for concurrent operations
- Check `context.mounted` before using `BuildContext` after any `await` (Flutter 3.7+)
```dart
// BAD — ignoring Future
fetchData(); // fire-and-forget without marking intent
// GOOD
unawaited(fetchData()); // explicit fire-and-forget
await fetchData(); // or properly awaited
```
## Imports
- Use `package:` imports throughout — never relative imports (`../`) for cross-feature or cross-layer code
- Order: `dart:` → external `package:` → internal `package:` (same package)
- No unused imports — `dart analyze` enforces this with `unused_import`
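The ordering rule, sketched with hypothetical package names:
```dart
// 1. dart: imports
import 'dart:async';
import 'dart:convert';

// 2. external package: imports
import 'package:http/http.dart' as http;

// 3. internal package: imports (same package)
import 'package:my_app/domain/user.dart';
```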
## Code Generation
- Generated files (`.g.dart`, `.freezed.dart`, `.gr.dart`) must be committed or gitignored consistently — pick one strategy per project
- Never manually edit generated files
- Keep generator annotations (`@JsonSerializable`, `@freezed`, `@riverpod`, etc.) on the canonical source file only

rules/dart/hooks.md Normal file
View File

@@ -0,0 +1,66 @@
---
paths:
- "**/*.dart"
- "**/pubspec.yaml"
- "**/analysis_options.yaml"
---
# Dart/Flutter Hooks
> This file extends [common/hooks.md](../common/hooks.md) with Dart and Flutter-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
- **dart format**: Auto-format `.dart` files after edit
- **dart analyze**: Run static analysis after editing Dart files and surface warnings
- **flutter test**: Optionally run affected tests after significant changes
## Recommended Hook Configuration
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": { "tool_name": "Edit", "file_paths": ["**/*.dart"] },
"hooks": [
{ "type": "command", "command": "dart format $CLAUDE_FILE_PATHS" }
]
}
]
}
}
```
## Pre-commit Checks
Run before committing Dart/Flutter changes:
```bash
dart format --set-exit-if-changed .
dart analyze --fatal-infos
flutter test
```
## Useful One-liners
```bash
# Format all Dart files
dart format .
# Analyze and report issues
dart analyze
# Run all tests with coverage
flutter test --coverage
# Regenerate code-gen files
dart run build_runner build --delete-conflicting-outputs
# Check for outdated packages
flutter pub outdated
# Upgrade packages within constraints
flutter pub upgrade
```

rules/dart/patterns.md Normal file
View File

@@ -0,0 +1,261 @@
---
paths:
- "**/*.dart"
- "**/pubspec.yaml"
---
# Dart/Flutter Patterns
> This file extends [common/patterns.md](../common/patterns.md) with Dart, Flutter, and common ecosystem-specific content.
## Repository Pattern
```dart
abstract interface class UserRepository {
Future<User?> getById(String id);
Future<List<User>> getAll();
Stream<List<User>> watchAll();
Future<void> save(User user);
Future<void> delete(String id);
}
class UserRepositoryImpl implements UserRepository {
const UserRepositoryImpl(this._remote, this._local);
final UserRemoteDataSource _remote;
final UserLocalDataSource _local;
@override
Future<User?> getById(String id) async {
final local = await _local.getById(id);
if (local != null) return local;
final remote = await _remote.getById(id);
if (remote != null) await _local.save(remote);
return remote;
}
@override
Future<List<User>> getAll() async {
final remote = await _remote.getAll();
for (final user in remote) {
await _local.save(user);
}
return remote;
}
@override
Stream<List<User>> watchAll() => _local.watchAll();
@override
Future<void> save(User user) => _local.save(user);
@override
Future<void> delete(String id) async {
await _remote.delete(id);
await _local.delete(id);
}
}
```
## State Management: BLoC/Cubit
```dart
// Cubit — simple state transitions
class CounterCubit extends Cubit<int> {
CounterCubit() : super(0);
void increment() => emit(state + 1);
void decrement() => emit(state - 1);
}
// BLoC — event-driven
@immutable
sealed class CartEvent {}
class CartItemAdded extends CartEvent { CartItemAdded(this.item); final Item item; }
class CartItemRemoved extends CartEvent { CartItemRemoved(this.id); final String id; }
class CartCleared extends CartEvent {}
@immutable
class CartState {
const CartState({this.items = const []});
final List<Item> items;
CartState copyWith({List<Item>? items}) => CartState(items: items ?? this.items);
}
class CartBloc extends Bloc<CartEvent, CartState> {
CartBloc() : super(const CartState()) {
on<CartItemAdded>((event, emit) =>
emit(state.copyWith(items: [...state.items, event.item])));
on<CartItemRemoved>((event, emit) =>
emit(state.copyWith(items: state.items.where((i) => i.id != event.id).toList())));
on<CartCleared>((_, emit) => emit(const CartState()));
}
}
```
## State Management: Riverpod
```dart
// Simple provider
@riverpod
Future<List<User>> users(Ref ref) async {
final repo = ref.watch(userRepositoryProvider);
return repo.getAll();
}
// Notifier for mutable state
@riverpod
class CartNotifier extends _$CartNotifier {
@override
List<Item> build() => [];
void add(Item item) => state = [...state, item];
void remove(String id) => state = state.where((i) => i.id != id).toList();
void clear() => state = [];
}
// ConsumerWidget
class CartPage extends ConsumerWidget {
const CartPage({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
final items = ref.watch(cartNotifierProvider);
return ListView(
children: items.map((item) => CartItemTile(item: item)).toList(),
);
}
}
```
## Dependency Injection
Constructor injection is preferred. Use `get_it` or Riverpod providers at composition root:
```dart
// get_it registration (in a setup file)
void setupDependencies() {
final di = GetIt.instance;
di.registerSingleton<ApiClient>(ApiClient(baseUrl: Env.apiUrl));
di.registerSingleton<UserRepository>(
UserRepositoryImpl(di<ApiClient>(), di<LocalDatabase>()),
);
di.registerFactory(() => UserListViewModel(di<UserRepository>()));
}
```
## ViewModel Pattern (without BLoC/Riverpod)
```dart
class UserListViewModel extends ChangeNotifier {
UserListViewModel(this._repository);
final UserRepository _repository;
AsyncState<List<User>> _state = const Loading();
AsyncState<List<User>> get state => _state;
Future<void> load() async {
_state = const Loading();
notifyListeners();
try {
final users = await _repository.getAll();
_state = Success(users);
} on Exception catch (e) {
_state = Failure(e);
}
notifyListeners();
}
}
```
## UseCase Pattern
```dart
class GetUserUseCase {
const GetUserUseCase(this._repository);
final UserRepository _repository;
Future<User?> call(String id) => _repository.getById(id);
}
class CreateUserUseCase {
const CreateUserUseCase(this._repository, this._idGenerator);
final UserRepository _repository;
final IdGenerator _idGenerator; // injected — domain layer must not depend on uuid package directly
Future<void> call(CreateUserInput input) async {
// Validate, apply business rules, then persist
final user = User(id: _idGenerator.generate(), name: input.name, email: input.email);
await _repository.save(user);
}
}
```
## Immutable State with freezed
```dart
@freezed
class UserState with _$UserState {
const factory UserState({
@Default([]) List<User> users,
@Default(false) bool isLoading,
String? errorMessage,
}) = _UserState;
}
```
## Clean Architecture Layer Boundaries
```
lib/
├── domain/ # Pure Dart — no Flutter, no external packages
│ ├── entities/
│ ├── repositories/ # Abstract interfaces
│ └── usecases/
├── data/ # Implements domain interfaces
│ ├── datasources/
│ ├── models/ # DTOs with fromJson/toJson
│ └── repositories/
└── presentation/ # Flutter widgets + state management
├── pages/
├── widgets/
└── providers/ (or blocs/ or viewmodels/)
```
- Domain must not import `package:flutter` or any data-layer package
- Data layer maps DTOs to domain entities at repository boundaries
- Presentation calls use cases, not repositories directly
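A sketch of the DTO-mapping rule (the data source and DTO fields here are illustrative, not from this codebase):

```dart
// Data layer: the repository hides DTOs from the domain by mapping at the boundary.
class UserRepositoryImpl implements UserRepository {
  UserRepositoryImpl(this._remote);

  final UserRemoteDataSource _remote; // hypothetical data source

  @override
  Future<User?> getById(String id) async {
    final dto = await _remote.fetchUser(id); // UserDto with fromJson/toJson
    if (dto == null) return null;
    return User(id: dto.id, name: dto.name, email: dto.email); // map DTO -> entity
  }
}
```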
## Navigation (GoRouter)
```dart
final router = GoRouter(
  routes: [
    GoRoute(
      path: '/',
      builder: (context, state) => const HomePage(),
    ),
    GoRoute(
      path: '/users/:id',
      builder: (context, state) {
        final id = state.pathParameters['id']!;
        return UserDetailPage(userId: id);
      },
    ),
  ],
  // refreshListenable re-evaluates redirect whenever auth state changes
  refreshListenable: GoRouterRefreshStream(authCubit.stream),
  redirect: (context, state) {
    final isLoggedIn = context.read<AuthCubit>().state is AuthAuthenticated;
    if (!isLoggedIn && !state.matchedLocation.startsWith('/login')) {
      return '/login';
    }
    return null;
  },
);
```
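`GoRouterRefreshStream` is a thin adapter from a stream to the `Listenable` that `refreshListenable` expects; a sketch matching the helper that earlier go_router versions shipped:

```dart
import 'dart:async';

import 'package:flutter/foundation.dart';

class GoRouterRefreshStream extends ChangeNotifier {
  GoRouterRefreshStream(Stream<dynamic> stream) {
    notifyListeners();
    _subscription = stream.asBroadcastStream().listen((_) => notifyListeners());
  }

  late final StreamSubscription<dynamic> _subscription;

  @override
  void dispose() {
    _subscription.cancel();
    super.dispose();
  }
}
```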
## References
See skill: `flutter-dart-code-review` for the comprehensive review checklist.
See skill: `compose-multiplatform-patterns` for Kotlin Multiplatform/Flutter interop patterns.

rules/dart/security.md Normal file

@@ -0,0 +1,135 @@
---
paths:
- "**/*.dart"
- "**/pubspec.yaml"
- "**/AndroidManifest.xml"
- "**/Info.plist"
---
# Dart/Flutter Security
> This file extends [common/security.md](../common/security.md) with Dart, Flutter, and mobile-specific content.
## Secrets Management
- Never hardcode API keys, tokens, or credentials in Dart source
- Use `--dart-define` or `--dart-define-from-file` for compile-time config (values are not truly secret — use a backend proxy for server-side secrets)
- Use `flutter_dotenv` or equivalent, with `.env` files listed in `.gitignore`
- Store runtime secrets in platform-secure storage: `flutter_secure_storage` (Keychain on iOS, EncryptedSharedPreferences on Android)
```dart
// BAD
const apiKey = 'sk-abc123...';
// GOOD — compile-time config (not secret, just configurable)
const apiKey = String.fromEnvironment('API_KEY');
// GOOD — runtime secret from secure storage
final token = await secureStorage.read(key: 'auth_token');
```
## Network Security
- Enforce HTTPS — no `http://` calls in production
- Configure Android `network_security_config.xml` to block cleartext traffic
- Set `NSAppTransportSecurity` in `Info.plist` to disallow arbitrary loads
- Set request timeouts on all HTTP clients — never leave defaults
- Consider certificate pinning for high-security endpoints
```dart
// Dio with timeout and HTTPS enforcement
final dio = Dio(BaseOptions(
  baseUrl: 'https://api.example.com',
  connectTimeout: const Duration(seconds: 10),
  receiveTimeout: const Duration(seconds: 30),
));
```
## Input Validation
- Validate and sanitize all user input before sending to API or storage
- Never pass unsanitized input to SQL queries — use parameterized queries (sqflite, drift)
- Sanitize deep link URLs before navigation — validate scheme, host, and path parameters
- Use `Uri.tryParse` and validate before navigating
```dart
// BAD — SQL injection
await db.rawQuery("SELECT * FROM users WHERE email = '$userInput'");
// GOOD — parameterized
await db.query('users', where: 'email = ?', whereArgs: [userInput]);
// BAD — unvalidated deep link
final uri = Uri.parse(incomingLink);
context.go(uri.path); // could navigate to any route
// GOOD — validated deep link
final uri = Uri.tryParse(incomingLink);
if (uri != null && uri.host == 'myapp.com' && _allowedPaths.contains(uri.path)) {
  context.go(uri.path);
}
```
## Data Protection
- Store tokens, PII, and credentials only in `flutter_secure_storage`
- Never write sensitive data to `SharedPreferences` or local files in plaintext
- Clear auth state on logout: tokens, cached user data, cookies
- Use biometric authentication (`local_auth`) for sensitive operations
- Avoid logging sensitive data — no `print(token)` or `debugPrint(password)`
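A hedged sketch of the logout rule, assuming a `flutter_secure_storage` instance and an app-level cache object (`UserCache` is hypothetical):

```dart
Future<void> clearAuthState(FlutterSecureStorage storage, UserCache cache) async {
  await storage.delete(key: 'auth_token');    // tokens
  await storage.delete(key: 'refresh_token');
  await cache.clear();                        // cached user data
  // Also clear WebView/session cookies if the app uses them.
}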
## Android-Specific
- Declare only required permissions in `AndroidManifest.xml`
- Export Android components (`Activity`, `Service`, `BroadcastReceiver`) only when necessary; add `android:exported="false"` where not needed
- Review intent filters — exported components with implicit intent filters are accessible by any app
- Use `FLAG_SECURE` for screens displaying sensitive data (prevents screenshots)
```xml
<!-- AndroidManifest.xml — restrict exported components -->
<activity android:name=".MainActivity" android:exported="true">
  <!-- Only the launcher activity needs exported=true -->
</activity>
<activity android:name=".SensitiveActivity" android:exported="false" />
```
## iOS-Specific
- Declare only required usage descriptions in `Info.plist` (`NSCameraUsageDescription`, etc.)
- Store secrets in Keychain — `flutter_secure_storage` uses Keychain on iOS
- Use App Transport Security (ATS) — disallow arbitrary loads
- Enable data protection entitlement for sensitive files
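For the ATS rule, the corresponding `Info.plist` fragment keeps `NSAllowsArbitraryLoads` absent or `false` (key names are the standard Apple ones):

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSAllowsArbitraryLoads</key>
  <false/>
</dict>
```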
## WebView Security
- Use `webview_flutter` v4+ (`WebViewController` / `WebViewWidget`) — the legacy `WebView` widget was removed in v4
- Disable JavaScript unless explicitly required (`JavaScriptMode.disabled`)
- Validate URLs before loading — never load arbitrary URLs from deep links
- Never expose Dart callbacks to JavaScript unless absolutely needed and carefully sandboxed
- Use `NavigationDelegate.onNavigationRequest` to intercept and validate navigation requests
```dart
// webview_flutter v4+ API (WebViewController + WebViewWidget)
final controller = WebViewController()
  ..setJavaScriptMode(JavaScriptMode.disabled) // disabled unless required
  ..setNavigationDelegate(
    NavigationDelegate(
      onNavigationRequest: (request) {
        final uri = Uri.tryParse(request.url);
        if (uri == null || uri.host != 'trusted.example.com') {
          return NavigationDecision.prevent;
        }
        return NavigationDecision.navigate;
      },
    ),
  );

// In your widget tree:
WebViewWidget(controller: controller)
```
## Obfuscation and Build Security
- Enable obfuscation in release builds: `flutter build apk --obfuscate --split-debug-info=./debug-info/`
- Keep `--split-debug-info` output out of version control (used for crash symbolication only)
- Ensure ProGuard/R8 rules don't inadvertently expose serialized classes
- Run `flutter analyze` and address all warnings before release

rules/dart/testing.md Normal file

@@ -0,0 +1,215 @@
---
paths:
- "**/*.dart"
- "**/pubspec.yaml"
- "**/analysis_options.yaml"
---
# Dart/Flutter Testing
> This file extends [common/testing.md](../common/testing.md) with Dart and Flutter-specific content.
## Test Framework
- **flutter_test** / **package:test** — built-in test runners
- **mockito** (with `@GenerateMocks`) or **mocktail** (no codegen) for mocking
- **bloc_test** for BLoC/Cubit unit tests
- **fake_async** for controlling time in unit tests
- **integration_test** for end-to-end device tests
## Test Types
| Type | Tool | Location | When to Write |
|------|------|----------|---------------|
| Unit | `package:test` | `test/unit/` | All domain logic, state managers, repositories |
| Widget | `flutter_test` | `test/widget/` | All widgets with meaningful behavior |
| Golden | `flutter_test` | `test/golden/` | Design-critical UI components |
| Integration | `integration_test` | `integration_test/` | Critical user flows on real device/emulator |
## Unit Tests: State Managers
### BLoC with `bloc_test`
```dart
group('CartBloc', () {
  late CartBloc bloc;
  late MockCartRepository repository;

  setUp(() {
    repository = MockCartRepository();
    bloc = CartBloc(repository);
  });

  tearDown(() => bloc.close());

  blocTest<CartBloc, CartState>(
    'emits updated items when CartItemAdded',
    build: () => bloc,
    act: (b) => b.add(CartItemAdded(testItem)),
    expect: () => [CartState(items: [testItem])],
  );

  blocTest<CartBloc, CartState>(
    'emits empty cart when CartCleared',
    seed: () => CartState(items: [testItem]),
    build: () => bloc,
    act: (b) => b.add(CartCleared()),
    expect: () => [const CartState()],
  );
});
```
### Riverpod with `ProviderContainer`
```dart
test('usersProvider loads users from repository', () async {
  final container = ProviderContainer(
    overrides: [userRepositoryProvider.overrideWithValue(FakeUserRepository())],
  );
  addTearDown(container.dispose);

  final result = await container.read(usersProvider.future);

  expect(result, isNotEmpty);
});
```
## Widget Tests
```dart
testWidgets('CartPage shows item count badge', (tester) async {
  await tester.pumpWidget(
    ProviderScope(
      overrides: [
        cartNotifierProvider.overrideWith(() => FakeCartNotifier([testItem])),
      ],
      child: const MaterialApp(home: CartPage()),
    ),
  );
  await tester.pump();

  expect(find.text('1'), findsOneWidget);
  expect(find.byType(CartItemTile), findsOneWidget);
});

testWidgets('shows empty state when cart is empty', (tester) async {
  await tester.pumpWidget(
    ProviderScope(
      overrides: [cartNotifierProvider.overrideWith(() => FakeCartNotifier([]))],
      child: const MaterialApp(home: CartPage()),
    ),
  );
  await tester.pump();

  expect(find.text('Your cart is empty'), findsOneWidget);
});
```
## Fakes Over Mocks
Prefer hand-written fakes for complex dependencies:
```dart
class FakeUserRepository implements UserRepository {
  final _users = <String, User>{};
  Object? fetchError;

  @override
  Future<User?> getById(String id) async {
    if (fetchError != null) throw fetchError!;
    return _users[id];
  }

  @override
  Future<List<User>> getAll() async {
    if (fetchError != null) throw fetchError!;
    return _users.values.toList();
  }

  @override
  Stream<List<User>> watchAll() => Stream.value(_users.values.toList());

  @override
  Future<void> save(User user) async {
    _users[user.id] = user;
  }

  @override
  Future<void> delete(String id) async {
    _users.remove(id);
  }

  void addUser(User user) => _users[user.id] = user;
}
```
## Async Testing
```dart
// Use fake_async for controlling timers and Futures
test('debounce triggers after 300ms', () {
  fakeAsync((async) {
    final debouncer = Debouncer(delay: const Duration(milliseconds: 300));
    var callCount = 0;

    debouncer.run(() => callCount++);
    expect(callCount, 0);

    async.elapse(const Duration(milliseconds: 200));
    expect(callCount, 0);

    async.elapse(const Duration(milliseconds: 200));
    expect(callCount, 1);
  });
});
```
## Golden Tests
```dart
testWidgets('UserCard golden test', (tester) async {
  await tester.pumpWidget(
    MaterialApp(home: UserCard(user: testUser)),
  );
  await expectLater(
    find.byType(UserCard),
    matchesGoldenFile('goldens/user_card.png'),
  );
});
```
Run `flutter test --update-goldens` when intentional visual changes are made.
## Test Naming
Use descriptive, behavior-focused names:
```dart
test('returns null when user does not exist', () { ... });
test('throws NotFoundException when id is empty string', () { ... });
testWidgets('disables submit button while form is invalid', (tester) async { ... });
```
## Test Organization
```
test/
├── unit/
│   ├── domain/
│   │   └── usecases/
│   └── data/
│       └── repositories/
├── widget/
│   └── presentation/
│       └── pages/
└── golden/
    └── widgets/
integration_test/
└── flows/
    ├── login_flow_test.dart
    └── checkout_flow_test.dart
```
## Coverage
- Target 80%+ line coverage for business logic (domain + state managers)
- All state transitions must have tests: loading → success, loading → error, retry
- Run `flutter test --coverage` and inspect `lcov.info` with a coverage reporter
- Coverage failures should block CI when below threshold

rules/web/coding-style.md Normal file

@@ -0,0 +1,96 @@
> This file extends [common/coding-style.md](../common/coding-style.md) with web-specific frontend content.
# Web Coding Style
## File Organization
Organize by feature or surface area, not by file type:
```text
src/
├── components/
│   ├── hero/
│   │   ├── Hero.tsx
│   │   ├── HeroVisual.tsx
│   │   └── hero.css
│   ├── scrolly-section/
│   │   ├── ScrollySection.tsx
│   │   ├── StickyVisual.tsx
│   │   └── scrolly.css
│   └── ui/
│       ├── Button.tsx
│       ├── SurfaceCard.tsx
│       └── AnimatedText.tsx
├── hooks/
│   ├── useReducedMotion.ts
│   └── useScrollProgress.ts
├── lib/
│   ├── animation.ts
│   └── color.ts
└── styles/
    ├── tokens.css
    ├── typography.css
    └── global.css
```
## CSS Custom Properties
Define design tokens as custom properties. Do not scatter hardcoded palette, typography, or spacing values through component styles:
```css
:root {
  --color-surface: oklch(98% 0 0);
  --color-text: oklch(18% 0 0);
  --color-accent: oklch(68% 0.21 250);

  --text-base: clamp(1rem, 0.92rem + 0.4vw, 1.125rem);
  --text-hero: clamp(3rem, 1rem + 7vw, 8rem);

  --space-section: clamp(4rem, 3rem + 5vw, 10rem);

  --duration-fast: 150ms;
  --duration-normal: 300ms;
  --ease-out-expo: cubic-bezier(0.16, 1, 0.3, 1);
}
```
## Animation-Only Properties
Prefer compositor-friendly motion:
- `transform`
- `opacity`
- `clip-path`
- `filter` (sparingly)
Avoid animating layout-bound properties:
- `width`
- `height`
- `top`
- `left`
- `margin`
- `padding`
- `border`
- `font-size`
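For example, move an element with `transform` rather than animating `left`, which forces layout on every frame:

```css
/* BAD: animating a layout property triggers reflow every frame */
.card { transition: left 300ms ease; }
.card:hover { left: 16px; }

/* GOOD: transform runs on the compositor */
.card { transition: transform 300ms ease; }
.card:hover { transform: translateX(16px); }
```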
## Semantic HTML First
```html
<header>
  <nav aria-label="Main navigation">...</nav>
</header>
<main>
  <section aria-labelledby="hero-heading">
    <h1 id="hero-heading">...</h1>
  </section>
</main>
<footer>...</footer>
```
Do not reach for generic wrapper `div` stacks when a semantic element exists.
## Naming
- Components: PascalCase (`ScrollySection`, `SurfaceCard`)
- Hooks: `use` prefix (`useReducedMotion`)
- CSS classes: kebab-case or utility classes
- Animation timelines: camelCase with intent (`heroRevealTl`)


@@ -0,0 +1,63 @@
> This file extends [common/patterns.md](../common/patterns.md) with web-specific design-quality guidance.
# Web Design Quality Standards
## Anti-Template Policy
Do not ship generic template-looking UI. Frontend output should look intentional, opinionated, and specific to the product.
### Banned Patterns
- Default card grids with uniform spacing and no hierarchy
- Stock hero section with centered headline, gradient blob, and generic CTA
- Unmodified library defaults passed off as finished design
- Flat layouts with no layering, depth, or motion
- Uniform radius, spacing, and shadows across every component
- Safe gray-on-white styling with one decorative accent color
- Dashboard-by-numbers layouts with sidebar + cards + charts and no point of view
- Default font stacks used without a deliberate reason
### Required Qualities
Every meaningful frontend surface should demonstrate at least four of these:
1. Clear hierarchy through scale contrast
2. Intentional rhythm in spacing, not uniform padding everywhere
3. Depth or layering through overlap, shadows, surfaces, or motion
4. Typography with character and a real pairing strategy
5. Color used semantically, not just decoratively
6. Hover, focus, and active states that feel designed
7. Grid-breaking editorial or bento composition where appropriate
8. Texture, grain, or atmosphere when it fits the visual direction
9. Motion that clarifies flow instead of distracting from it
10. Data visualization treated as part of the design system, not an afterthought
## Before Writing Frontend Code
1. Pick a specific style direction. Avoid vague defaults like "clean minimal".
2. Define a palette intentionally.
3. Choose typography deliberately.
4. Gather at least a small set of real references.
5. Use ECC design/frontend skills where relevant.
## Worthwhile Style Directions
- Editorial / magazine
- Neo-brutalism
- Glassmorphism with real depth
- Dark luxury or light luxury with disciplined contrast
- Bento layouts
- Scrollytelling
- 3D integration
- Swiss / International
- Retro-futurism
Do not default to dark mode automatically. Choose the visual direction the product actually wants.
## Component Checklist
- [ ] Does it avoid looking like a default Tailwind or shadcn template?
- [ ] Does it have intentional hover/focus/active states?
- [ ] Does it use hierarchy rather than uniform emphasis?
- [ ] Would this look believable in a real product screenshot?
- [ ] If it supports both themes, do both light and dark feel intentional?

rules/web/hooks.md Normal file

@@ -0,0 +1,120 @@
> This file extends [common/hooks.md](../common/hooks.md) with web-specific hook recommendations.
# Web Hooks
## Recommended PostToolUse Hooks
Prefer project-local tooling. Do not wire hooks to one-off remote package execution that pulls code at run time.
### Format on Save
Use the project's existing formatter entrypoint after edits:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm prettier --write \"$FILE_PATH\"",
        "description": "Format edited frontend files"
      }
    ]
  }
}
```
Equivalent local commands via `yarn prettier` or `npm exec prettier --` are fine when they use repo-owned dependencies.
### Lint Check
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm eslint --fix \"$FILE_PATH\"",
        "description": "Run ESLint on edited frontend files"
      }
    ]
  }
}
```
### Type Check
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm tsc --noEmit --pretty false",
        "description": "Type-check after frontend edits"
      }
    ]
  }
}
```
### CSS Lint
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm stylelint --fix \"$FILE_PATH\"",
        "description": "Lint edited stylesheets"
      }
    ]
  }
}
```
## PreToolUse Hooks
### Guard File Size
Block oversized writes from tool input content, not from a file that may not exist yet:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write",
        "command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{const i=JSON.parse(d);const c=i.tool_input?.content||'';const lines=c.split('\\n').length;if(lines>800){console.error('[Hook] BLOCKED: File exceeds 800 lines ('+lines+' lines)');console.error('[Hook] Split into smaller modules');process.exit(2)}console.log(d)})\"",
        "description": "Block writes that exceed 800 lines"
      }
    ]
  }
}
```
## Stop Hooks
### Final Build Verification
```json
{
  "hooks": {
    "Stop": [
      {
        "command": "pnpm build",
        "description": "Verify the production build at session end"
      }
    ]
  }
}
```
## Ordering
Recommended order:
1. format
2. lint
3. type check
4. build verification

rules/web/patterns.md Normal file

@@ -0,0 +1,79 @@
> This file extends [common/patterns.md](../common/patterns.md) with web-specific patterns.
# Web Patterns
## Component Composition
### Compound Components
Use compound components when related UI shares state and interaction semantics:
```tsx
<Tabs defaultValue="overview">
  <Tabs.List>
    <Tabs.Trigger value="overview">Overview</Tabs.Trigger>
    <Tabs.Trigger value="settings">Settings</Tabs.Trigger>
  </Tabs.List>
  <Tabs.Content value="overview">...</Tabs.Content>
  <Tabs.Content value="settings">...</Tabs.Content>
</Tabs>
```
- Parent owns state
- Children consume via context
- Prefer this over prop drilling for complex widgets
### Render Props / Slots
- Use render props or slot patterns when behavior is shared but markup must vary
- Keep keyboard handling, ARIA, and focus logic in the headless layer
### Container / Presentational Split
- Container components own data loading and side effects
- Presentational components receive props and render UI
- Presentational components should stay pure
## State Management
Treat these separately:
| Concern | Tooling |
|---------|---------|
| Server state | TanStack Query, SWR, tRPC |
| Client state | Zustand, Jotai, signals |
| URL state | search params, route segments |
| Form state | React Hook Form or equivalent |
- Do not duplicate server state into client stores
- Derive values instead of storing redundant computed state
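A minimal illustration of deriving instead of storing: compute the total from cart items on read rather than keeping a second `total` field that can drift out of sync.

```js
// Derive the total on read; storing it separately invites stale copies.
const cart = {
  items: [
    { price: 10, qty: 2 },
    { price: 5, qty: 1 },
  ],
};

const total = cart.items.reduce((sum, item) => sum + item.price * item.qty, 0);
console.log(total); // → 25
```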
## URL As State
Persist shareable state in the URL:
- filters
- sort order
- pagination
- active tab
- search query
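A hypothetical helper sketching this rule: serialize the shareable pieces of UI state into a query string with `URLSearchParams`, omitting defaults so clean URLs stay clean.

```js
// Hypothetical helper: serialize shareable UI state into a query string.
function toSearchParams(state) {
  const params = new URLSearchParams();
  if (state.query) params.set('q', state.query);
  if (state.sort) params.set('sort', state.sort);
  if (state.page && state.page > 1) params.set('page', String(state.page)); // omit default page
  return params.toString();
}

console.log(toSearchParams({ query: 'shoes', sort: 'price', page: 2 }));
// → q=shoes&sort=price&page=2
```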
## Data Fetching
### Stale-While-Revalidate
- Return cached data immediately
- Revalidate in the background
- Prefer existing libraries instead of rolling this by hand
### Optimistic Updates
- Snapshot current state
- Apply optimistic update
- Roll back on failure
- Emit visible error feedback when rolling back
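The four steps above can be sketched library-free; `store` here is a plain object standing in for whatever state manager the app uses.

```js
// Sketch of the snapshot/rollback flow, independent of any specific library.
async function optimisticUpdate(store, apply, commit) {
  const snapshot = store.value;      // 1. snapshot current state
  store.value = apply(store.value);  // 2. apply the optimistic update
  try {
    await commit();                  // 3. persist on the server
  } catch (err) {
    store.value = snapshot;          // 4. roll back on failure...
    store.error = 'Update failed';   //    ...and emit visible feedback
  }
}
```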
### Parallel Loading
- Fetch independent data in parallel
- Avoid parent-child request waterfalls
- Prefetch likely next routes or states when justified
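A sketch of the parallel rule: sequential `await`s would serialize independent requests into a waterfall, while `Promise.all` overlaps them (the loader functions are placeholders).

```js
// Fetch independent resources in parallel rather than awaiting each in turn.
async function loadDashboard(fetchUser, fetchPosts) {
  const [user, posts] = await Promise.all([fetchUser(), fetchPosts()]);
  return { user, posts };
}
```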

rules/web/performance.md Normal file

@@ -0,0 +1,64 @@
> This file extends [common/performance.md](../common/performance.md) with web-specific performance content.
# Web Performance Rules
## Core Web Vitals Targets
| Metric | Target |
|--------|--------|
| LCP | < 2.5s |
| INP | < 200ms |
| CLS | < 0.1 |
| FCP | < 1.5s |
| TBT | < 200ms |
## Bundle Budget
| Page Type | JS Budget (gzipped) | CSS Budget |
|-----------|---------------------|------------|
| Landing page | < 150kb | < 30kb |
| App page | < 300kb | < 50kb |
| Microsite | < 80kb | < 15kb |
## Loading Strategy
1. Inline critical above-the-fold CSS where justified
2. Preload the hero image and primary font only
3. Defer non-critical CSS or JS
4. Dynamically import heavy libraries
```js
const gsapModule = await import('gsap');
const { ScrollTrigger } = await import('gsap/ScrollTrigger');
```
## Image Optimization
- Explicit `width` and `height`
- `loading="eager"` plus `fetchpriority="high"` for hero media only
- `loading="lazy"` for below-the-fold assets
- Prefer AVIF or WebP with fallbacks
- Never ship source images far beyond rendered size
## Font Loading
- Max two font families unless there is a clear exception
- `font-display: swap`
- Subset where possible
- Preload only the truly critical weight/style
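A typical `@font-face` pairing for these rules (the font name and path are placeholders):

```css
@font-face {
  font-family: 'Display';
  src: url('/fonts/display-subset.woff2') format('woff2');
  font-weight: 600;
  font-display: swap; /* show fallback text immediately, swap when loaded */
}
```

The matching preload goes in the document head: `<link rel="preload" href="/fonts/display-subset.woff2" as="font" type="font/woff2" crossorigin>`.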
## Animation Performance
- Animate compositor-friendly properties only
- Use `will-change` narrowly and remove it when done
- Prefer CSS for simple transitions
- Use `requestAnimationFrame` or established animation libraries for JS motion
- Avoid scroll handler churn; use IntersectionObserver or well-behaved libraries
## Performance Checklist
- [ ] All images have explicit dimensions
- [ ] No accidental render-blocking resources
- [ ] No layout shifts from dynamic content
- [ ] Motion stays on compositor-friendly properties
- [ ] Third-party scripts load async/defer and only when needed

rules/web/security.md Normal file

@@ -0,0 +1,57 @@
> This file extends [common/security.md](../common/security.md) with web-specific security content.
# Web Security Rules
## Content Security Policy
Always configure a production CSP.
### Nonce-Based CSP
Use a per-request nonce for scripts instead of `'unsafe-inline'`.
```text
Content-Security-Policy:
  default-src 'self';
  script-src 'self' 'nonce-{RANDOM}' https://cdn.jsdelivr.net;
  style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
  img-src 'self' data: https:;
  font-src 'self' https://fonts.gstatic.com;
  connect-src 'self' https://*.example.com;
  frame-src 'none';
  object-src 'none';
  base-uri 'self';
```
Adjust origins to the project. Do not cargo-cult this block unchanged.
## XSS Prevention
- Never inject unsanitized HTML
- Avoid `innerHTML` / `dangerouslySetInnerHTML` unless sanitized first
- Escape dynamic template values
- Sanitize user HTML with a vetted local sanitizer when absolutely necessary
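A minimal escaper for interpolating untrusted text into HTML, as a sketch only; prefer your framework's built-in escaping or a vetted sanitizer for user-supplied HTML.

```js
// Escape the five HTML-significant characters; '&' must be replaced first.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// → &lt;img src=x onerror=alert(1)&gt;
```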
## Third-Party Scripts
- Load asynchronously
- Use SRI when serving from a CDN
- Audit quarterly
- Prefer self-hosting for critical dependencies when practical
## HTTPS and Headers
```text
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: camera=(), microphone=(), geolocation=()
```
## Forms
- CSRF protection on state-changing forms
- Rate limiting on submission endpoints
- Validate client and server side
- Prefer honeypots or light anti-abuse controls over heavy-handed CAPTCHA defaults

rules/web/testing.md Normal file

@@ -0,0 +1,55 @@
> This file extends [common/testing.md](../common/testing.md) with web-specific testing content.
# Web Testing Rules
## Priority Order
### 1. Visual Regression
- Screenshot key breakpoints: 320, 768, 1024, 1440
- Test hero sections, scrollytelling sections, and meaningful states
- Use Playwright screenshots for visual-heavy work
- If both themes exist, test both
### 2. Accessibility
- Run automated accessibility checks
- Test keyboard navigation
- Verify reduced-motion behavior
- Verify color contrast
### 3. Performance
- Run Lighthouse or equivalent against meaningful pages
- Keep CWV targets from [performance.md](performance.md)
### 4. Cross-Browser
- Minimum: Chrome, Firefox, Safari
- Test scrolling, motion, and fallback behavior
### 5. Responsive
- Test 320, 375, 768, 1024, 1440, 1920
- Verify no overflow
- Verify touch interactions
## E2E Shape
```ts
import { test, expect } from '@playwright/test';

test('landing hero loads', async ({ page }) => {
  await page.goto('/');
  await expect(page.locator('h1')).toBeVisible();
});
```
- Avoid flaky timeout-based assertions
- Prefer deterministic waits
## Unit Tests
- Test utilities, data transforms, and custom hooks
- For highly visual components, visual regression often carries more signal than brittle markup assertions
- Visual regression supplements coverage targets; it does not replace them


@@ -51,6 +51,7 @@
"cursor",
"antigravity",
"codex",
"gemini",
"opencode",
"codebuddy"
]


@@ -1,12 +1,13 @@
#!/usr/bin/env node
/**
* Verify repo catalog counts against README.md and AGENTS.md.
* Verify repo catalog counts against tracked documentation files.
*
* Usage:
* node scripts/ci/catalog.js
* node scripts/ci/catalog.js --json
* node scripts/ci/catalog.js --md
* node scripts/ci/catalog.js --text
* node scripts/ci/catalog.js --write --text
*/
'use strict';
@@ -17,6 +18,10 @@ const path = require('path');
const ROOT = path.join(__dirname, '../..');
const README_PATH = path.join(ROOT, 'README.md');
const AGENTS_PATH = path.join(ROOT, 'AGENTS.md');
const README_ZH_CN_PATH = path.join(ROOT, 'README.zh-CN.md');
const DOCS_ZH_CN_README_PATH = path.join(ROOT, 'docs', 'zh-CN', 'README.md');
const DOCS_ZH_CN_AGENTS_PATH = path.join(ROOT, 'docs', 'zh-CN', 'AGENTS.md');
const WRITE_MODE = process.argv.includes('--write');
const OUTPUT_MODE = process.argv.includes('--md')
? 'md'
@@ -43,8 +48,9 @@ function listMatchingFiles(relativeDir, matcher) {
function buildCatalog() {
const agents = listMatchingFiles('agents', entry => entry.isFile() && entry.name.endsWith('.md'));
const commands = listMatchingFiles('commands', entry => entry.isFile() && entry.name.endsWith('.md'));
const skills = listMatchingFiles('skills', entry => entry.isDirectory() && fs.existsSync(path.join(ROOT, 'skills', entry.name, 'SKILL.md')))
.map(skillDir => `${skillDir}/SKILL.md`);
const skills = listMatchingFiles('skills', entry => (
entry.isDirectory() && fs.existsSync(path.join(ROOT, 'skills', entry.name, 'SKILL.md'))
)).map(skillDir => `${skillDir}/SKILL.md`);
return {
agents: { count: agents.length, files: agents, glob: 'agents/*.md' },
@@ -61,10 +67,28 @@ function readFileOrThrow(filePath) {
}
}
function writeFileOrThrow(filePath, content) {
  try {
    fs.writeFileSync(filePath, content, 'utf8');
  } catch (error) {
    throw new Error(`Failed to write ${path.basename(filePath)}: ${error.message}`);
  }
}
function replaceOrThrow(content, regex, replacer, source) {
  if (!regex.test(content)) {
    throw new Error(`${source} is missing the expected catalog marker`);
  }
  return content.replace(regex, replacer);
}
function parseReadmeExpectations(readmeContent) {
const expectations = [];
const quickStartMatch = readmeContent.match(/access to\s+(\d+)\s+agents,\s+(\d+)\s+skills,\s+and\s+(\d+)\s+commands/i);
const quickStartMatch = readmeContent.match(
/access to\s+(\d+)\s+agents,\s+(\d+)\s+skills,\s+and\s+(\d+)\s+(?:commands|legacy command shims?)/i
);
if (!quickStartMatch) {
throw new Error('README.md is missing the quick-start catalog summary');
}
@@ -95,6 +119,120 @@ function parseReadmeExpectations(readmeContent) {
});
}
const parityPatterns = [
{
category: 'agents',
regex: /^\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*12\s*\|$/im,
source: 'README.md parity table'
},
{
category: 'commands',
regex: /^\|\s*(?:\*\*)?Commands(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\|\s*Instruction-based\s*\|\s*31\s*\|$/im,
source: 'README.md parity table'
},
{
category: 'skills',
regex: /^\|\s*(?:\*\*)?Skills(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*Shared\s*\|\s*10\s*\(native format\)\s*\|\s*37\s*\|$/im,
source: 'README.md parity table'
}
];
for (const pattern of parityPatterns) {
const match = readmeContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} row`);
}
expectations.push({
category: pattern.category,
mode: 'exact',
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
return expectations;
}
function parseZhRootReadmeExpectations(readmeContent) {
  const match = readmeContent.match(/你现在可以使用\s+(\d+)\s+个代理、\s*(\d+)\s*个技能和\s*(\d+)\s*个命令/i);
  if (!match) {
    throw new Error('README.zh-CN.md is missing the quick-start catalog summary');
  }
  return [
    { category: 'agents', mode: 'exact', expected: Number(match[1]), source: 'README.zh-CN.md quick-start summary' },
    { category: 'skills', mode: 'exact', expected: Number(match[2]), source: 'README.zh-CN.md quick-start summary' },
    { category: 'commands', mode: 'exact', expected: Number(match[3]), source: 'README.zh-CN.md quick-start summary' }
  ];
}
function parseZhDocsReadmeExpectations(readmeContent) {
const expectations = [];
const quickStartMatch = readmeContent.match(/你现在可以使用\s+(\d+)\s+个智能体、\s*(\d+)\s*项技能和\s*(\d+)\s*个命令了/i);
if (!quickStartMatch) {
throw new Error('docs/zh-CN/README.md is missing the quick-start catalog summary');
}
expectations.push(
{ category: 'agents', mode: 'exact', expected: Number(quickStartMatch[1]), source: 'docs/zh-CN/README.md quick-start summary' },
{ category: 'skills', mode: 'exact', expected: Number(quickStartMatch[2]), source: 'docs/zh-CN/README.md quick-start summary' },
{ category: 'commands', mode: 'exact', expected: Number(quickStartMatch[3]), source: 'docs/zh-CN/README.md quick-start summary' }
);
const tablePatterns = [
{ category: 'agents', regex: /\|\s*智能体\s*\|\s*(?:(?:PASS:|\u2705)\s*)?(\d+)\s*个\s*\|/i, source: 'docs/zh-CN/README.md comparison table' },
{ category: 'commands', regex: /\|\s*命令\s*\|\s*(?:(?:PASS:|\u2705)\s*)?(\d+)\s*个\s*\|/i, source: 'docs/zh-CN/README.md comparison table' },
{ category: 'skills', regex: /\|\s*技能\s*\|\s*(?:(?:PASS:|\u2705)\s*)?(\d+)\s*项\s*\|/i, source: 'docs/zh-CN/README.md comparison table' }
];
for (const pattern of tablePatterns) {
const match = readmeContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} row`);
}
expectations.push({
category: pattern.category,
mode: 'exact',
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
const parityPatterns = [
{
category: 'agents',
regex: /^\|\s*(?:\*\*)?智能体(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*12\s*\|$/im,
source: 'docs/zh-CN/README.md parity table'
},
{
category: 'commands',
regex: /^\|\s*(?:\*\*)?命令(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*共享\s*\|\s*基于指令\s*\|\s*31\s*\|$/im,
source: 'docs/zh-CN/README.md parity table'
},
{
category: 'skills',
regex: /^\|\s*(?:\*\*)?技能(?:\*\*)?\s*\|\s*(\d+)\s*\|\s*共享\s*\|\s*10\s*\(原生格式\)\s*\|\s*37\s*\|$/im,
source: 'docs/zh-CN/README.md parity table'
}
];
for (const pattern of parityPatterns) {
const match = readmeContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} row`);
}
expectations.push({
category: pattern.category,
mode: 'exact',
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
return expectations;
}
@@ -153,6 +291,61 @@ function parseAgentsDocExpectations(agentsContent) {
return expectations;
}
function parseZhAgentsDocExpectations(agentsContent) {
const summaryMatch = agentsContent.match(/提供\s+(\d+)\s+个专业代理、\s*(\d+)(\+)?\s*项技能、\s*(\d+)\s+条命令/i);
if (!summaryMatch) {
throw new Error('docs/zh-CN/AGENTS.md is missing the catalog summary line');
}
const expectations = [
{ category: 'agents', mode: 'exact', expected: Number(summaryMatch[1]), source: 'docs/zh-CN/AGENTS.md summary' },
{
category: 'skills',
mode: summaryMatch[3] ? 'minimum' : 'exact',
expected: Number(summaryMatch[2]),
source: 'docs/zh-CN/AGENTS.md summary'
},
{ category: 'commands', mode: 'exact', expected: Number(summaryMatch[4]), source: 'docs/zh-CN/AGENTS.md summary' }
];
const structurePatterns = [
{
category: 'agents',
mode: 'exact',
regex: /^\s*agents\/\s*[—–-]\s*(\d+)\s+个专业子代理\s*$/im,
source: 'docs/zh-CN/AGENTS.md project structure'
},
{
category: 'skills',
mode: 'minimum',
regex: /^\s*skills\/\s*[—–-]\s*(\d+)(\+)?\s+个工作流技能和领域知识\s*$/im,
source: 'docs/zh-CN/AGENTS.md project structure'
},
{
category: 'commands',
mode: 'exact',
regex: /^\s*commands\/\s*[—–-]\s*(\d+)\s+个斜杠命令\s*$/im,
source: 'docs/zh-CN/AGENTS.md project structure'
}
];
for (const pattern of structurePatterns) {
const match = agentsContent.match(pattern.regex);
if (!match) {
throw new Error(`${pattern.source} is missing the ${pattern.category} entry`);
}
expectations.push({
category: pattern.category,
mode: pattern.mode === 'minimum' && match[2] ? 'minimum' : pattern.mode,
expected: Number(match[1]),
source: `${pattern.source} (${pattern.category})`
});
}
return expectations;
}
function evaluateExpectations(catalog, expectations) {
return expectations.map(expectation => {
const actual = catalog[expectation.category].count;
@@ -173,6 +366,208 @@ function formatExpectation(expectation) {
return `${expectation.source}: ${expectation.category} documented ${comparator} ${expectation.expected}, actual ${expectation.actual}`;
}
function syncEnglishReadme(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(access to\s+)(\d+)(\s+agents,\s+)(\d+)(\s+skills,\s+and\s+)(\d+)(\s+(?:commands|legacy command shims?))/i,
(_, prefix, __, agentsSuffix, ___, skillsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsSuffix}${catalog.commands.count} legacy command shims`,
'README.md quick-start summary'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s+agents\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'README.md comparison table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*(?:\*\*)?Commands(?:\*\*)?\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s+commands\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'README.md comparison table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*(?:\*\*)?Skills(?:\*\*)?\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s+skills\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'README.md comparison table (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?Agents(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*Shared\s*\(AGENTS\.md\)\s*\|\s*12\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'README.md parity table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?Commands(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*Shared\s*\|\s*Instruction-based\s*\|\s*31\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'README.md parity table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?Skills(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*Shared\s*\|\s*10\s*\(native format\)\s*\|\s*37\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'README.md parity table (skills)'
);
return nextContent;
}
function syncEnglishAgents(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(providing\s+)(\d+)(\s+specialized agents,\s+)(\d+)(\+?)(\s+skills,\s+)(\d+)(\s+commands)/i,
(_, prefix, __, agentsSuffix, ___, skillsPlus, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsPlus}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'AGENTS.md summary'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*agents\/\s*[—–-]\s*)(\d+)(\s+specialized subagents\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'AGENTS.md project structure (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*skills\/\s*[—–-]\s*)(\d+)(\+?)(\s+workflow skills and domain knowledge\s*)$/im,
(_, prefix, __, plus, suffix) => `${prefix}${catalog.skills.count}${plus}${suffix}`,
'AGENTS.md project structure (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*commands\/\s*[—–-]\s*)(\d+)(\s+slash commands\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'AGENTS.md project structure (commands)'
);
return nextContent;
}
function syncZhRootReadme(content, catalog) {
return replaceOrThrow(
content,
/(你现在可以使用\s+)(\d+)(\s+个代理、\s*)(\d+)(\s*个技能和\s*)(\d+)(\s*个命令[。.!]?)/i,
(_, prefix, __, agentsSuffix, ___, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'README.zh-CN.md quick-start summary'
);
}
function syncZhDocsReadme(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(你现在可以使用\s+)(\d+)(\s+个智能体、\s*)(\d+)(\s*项技能和\s*)(\d+)(\s*个命令了[。.!]?)/i,
(_, prefix, __, agentsSuffix, ___, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'docs/zh-CN/README.md quick-start summary'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*智能体\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s*个\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'docs/zh-CN/README.md comparison table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*命令\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s*个\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'docs/zh-CN/README.md comparison table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/(\|\s*技能\s*\|\s*(?:(?:PASS:|\u2705)\s*)?)(\d+)(\s*项\s*\|)/i,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'docs/zh-CN/README.md comparison table (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?智能体(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*共享\s*\(AGENTS\.md\)\s*\|\s*12\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'docs/zh-CN/README.md parity table (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?命令(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*共享\s*\|\s*基于指令\s*\|\s*31\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'docs/zh-CN/README.md parity table (commands)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\|\s*(?:\*\*)?技能(?:\*\*)?\s*\|\s*)(\d+)(\s*\|\s*共享\s*\|\s*10\s*\(原生格式\)\s*\|\s*37\s*\|)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.skills.count}${suffix}`,
'docs/zh-CN/README.md parity table (skills)'
);
return nextContent;
}
function syncZhAgents(content, catalog) {
let nextContent = content;
nextContent = replaceOrThrow(
nextContent,
/(提供\s+)(\d+)(\s+个专业代理、\s*)(\d+)(\+?)(\s*项技能、\s*)(\d+)(\s+条命令)/i,
(_, prefix, __, agentsSuffix, ___, skillsPlus, skillsSuffix, ____, commandsSuffix) =>
`${prefix}${catalog.agents.count}${agentsSuffix}${catalog.skills.count}${skillsPlus}${skillsSuffix}${catalog.commands.count}${commandsSuffix}`,
'docs/zh-CN/AGENTS.md summary'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*agents\/\s*[—–-]\s*)(\d+)(\s+个专业子代理\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.agents.count}${suffix}`,
'docs/zh-CN/AGENTS.md project structure (agents)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*skills\/\s*[—–-]\s*)(\d+)(\+?)(\s+个工作流技能和领域知识\s*)$/im,
(_, prefix, __, plus, suffix) => `${prefix}${catalog.skills.count}${plus}${suffix}`,
'docs/zh-CN/AGENTS.md project structure (skills)'
);
nextContent = replaceOrThrow(
nextContent,
/^(\s*commands\/\s*[—–-]\s*)(\d+)(\s+个斜杠命令\s*)$/im,
(_, prefix, __, suffix) => `${prefix}${catalog.commands.count}${suffix}`,
'docs/zh-CN/AGENTS.md project structure (commands)'
);
return nextContent;
}
const DOCUMENT_SPECS = [
{
filePath: README_PATH,
parseExpectations: parseReadmeExpectations,
syncContent: syncEnglishReadme,
},
{
filePath: AGENTS_PATH,
parseExpectations: parseAgentsDocExpectations,
syncContent: syncEnglishAgents,
},
{
filePath: README_ZH_CN_PATH,
parseExpectations: parseZhRootReadmeExpectations,
syncContent: syncZhRootReadme,
},
{
filePath: DOCS_ZH_CN_README_PATH,
parseExpectations: parseZhDocsReadmeExpectations,
syncContent: syncZhDocsReadme,
},
{
filePath: DOCS_ZH_CN_AGENTS_PATH,
parseExpectations: parseZhAgentsDocExpectations,
syncContent: syncZhAgents,
},
];
function renderText(result) {
console.log('Catalog counts:');
console.log(`- agents: ${result.catalog.agents.count}`);
@@ -215,12 +610,20 @@ function renderMarkdown(result) {
function main() {
const catalog = buildCatalog();
const readmeContent = readFileOrThrow(README_PATH);
const agentsContent = readFileOrThrow(AGENTS_PATH);
const expectations = [
...parseReadmeExpectations(readmeContent),
...parseAgentsDocExpectations(agentsContent)
];
if (WRITE_MODE) {
for (const spec of DOCUMENT_SPECS) {
const currentContent = readFileOrThrow(spec.filePath);
const nextContent = spec.syncContent(currentContent, catalog);
if (nextContent !== currentContent) {
writeFileOrThrow(spec.filePath, nextContent);
}
}
}
const expectations = DOCUMENT_SPECS.flatMap(spec => (
spec.parseExpectations(readFileOrThrow(spec.filePath))
));
const checks = evaluateExpectations(catalog, expectations);
const result = { catalog, checks };

View File

@@ -0,0 +1,317 @@
#!/usr/bin/env node
'use strict';
/**
* Merge the non-MCP Codex baseline from `.codex/config.toml` into a target
* `config.toml` without overwriting existing user choices.
*
* Strategy: add-only.
* - Missing root keys are inserted before the first TOML table.
* - Missing table keys are appended to existing tables.
* - Missing tables are appended to the end of the file.
*/
const fs = require('fs');
const path = require('path');
let TOML;
try {
TOML = require('@iarna/toml');
} catch {
console.error('[ecc-codex] Missing dependency: @iarna/toml');
console.error('[ecc-codex] Run: npm install (from the ECC repo root)');
process.exit(1);
}
const ROOT_KEYS = ['approval_policy', 'sandbox_mode', 'web_search', 'notify', 'persistent_instructions'];
const TABLE_PATHS = [
'features',
'profiles.strict',
'profiles.yolo',
'agents',
'agents.explorer',
'agents.reviewer',
'agents.docs_researcher',
];
const TOML_HEADER_RE = /^[ \t]*(?:\[[^[\]\n][^\]\n]*\]|\[\[[^[\]\n][^\]\n]*\]\])[ \t]*(?:#.*)?$/m;
function log(message) {
console.log(`[ecc-codex] ${message}`);
}
function warn(message) {
console.warn(`[ecc-codex] WARNING: ${message}`);
}
function getNested(obj, pathParts) {
let current = obj;
for (const part of pathParts) {
if (!current || typeof current !== 'object' || !(part in current)) {
return undefined;
}
current = current[part];
}
return current;
}
function setNested(obj, pathParts, value) {
let current = obj;
for (let i = 0; i < pathParts.length - 1; i += 1) {
const part = pathParts[i];
if (!current[part] || typeof current[part] !== 'object' || Array.isArray(current[part])) {
current[part] = {};
}
current = current[part];
}
current[pathParts[pathParts.length - 1]] = value;
}
function findFirstTableIndex(raw) {
const match = TOML_HEADER_RE.exec(raw);
return match ? match.index : -1;
}
function findTableRange(raw, tablePath) {
const escaped = tablePath.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const headerPattern = new RegExp(`^[ \\t]*\\[${escaped}\\][ \\t]*(?:#.*)?$`, 'm');
const match = headerPattern.exec(raw);
if (!match) {
return null;
}
const headerEnd = raw.indexOf('\n', match.index);
const bodyStart = headerEnd === -1 ? raw.length : headerEnd + 1;
const nextHeaderRel = raw.slice(bodyStart).search(TOML_HEADER_RE);
const bodyEnd = nextHeaderRel === -1 ? raw.length : bodyStart + nextHeaderRel;
return { bodyStart, bodyEnd };
}
function ensureTrailingNewline(text) {
return text.endsWith('\n') ? text : `${text}\n`;
}
function insertBeforeFirstTable(raw, block) {
const normalizedBlock = ensureTrailingNewline(block.trimEnd());
const firstTableIndex = findFirstTableIndex(raw);
if (firstTableIndex === -1) {
const prefix = raw.trimEnd();
return prefix ? `${prefix}\n${normalizedBlock}` : normalizedBlock;
}
const before = raw.slice(0, firstTableIndex).trimEnd();
const after = raw.slice(firstTableIndex).replace(/^\n+/, '');
return `${before}\n\n${normalizedBlock}\n${after}`;
}
function appendBlock(raw, block) {
const prefix = raw.trimEnd();
const normalizedBlock = block.trimEnd();
return prefix ? `${prefix}\n\n${normalizedBlock}\n` : `${normalizedBlock}\n`;
}
function stringifyValue(value) {
return TOML.stringify({ value }).trim().replace(/^value = /, '');
}
function updateInlineTableKeys(raw, tablePath, missingKeys) {
const pathParts = tablePath.split('.');
if (pathParts.length < 2) {
return null;
}
const parentPath = pathParts.slice(0, -1).join('.');
const parentRange = findTableRange(raw, parentPath);
if (!parentRange) {
return null;
}
const tableKey = pathParts[pathParts.length - 1];
const escapedKey = tableKey.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const body = raw.slice(parentRange.bodyStart, parentRange.bodyEnd);
const lines = body.split('\n');
for (let index = 0; index < lines.length; index += 1) {
const inlinePattern = new RegExp(`^(\\s*${escapedKey}\\s*=\\s*\\{)(.*?)(\\}\\s*(?:#.*)?)$`);
const match = inlinePattern.exec(lines[index]);
if (!match) {
continue;
}
const additions = Object.entries(missingKeys)
.map(([key, value]) => `${key} = ${stringifyValue(value)}`)
.join(', ');
const existingEntries = match[2].trim();
const nextEntries = existingEntries ? `${existingEntries}, ${additions}` : additions;
lines[index] = `${match[1]}${nextEntries}${match[3]}`;
return `${raw.slice(0, parentRange.bodyStart)}${lines.join('\n')}${raw.slice(parentRange.bodyEnd)}`;
}
return null;
}
function appendImplicitTable(raw, tablePath, missingKeys) {
const candidate = appendBlock(raw, stringifyTable(tablePath, missingKeys));
try {
TOML.parse(candidate);
return candidate;
} catch {
return null;
}
}
function appendToTable(raw, tablePath, block, missingKeys = null) {
const range = findTableRange(raw, tablePath);
if (!range) {
if (missingKeys) {
const inlineUpdated = updateInlineTableKeys(raw, tablePath, missingKeys);
if (inlineUpdated) {
return inlineUpdated;
}
const appendedTable = appendImplicitTable(raw, tablePath, missingKeys);
if (appendedTable) {
return appendedTable;
}
}
warn(`Skipping missing keys for [${tablePath}] because it has no standalone header and could not be safely updated`);
return raw;
}
const before = raw.slice(0, range.bodyEnd).trimEnd();
const after = raw.slice(range.bodyEnd).replace(/^\n*/, '\n');
return `${before}\n${block.trimEnd()}\n${after}`;
}
function stringifyRootKeys(keys) {
return TOML.stringify(keys).trim();
}
function stringifyTable(tablePath, value) {
const scalarOnly = {};
for (const [key, entryValue] of Object.entries(value)) {
if (entryValue && typeof entryValue === 'object' && !Array.isArray(entryValue)) {
continue;
}
scalarOnly[key] = entryValue;
}
const snippet = {};
setNested(snippet, tablePath.split('.'), scalarOnly);
return TOML.stringify(snippet).trim();
}
function stringifyTableKeys(tableValue) {
const lines = [];
for (const [key, value] of Object.entries(tableValue)) {
if (value && typeof value === 'object' && !Array.isArray(value)) {
continue;
}
lines.push(TOML.stringify({ [key]: value }).trim());
}
return lines.join('\n');
}
function main() {
const args = process.argv.slice(2);
const configPath = args.find(arg => !arg.startsWith('-'));
const dryRun = args.includes('--dry-run');
if (!configPath) {
console.error('Usage: merge-codex-config.js <config.toml> [--dry-run]');
process.exit(1);
}
const referencePath = path.join(__dirname, '..', '..', '.codex', 'config.toml');
if (!fs.existsSync(referencePath)) {
console.error(`[ecc-codex] Reference config not found: ${referencePath}`);
process.exit(1);
}
if (!fs.existsSync(configPath)) {
console.error(`[ecc-codex] Config file not found: ${configPath}`);
process.exit(1);
}
const raw = fs.readFileSync(configPath, 'utf8');
const referenceRaw = fs.readFileSync(referencePath, 'utf8');
let targetConfig;
let referenceConfig;
try {
targetConfig = TOML.parse(raw);
referenceConfig = TOML.parse(referenceRaw);
} catch (error) {
console.error(`[ecc-codex] Failed to parse TOML: ${error.message}`);
process.exit(1);
}
const missingRootKeys = {};
for (const key of ROOT_KEYS) {
if (referenceConfig[key] !== undefined && targetConfig[key] === undefined) {
missingRootKeys[key] = referenceConfig[key];
}
}
const missingTables = [];
const missingTableKeys = [];
for (const tablePath of TABLE_PATHS) {
const pathParts = tablePath.split('.');
const referenceValue = getNested(referenceConfig, pathParts);
if (referenceValue === undefined) {
continue;
}
const targetValue = getNested(targetConfig, pathParts);
if (targetValue === undefined) {
missingTables.push(tablePath);
continue;
}
const missingKeys = {};
for (const [key, value] of Object.entries(referenceValue)) {
if (value && typeof value === 'object' && !Array.isArray(value)) {
continue;
}
if (targetValue[key] === undefined) {
missingKeys[key] = value;
}
}
if (Object.keys(missingKeys).length > 0) {
missingTableKeys.push({ tablePath, missingKeys });
}
}
if (
Object.keys(missingRootKeys).length === 0 &&
missingTables.length === 0 &&
missingTableKeys.length === 0
) {
log('All baseline Codex settings already present. Nothing to do.');
return;
}
let nextRaw = raw;
if (Object.keys(missingRootKeys).length > 0) {
log(` [add-root] ${Object.keys(missingRootKeys).join(', ')}`);
nextRaw = insertBeforeFirstTable(nextRaw, stringifyRootKeys(missingRootKeys));
}
for (const { tablePath, missingKeys } of missingTableKeys) {
log(` [add-keys] [${tablePath}] -> ${Object.keys(missingKeys).join(', ')}`);
nextRaw = appendToTable(nextRaw, tablePath, stringifyTableKeys(missingKeys), missingKeys);
}
for (const tablePath of missingTables) {
log(` [add-table] [${tablePath}]`);
nextRaw = appendBlock(nextRaw, stringifyTable(tablePath, getNested(referenceConfig, tablePath.split('.'))));
}
if (dryRun) {
log('Dry run — would write the merged Codex baseline.');
return;
}
fs.writeFileSync(configPath, nextRaw, 'utf8');
log('Done. Baseline Codex settings merged.');
}
main();
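The add-only insertion behavior above can be seen in a self-contained sketch (this mirrors `insertBeforeFirstTable` for illustration and is not the ECC module itself; the sample config keys are made up):

```javascript
// Minimal sketch: missing root keys are inserted *before* the first TOML
// table header, so they remain at root scope instead of being swallowed
// by the last table in the file.
const TOML_HEADER_RE = /^[ \t]*(?:\[[^[\]\n][^\]\n]*\]|\[\[[^[\]\n][^\]\n]*\]\])[ \t]*(?:#.*)?$/m;

function insertBeforeFirstTable(raw, block) {
  const normalizedBlock = `${block.trimEnd()}\n`;
  const match = TOML_HEADER_RE.exec(raw);
  if (!match) {
    // No tables at all: append to whatever root keys exist.
    const prefix = raw.trimEnd();
    return prefix ? `${prefix}\n${normalizedBlock}` : normalizedBlock;
  }
  const before = raw.slice(0, match.index).trimEnd();
  const after = raw.slice(match.index).replace(/^\n+/, '');
  return `${before}\n\n${normalizedBlock}\n${after}`;
}

const merged = insertBeforeFirstTable(
  'existing = 1\n\n[features]\nfoo = true\n',
  'web_search = true'
);
console.log(merged);
```

The new `web_search` key lands between the existing root key and the `[features]` header, which is exactly why appending to the end of the file would be wrong here.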

View File

@@ -54,7 +54,7 @@ NC='\033[0m'
log() { echo -e "${BLUE}[GAN-HARNESS]${NC} $*"; }
ok() { echo -e "${GREEN}[✓]${NC} $*"; }
warn() { echo -e "${YELLOW}[]${NC} $*"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
fail() { echo -e "${RED}[✗]${NC} $*"; }
phase() { echo -e "\n${PURPLE}═══════════════════════════════════════════════${NC}"; echo -e "${PURPLE} $*${NC}"; echo -e "${PURPLE}═══════════════════════════════════════════════${NC}\n"; }
@@ -159,7 +159,7 @@ for (( i=1; i<=MAX_ITERATIONS; i++ )); do
log "━━━ Iteration $i / $MAX_ITERATIONS ━━━"
# ── GENERATE ──
echo -e "${GREEN} GENERATOR (iteration $i)${NC}"
echo -e "${GREEN}>> GENERATOR (iteration $i)${NC}"
FEEDBACK_CONTEXT=""
if [ $i -gt 1 ] && [ -f "${FEEDBACK_DIR}/feedback-$(printf '%03d' $((i-1))).md" ]; then
@@ -181,7 +181,7 @@ Update gan-harness/generator-state.md." \
ok "Generator completed iteration $i"
# ── EVALUATE ──
echo -e "${RED} EVALUATOR (iteration $i)${NC}"
echo -e "${RED}>> EVALUATOR (iteration $i)${NC}"
claude -p --model "$EVALUATOR_MODEL" \
--allowedTools "Read,Write,Bash,Grep,Glob" \
@@ -217,7 +217,7 @@ Include the weighted TOTAL score in the format: | **TOTAL** | | | **X.X** |" \
# ── CHECK PASS ──
if score_passes "$SCORE" "$PASS_THRESHOLD"; then
echo ""
ok "🎉 PASSED at iteration $i with score $SCORE (threshold: $PASS_THRESHOLD)"
ok "PASSED at iteration $i with score $SCORE (threshold: $PASS_THRESHOLD)"
break
fi
@@ -256,7 +256,7 @@ cat > "${HARNESS_DIR}/build-report.md" << EOF
# GAN Harness Build Report
**Brief:** $BRIEF
**Result:** $(score_passes "$FINAL_SCORE" "$PASS_THRESHOLD" && echo "PASS" || echo "FAIL")
**Iterations:** $NUM_ITERATIONS / $MAX_ITERATIONS
**Final Score:** $FINAL_SCORE / 10.0 (threshold: $PASS_THRESHOLD)
**Elapsed:** $ELAPSED
@@ -287,9 +287,9 @@ ok "Report written to ${HARNESS_DIR}/build-report.md"
echo ""
log "━━━ Final Results ━━━"
if score_passes "$FINAL_SCORE" "$PASS_THRESHOLD"; then
echo -e "${GREEN} Result: PASS${NC}"
else
echo -e "${RED} Result: FAIL${NC}"
fi
echo -e " Score: ${CYAN}${FINAL_SCORE}${NC} / 10.0"
echo -e " Iterations: ${NUM_ITERATIONS} / ${MAX_ITERATIONS}"

View File

@@ -0,0 +1,131 @@
#!/usr/bin/env node
/**
* PostToolUse hook: lightweight frontend design-quality reminder.
*
* This stays self-contained inside ECC. It does not call remote models or
* install packages. The goal is to catch obviously generic UI drift and keep
* frontend edits aligned with ECC's stronger design standards.
*/
'use strict';
const fs = require('fs');
const path = require('path');
const FRONTEND_EXTENSIONS = /\.(astro|css|html|jsx|scss|svelte|tsx|vue)$/i;
const MAX_STDIN = 1024 * 1024;
const GENERIC_SIGNALS = [
{ pattern: /\bget started\b/i, label: '"Get Started" CTA copy' },
{ pattern: /\blearn more\b/i, label: '"Learn more" CTA copy' },
{ pattern: /\bgrid-cols-(3|4)\b/, label: 'uniform multi-card grid' },
{ pattern: /\bbg-gradient-to-[trbl]/, label: 'stock gradient utility usage' },
{ pattern: /\btext-center\b/, label: 'centered default layout cues' },
{ pattern: /\bfont-(sans|inter)\b/i, label: 'default font utility' },
];
const CHECKLIST = [
'visual hierarchy with real contrast',
'intentional spacing rhythm',
'depth, layering, or overlap',
'purposeful hover and focus states',
'color and typography that feel specific',
];
function getFilePaths(input) {
const toolInput = input?.tool_input || {};
if (toolInput.file_path) {
return [String(toolInput.file_path)];
}
if (Array.isArray(toolInput.edits)) {
return toolInput.edits
.map(edit => String(edit?.file_path || ''))
.filter(Boolean);
}
return [];
}
function readContent(filePath) {
try {
return fs.readFileSync(path.resolve(filePath), 'utf8');
} catch {
return '';
}
}
function detectSignals(content) {
return GENERIC_SIGNALS.filter(signal => signal.pattern.test(content)).map(signal => signal.label);
}
function buildWarning(frontendPaths, findings) {
const pathLines = frontendPaths.map(fp => ` - ${fp}`).join('\n');
const signalLines = findings.length > 0
? findings.map(item => ` - ${item}`).join('\n')
: ' - no obvious canned-template strings detected';
return [
'[Hook] DESIGN CHECK: frontend file(s) modified:',
pathLines,
'[Hook] Review for generic/template drift. Frontend should have:',
CHECKLIST.map(item => ` - ${item}`).join('\n'),
'[Hook] Heuristic signals:',
signalLines,
].join('\n');
}
function run(inputOrRaw) {
let input;
let rawInput = inputOrRaw;
try {
if (typeof inputOrRaw === 'string') {
rawInput = inputOrRaw;
input = inputOrRaw.trim() ? JSON.parse(inputOrRaw) : {};
} else {
input = inputOrRaw || {};
rawInput = JSON.stringify(inputOrRaw ?? {});
}
} catch {
return { exitCode: 0, stdout: typeof rawInput === 'string' ? rawInput : '' };
}
const filePaths = getFilePaths(input);
const frontendPaths = filePaths.filter(filePath => FRONTEND_EXTENSIONS.test(filePath));
if (frontendPaths.length === 0) {
return { exitCode: 0, stdout: typeof rawInput === 'string' ? rawInput : '' };
}
const findings = [];
for (const filePath of frontendPaths) {
const content = readContent(filePath);
findings.push(...detectSignals(content));
}
return {
exitCode: 0,
stdout: typeof rawInput === 'string' ? rawInput : '',
stderr: buildWarning(frontendPaths, findings),
};
}
module.exports = { run };
if (require.main === module) {
let raw = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (raw.length < MAX_STDIN) {
const remaining = MAX_STDIN - raw.length;
raw += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => {
const result = run(raw);
if (result.stderr) process.stderr.write(`${result.stderr}\n`);
process.stdout.write(typeof result.stdout === 'string' ? result.stdout : raw);
process.exit(Number.isInteger(result.exitCode) ? result.exitCode : 0);
});
}
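How the heuristic fires can be sketched in isolation (a trimmed reimplementation of `detectSignals` with two of the patterns; the JSX snippet is an invented example, not from the repo):

```javascript
// Standalone sketch: match edited frontend source against the
// generic-template signal patterns and collect the matching labels.
const GENERIC_SIGNALS = [
  { pattern: /\bget started\b/i, label: '"Get Started" CTA copy' },
  { pattern: /\bgrid-cols-(3|4)\b/, label: 'uniform multi-card grid' },
];

function detectSignals(content) {
  return GENERIC_SIGNALS
    .filter(signal => signal.pattern.test(content))
    .map(signal => signal.label);
}

const findings = detectSignals(
  '<div className="grid grid-cols-3"><a>Get Started</a></div>'
);
console.log(findings);
```

Both signals trip on this snippet, so the PostToolUse hook would emit the design-check warning on stderr while passing stdin through untouched.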

View File

@@ -0,0 +1,73 @@
#!/usr/bin/env node
'use strict';
const fs = require('fs');
const os = require('os');
const path = require('path');
const MAX_STDIN = 1024 * 1024;
let raw = '';
const MODE_CONFIG = {
audit: {
fileName: 'bash-commands.log',
format: command => `[${new Date().toISOString()}] ${command}`,
},
cost: {
fileName: 'cost-tracker.log',
format: command => `[${new Date().toISOString()}] tool=Bash command=${command}`,
},
};
function sanitizeCommand(command) {
return String(command || '')
.replace(/\n/g, ' ')
.replace(/--token[= ][^ ]*/g, '--token=<REDACTED>')
.replace(/Authorization:[: ]*[^ ]*[: ]*[^ ]*/gi, 'Authorization:<REDACTED>')
.replace(/\bAKIA[A-Z0-9]{16}\b/g, '<REDACTED>')
.replace(/\bASIA[A-Z0-9]{16}\b/g, '<REDACTED>')
.replace(/password[= ][^ ]*/gi, 'password=<REDACTED>')
.replace(/\bghp_[A-Za-z0-9_]+\b/g, '<REDACTED>')
.replace(/\bgho_[A-Za-z0-9_]+\b/g, '<REDACTED>')
.replace(/\bghs_[A-Za-z0-9_]+\b/g, '<REDACTED>')
.replace(/\bgithub_pat_[A-Za-z0-9_]+\b/g, '<REDACTED>');
}
function appendLine(filePath, line) {
fs.mkdirSync(path.dirname(filePath), { recursive: true });
fs.appendFileSync(filePath, `${line}\n`, 'utf8');
}
function main() {
const config = MODE_CONFIG[process.argv[2]];
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (raw.length < MAX_STDIN) {
const remaining = MAX_STDIN - raw.length;
raw += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => {
try {
if (config) {
const input = raw.trim() ? JSON.parse(raw) : {};
const command = sanitizeCommand(input.tool_input?.command || '?');
appendLine(path.join(os.homedir(), '.claude', config.fileName), config.format(command));
}
} catch {
// Logging must never block the calling hook.
}
process.stdout.write(raw);
});
}
if (require.main === module) {
main();
}
module.exports = {
sanitizeCommand,
};
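A quick sketch of the redaction pass (a subset of the `sanitizeCommand` rules above; the sample command, token, and URL are made up):

```javascript
// Standalone sketch: flatten newlines, then mask token/credential
// shapes before the command ever reaches a log file.
function sanitizeCommand(command) {
  return String(command || '')
    .replace(/\n/g, ' ')
    .replace(/--token[= ][^ ]*/g, '--token=<REDACTED>')
    .replace(/\bghp_[A-Za-z0-9_]+\b/g, '<REDACTED>')
    .replace(/password[= ][^ ]*/gi, 'password=<REDACTED>');
}

const line = sanitizeCommand(
  'curl --token abc123 https://api.example.com\nexport GH=ghp_abcDEF123'
);
console.log(line);
```

Both secrets are masked and the newline is collapsed, so each log entry stays a single sanitized line; any failure in this path is swallowed by the caller's try/catch so logging never blocks the hook.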

View File

@@ -67,7 +67,7 @@ function findFileIssues(filePath) {
try {
const content = getStagedFileContent(filePath);
if (content == null) {
if (content === null || content === undefined) {
return issues;
}
const lines = content.split('\n');

View File

@@ -2,12 +2,50 @@
'use strict';
/**
* Session end marker hook - outputs stdin to stdout unchanged.
* Exports run() for in-process execution (avoids spawnSync issues on Windows).
* Session end marker hook - performs lightweight observer cleanup and
* outputs stdin to stdout unchanged. Exports run() for in-process execution.
*/
const {
resolveProjectContext,
removeSessionLease,
listSessionLeases,
stopObserverForContext,
resolveSessionId
} = require('../lib/observer-sessions');
function log(message) {
process.stderr.write(`[SessionEnd] ${message}\n`);
}
function run(rawInput) {
return rawInput || '';
const output = rawInput || '';
const sessionId = resolveSessionId();
if (!sessionId) {
log('No CLAUDE_SESSION_ID available; skipping observer cleanup');
return output;
}
try {
const observerContext = resolveProjectContext();
removeSessionLease(observerContext, sessionId);
const remainingLeases = listSessionLeases(observerContext);
if (remainingLeases.length === 0) {
if (stopObserverForContext(observerContext)) {
log(`Stopped observer for project ${observerContext.projectId} after final session lease ended`);
} else {
log(`No running observer to stop for project ${observerContext.projectId}`);
}
} else {
log(`Retained observer for project ${observerContext.projectId}; ${remainingLeases.length} session lease(s) remain`);
}
} catch (err) {
log(`Observer cleanup skipped: ${err.message}`);
}
return output;
}
// Legacy CLI execution (when run directly)
@@ -22,7 +60,7 @@ if (require.main === module) {
}
});
process.stdin.on('end', () => {
process.stdout.write(raw);
process.stdout.write(run(raw));
});
}

View File

@@ -20,6 +20,7 @@ const {
stripAnsi,
log
} = require('../lib/utils');
const { resolveProjectContext, writeSessionLease, resolveSessionId } = require('../lib/observer-sessions');
const { getPackageManager, getSelectionPrompt } = require('../lib/package-manager');
const { listAliases } = require('../lib/session-aliases');
const { detectProjectType } = require('../lib/project-detect');
@@ -163,6 +164,18 @@ async function main() {
ensureDir(sessionsDir);
ensureDir(learnedDir);
const observerSessionId = resolveSessionId();
if (observerSessionId) {
const observerContext = resolveProjectContext();
writeSessionLease(observerContext, observerSessionId, {
hook: 'SessionStart',
projectRoot: observerContext.projectRoot
});
log(`[SessionStart] Registered observer lease for ${observerSessionId}`);
} else {
log('[SessionStart] No CLAUDE_SESSION_ID available; skipping observer lease registration');
}
// Check for recent session files (last 7 days)
const recentSessions = dedupeRecentSessions(getSessionSearchDirs());

View File

@@ -1,7 +1,7 @@
const fs = require('fs');
const os = require('os');
const path = require('path');
const { planInstallTargetScaffold } = require('./install-targets/registry');
const { getInstallTargetAdapter, planInstallTargetScaffold } = require('./install-targets/registry');
const DEFAULT_REPO_ROOT = path.join(__dirname, '../..');
const SUPPORTED_INSTALL_TARGETS = ['claude', 'cursor', 'antigravity', 'codex', 'gemini', 'opencode', 'codebuddy'];
@@ -76,6 +76,48 @@ function dedupeStrings(values) {
return [...new Set((Array.isArray(values) ? values : []).map(value => String(value).trim()).filter(Boolean))];
}
function readOptionalStringOption(options, key) {
if (
!Object.prototype.hasOwnProperty.call(options, key)
|| options[key] === null
|| options[key] === undefined
) {
return null;
}
if (typeof options[key] !== 'string' || options[key].trim() === '') {
throw new Error(`${key} must be a non-empty string when provided`);
}
return options[key];
}
function readModuleTargetsOrThrow(module) {
const moduleId = module && module.id ? module.id : '<unknown>';
const targets = module && module.targets;
if (!Array.isArray(targets)) {
throw new Error(`Install module ${moduleId} has invalid targets; expected an array of supported target ids`);
}
const normalizedTargets = targets.map(target => (
typeof target === 'string' ? target.trim() : ''
));
if (normalizedTargets.some(target => target.length === 0)) {
throw new Error(`Install module ${moduleId} has invalid targets; expected an array of supported target ids`);
}
const unsupportedTargets = normalizedTargets.filter(target => !SUPPORTED_INSTALL_TARGETS.includes(target));
if (unsupportedTargets.length > 0) {
throw new Error(
`Install module ${moduleId} has unsupported targets: ${unsupportedTargets.join(', ')}`
);
}
return normalizedTargets;
}
function assertKnownModuleIds(moduleIds, manifests) {
const unknownModuleIds = dedupeStrings(moduleIds)
.filter(moduleId => !manifests.modulesById.has(moduleId));
@@ -125,6 +167,11 @@ function loadInstallManifests(options = {}) {
? profilesData.profiles
: {};
const components = Array.isArray(componentsData.components) ? componentsData.components : [];
for (const module of modules) {
readModuleTargetsOrThrow(module);
}
const modulesById = new Map(modules.map(module => [module.id, module]));
const componentsById = new Map(components.map(component => [component.id, component]));
@@ -361,6 +408,16 @@ function resolveInstallPlan(options = {}) {
`Unknown install target: ${target}. Expected one of ${SUPPORTED_INSTALL_TARGETS.join(', ')}`
);
}
const validatedProjectRoot = readOptionalStringOption(options, 'projectRoot');
const validatedHomeDir = readOptionalStringOption(options, 'homeDir');
const targetPlanningInput = target
? {
repoRoot: manifests.repoRoot,
projectRoot: validatedProjectRoot || manifests.repoRoot,
homeDir: validatedHomeDir || os.homedir(),
}
: null;
const targetAdapter = target ? getInstallTargetAdapter(target) : null;
const effectiveRequestedIds = dedupeStrings(
requestedModuleIds.filter(moduleId => !excludedModuleOwners.has(moduleId))
@@ -396,7 +453,13 @@ function resolveInstallPlan(options = {}) {
return;
}
if (target && !module.targets.includes(target)) {
const supportsTarget = !target
|| (
readModuleTargetsOrThrow(module).includes(target)
&& (!targetAdapter || targetAdapter.supportsModule(module, targetPlanningInput))
);
if (!supportsTarget) {
if (dependencyOf) {
skippedTargetIds.add(rootRequesterId || dependencyOf);
return false;
@@ -444,9 +507,9 @@ function resolveInstallPlan(options = {}) {
const scaffoldPlan = target
? planInstallTargetScaffold({
target,
repoRoot: manifests.repoRoot,
projectRoot: options.projectRoot || manifests.repoRoot,
homeDir: options.homeDir || os.homedir(),
repoRoot: targetPlanningInput.repoRoot,
projectRoot: targetPlanningInput.projectRoot,
homeDir: targetPlanningInput.homeDir,
modules: selectedModules,
})
: null;

View File

@@ -4,14 +4,28 @@ const {
createFlatRuleOperations,
createInstallTargetAdapter,
createManagedScaffoldOperation,
normalizeRelativePath,
} = require('./helpers');
const SUPPORTED_SOURCE_PREFIXES = ['rules', 'commands', 'agents', 'skills', '.agents', 'AGENTS.md'];
function supportsAntigravitySourcePath(sourceRelativePath) {
const normalizedPath = normalizeRelativePath(sourceRelativePath);
return SUPPORTED_SOURCE_PREFIXES.some(prefix => (
normalizedPath === prefix || normalizedPath.startsWith(`${prefix}/`)
));
}
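The prefix check above can be sketched in isolation. Here `normalizeRelativePath` is a simplified stand-in for the `./helpers` version (assumed behavior: forward slashes, no leading `./`):

```javascript
// Simplified assumption about the helpers version: normalize separators
// and strip a leading "./" so prefix comparison sees clean paths.
function normalizeRelativePath(relativePath) {
  return String(relativePath).replace(/\\/g, '/').replace(/^\.\//, '');
}

const SUPPORTED_SOURCE_PREFIXES = ['rules', 'commands', 'agents', 'skills', '.agents', 'AGENTS.md'];

function supportsAntigravitySourcePath(sourceRelativePath) {
  const normalizedPath = normalizeRelativePath(sourceRelativePath);
  // Accept an exact prefix match ("rules") or anything under it ("rules/x.md"),
  // but not a sibling that merely shares the spelling ("rulesx").
  return SUPPORTED_SOURCE_PREFIXES.some(prefix => (
    normalizedPath === prefix || normalizedPath.startsWith(`${prefix}/`)
  ));
}
```

The `${prefix}/` suffix in the `startsWith` check is what keeps `rulesx/foo.md` from sneaking past the filter.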
module.exports = createInstallTargetAdapter({
id: 'antigravity-project',
target: 'antigravity',
kind: 'project',
rootSegments: ['.agent'],
installStatePathSegments: ['ecc-install-state.json'],
supportsModule(module) {
const paths = Array.isArray(module && module.paths) ? module.paths : [];
return paths.length > 0;
},
planOperations(input, adapter) {
const modules = Array.isArray(input.modules)
? input.modules
@@ -30,7 +44,9 @@ module.exports = createInstallTargetAdapter({
return modules.flatMap(module => {
const paths = Array.isArray(module.paths) ? module.paths : [];
return paths.flatMap(sourceRelativePath => {
return paths
.filter(supportsAntigravitySourcePath)
.flatMap(sourceRelativePath => {
if (sourceRelativePath === 'rules') {
return createFlatRuleOperations({
moduleId: module.id,
@@ -62,8 +78,8 @@ module.exports = createInstallTargetAdapter({
];
}
return [adapter.createScaffoldOperation(module.id, sourceRelativePath, planningInput)];
});
return [adapter.createScaffoldOperation(module.id, sourceRelativePath, planningInput)];
});
});
},
});

View File

@@ -276,6 +276,13 @@ function createInstallTargetAdapter(config) {
input
));
},
supportsModule(module, input = {}) {
if (typeof config.supportsModule === 'function') {
return config.supportsModule(module, input, adapter);
}
return true;
},
validate(input = {}) {
if (typeof config.validate === 'function') {
return config.validate(input, adapter);

View File

@@ -20,18 +20,100 @@ function readJsonObject(filePath, label) {
return parsed;
}
function mergeHookEntries(existingEntries, incomingEntries) {
function replacePluginRootPlaceholders(value, pluginRoot) {
if (!pluginRoot) {
return value;
}
if (typeof value === 'string') {
return value.split('${CLAUDE_PLUGIN_ROOT}').join(pluginRoot);
}
if (Array.isArray(value)) {
return value.map(item => replacePluginRootPlaceholders(item, pluginRoot));
}
if (value && typeof value === 'object') {
return Object.fromEntries(
Object.entries(value).map(([key, nestedValue]) => [
key,
replacePluginRootPlaceholders(nestedValue, pluginRoot),
])
);
}
return value;
}
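The substitution above walks strings, arrays, and plain objects recursively while leaving other values untouched. A quick usage sketch — the hook entry here is illustrative, not a real ECC config:

```javascript
function replacePluginRootPlaceholders(value, pluginRoot) {
  if (!pluginRoot) {
    return value;
  }
  if (typeof value === 'string') {
    // split/join sidesteps regex escaping for the literal "${CLAUDE_PLUGIN_ROOT}".
    return value.split('${CLAUDE_PLUGIN_ROOT}').join(pluginRoot);
  }
  if (Array.isArray(value)) {
    return value.map(item => replacePluginRootPlaceholders(item, pluginRoot));
  }
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([key, nestedValue]) => [
        key,
        replacePluginRootPlaceholders(nestedValue, pluginRoot),
      ])
    );
  }
  return value;
}

// Illustrative hook entry with a placeholder buried two levels deep.
const entry = {
  matcher: '*',
  hooks: [{ type: 'command', command: 'node ${CLAUDE_PLUGIN_ROOT}/hooks/check.js' }],
};
const resolved = replacePluginRootPlaceholders(entry, '/home/user/.claude/plugin');
```

The function is non-mutating: it rebuilds arrays and objects, so the original `entry` keeps its placeholder.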
function buildLegacyHookSignature(entry, pluginRoot) {
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
return null;
}
const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);
if (typeof normalizedEntry.matcher !== 'string' || !Array.isArray(normalizedEntry.hooks)) {
return null;
}
const hookSignature = normalizedEntry.hooks.map(hook => JSON.stringify({
type: hook && typeof hook === 'object' ? hook.type : undefined,
command: hook && typeof hook === 'object' ? hook.command : undefined,
timeout: hook && typeof hook === 'object' ? hook.timeout : undefined,
async: hook && typeof hook === 'object' ? hook.async : undefined,
}));
return JSON.stringify({
matcher: normalizedEntry.matcher,
hooks: hookSignature,
});
}
function getHookEntryAliases(entry, pluginRoot) {
const aliases = [];
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
return aliases;
}
const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);
if (typeof normalizedEntry.id === 'string' && normalizedEntry.id.trim().length > 0) {
aliases.push(`id:${normalizedEntry.id.trim()}`);
}
const legacySignature = buildLegacyHookSignature(normalizedEntry, pluginRoot);
if (legacySignature) {
aliases.push(`legacy:${legacySignature}`);
}
aliases.push(`json:${JSON.stringify(normalizedEntry)}`);
return aliases;
}
function mergeHookEntries(existingEntries, incomingEntries, pluginRoot) {
const mergedEntries = [];
const seenEntries = new Set();
for (const entry of [...existingEntries, ...incomingEntries]) {
const entryKey = JSON.stringify(entry);
if (seenEntries.has(entryKey)) {
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
continue;
}
seenEntries.add(entryKey);
mergedEntries.push(entry);
if ('id' in entry && typeof entry.id !== 'string') {
continue;
}
const aliases = getHookEntryAliases(entry, pluginRoot);
if (aliases.some(alias => seenEntries.has(alias))) {
continue;
}
for (const alias of aliases) {
seenEntries.add(alias);
}
mergedEntries.push(replacePluginRootPlaceholders(entry, pluginRoot));
}
return mergedEntries;
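A compressed model of the merge above: entries are deduped by alias, and non-object or malformed entries are dropped. This sketch stubs the alias set down to the full-JSON alias only — the real version also dedupes by a stable `id:` alias and a legacy placeholder-normalized signature:

```javascript
// Minimal model: dedupe hook entries by a JSON alias, skipping junk.
// The real mergeHookEntries also matches on `id:` and `legacy:` aliases.
function mergeHookEntries(existingEntries, incomingEntries) {
  const mergedEntries = [];
  const seenEntries = new Set();
  for (const entry of [...existingEntries, ...incomingEntries]) {
    if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
      continue;
    }
    if ('id' in entry && typeof entry.id !== 'string') {
      continue;
    }
    const alias = `json:${JSON.stringify(entry)}`;
    if (seenEntries.has(alias)) {
      continue;
    }
    seenEntries.add(alias);
    mergedEntries.push(entry);
  }
  return mergedEntries;
}

const merged = mergeHookEntries(
  [{ matcher: '*', hooks: [] }],
  [{ matcher: '*', hooks: [] }, { matcher: 'Bash', hooks: [] }]
);
```

Existing entries win: because they are iterated first, a duplicate incoming entry is the one that gets skipped.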
@@ -47,6 +129,7 @@ function buildMergedSettings(plan) {
return null;
}
const pluginRoot = plan.targetRoot;
const hooksDestinationPath = path.join(plan.targetRoot, 'hooks', 'hooks.json');
const hooksSourcePath = findHooksSourcePath(plan, hooksDestinationPath) || hooksDestinationPath;
if (!fs.existsSync(hooksSourcePath)) {
@@ -54,7 +137,7 @@ function buildMergedSettings(plan) {
}
const hooksConfig = readJsonObject(hooksSourcePath, 'hooks config');
const incomingHooks = hooksConfig.hooks;
const incomingHooks = replacePluginRootPlaceholders(hooksConfig.hooks, pluginRoot);
if (!incomingHooks || typeof incomingHooks !== 'object' || Array.isArray(incomingHooks)) {
throw new Error(`Invalid hooks config at ${hooksSourcePath}: expected "hooks" to be a JSON object`);
}
@@ -73,7 +156,7 @@ function buildMergedSettings(plan) {
for (const [eventName, incomingEntries] of Object.entries(incomingHooks)) {
const currentEntries = Array.isArray(existingHooks[eventName]) ? existingHooks[eventName] : [];
const nextEntries = Array.isArray(incomingEntries) ? incomingEntries : [];
mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries);
mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries, pluginRoot);
}
const mergedSettings = {
@@ -84,6 +167,11 @@ function buildMergedSettings(plan) {
return {
settingsPath,
mergedSettings,
hooksDestinationPath,
resolvedHooksConfig: {
...hooksConfig,
hooks: incomingHooks,
},
};
}
@@ -96,6 +184,12 @@ function applyInstallPlan(plan) {
}
if (mergedSettingsPlan) {
fs.mkdirSync(path.dirname(mergedSettingsPlan.hooksDestinationPath), { recursive: true });
fs.writeFileSync(
mergedSettingsPlan.hooksDestinationPath,
JSON.stringify(mergedSettingsPlan.resolvedHooksConfig, null, 2) + '\n',
'utf8'
);
fs.mkdirSync(path.dirname(mergedSettingsPlan.settingsPath), { recursive: true });
fs.writeFileSync(
mergedSettingsPlan.settingsPath,

View File

@@ -0,0 +1,175 @@
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const { spawnSync } = require('child_process');
const { getClaudeDir, ensureDir, sanitizeSessionId } = require('./utils');
function getHomunculusDir() {
return path.join(getClaudeDir(), 'homunculus');
}
function getProjectsDir() {
return path.join(getHomunculusDir(), 'projects');
}
function getProjectRegistryPath() {
return path.join(getHomunculusDir(), 'projects.json');
}
function readProjectRegistry() {
try {
return JSON.parse(fs.readFileSync(getProjectRegistryPath(), 'utf8'));
} catch {
return {};
}
}
function runGit(args, cwd) {
const result = spawnSync('git', args, {
cwd,
encoding: 'utf8',
stdio: ['ignore', 'pipe', 'ignore']
});
if (result.status !== 0) return '';
return (result.stdout || '').trim();
}
function stripRemoteCredentials(remoteUrl) {
if (!remoteUrl) return '';
return String(remoteUrl).replace(/:\/\/[^@]+@/, '://');
}
function resolveProjectRoot(cwd = process.cwd()) {
const envRoot = process.env.CLAUDE_PROJECT_DIR;
if (envRoot && fs.existsSync(envRoot)) {
return path.resolve(envRoot);
}
const gitRoot = runGit(['rev-parse', '--show-toplevel'], cwd);
if (gitRoot) return path.resolve(gitRoot);
return '';
}
function computeProjectId(projectRoot) {
const remoteUrl = stripRemoteCredentials(runGit(['remote', 'get-url', 'origin'], projectRoot));
return crypto.createHash('sha256').update(remoteUrl || projectRoot).digest('hex').slice(0, 12);
}
function resolveProjectContext(cwd = process.cwd()) {
const projectRoot = resolveProjectRoot(cwd);
if (!projectRoot) {
const projectDir = getHomunculusDir();
ensureDir(projectDir);
return { projectId: 'global', projectRoot: '', projectDir, isGlobal: true };
}
const registry = readProjectRegistry();
const registryEntry = Object.values(registry).find(entry => entry && path.resolve(entry.root || '') === projectRoot);
const projectId = registryEntry?.id || computeProjectId(projectRoot);
const projectDir = path.join(getProjectsDir(), projectId);
ensureDir(projectDir);
return { projectId, projectRoot, projectDir, isGlobal: false };
}
function getObserverPidFile(context) {
return path.join(context.projectDir, '.observer.pid');
}
function getObserverSignalCounterFile(context) {
return path.join(context.projectDir, '.observer-signal-counter');
}
function getObserverActivityFile(context) {
return path.join(context.projectDir, '.observer-last-activity');
}
function getSessionLeaseDir(context) {
return path.join(context.projectDir, '.observer-sessions');
}
function resolveSessionId(rawSessionId = process.env.CLAUDE_SESSION_ID) {
return sanitizeSessionId(rawSessionId || '') || '';
}
function getSessionLeaseFile(context, rawSessionId = process.env.CLAUDE_SESSION_ID) {
const sessionId = resolveSessionId(rawSessionId);
if (!sessionId) return '';
return path.join(getSessionLeaseDir(context), `${sessionId}.json`);
}
function writeSessionLease(context, rawSessionId = process.env.CLAUDE_SESSION_ID, extra = {}) {
const leaseFile = getSessionLeaseFile(context, rawSessionId);
if (!leaseFile) return '';
ensureDir(getSessionLeaseDir(context));
const payload = {
sessionId: resolveSessionId(rawSessionId),
cwd: process.cwd(),
pid: process.pid,
updatedAt: new Date().toISOString(),
...extra
};
fs.writeFileSync(leaseFile, JSON.stringify(payload, null, 2) + '\n');
return leaseFile;
}
function removeSessionLease(context, rawSessionId = process.env.CLAUDE_SESSION_ID) {
const leaseFile = getSessionLeaseFile(context, rawSessionId);
if (!leaseFile) return false;
try {
fs.rmSync(leaseFile, { force: true });
return true;
} catch {
return false;
}
}
function listSessionLeases(context) {
const leaseDir = getSessionLeaseDir(context);
if (!fs.existsSync(leaseDir)) return [];
return fs.readdirSync(leaseDir)
.filter(name => name.endsWith('.json'))
.map(name => path.join(leaseDir, name));
}
function stopObserverForContext(context) {
const pidFile = getObserverPidFile(context);
if (!fs.existsSync(pidFile)) return false;
const pid = (fs.readFileSync(pidFile, 'utf8') || '').trim();
if (!/^[0-9]+$/.test(pid) || pid === '0' || pid === '1') {
fs.rmSync(pidFile, { force: true });
return false;
}
try {
process.kill(Number(pid), 0);
} catch {
fs.rmSync(pidFile, { force: true });
return false;
}
try {
process.kill(Number(pid), 'SIGTERM');
} catch {
return false;
}
fs.rmSync(pidFile, { force: true });
fs.rmSync(getObserverSignalCounterFile(context), { force: true });
return true;
}
module.exports = {
resolveProjectContext,
getObserverActivityFile,
getObserverPidFile,
getSessionLeaseDir,
writeSessionLease,
removeSessionLease,
listSessionLeases,
stopObserverForContext,
resolveSessionId
};
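The pid-file hygiene in `stopObserverForContext` above rejects junk before ever signalling. The validation step on its own:

```javascript
// A pid string is only worth signalling if it is purely numeric and
// not 0 (process-group broadcast) or 1 (init) — mirroring the guard
// in stopObserverForContext before the kill(pid, 0) liveness probe.
function isKillablePid(pidText) {
  const pid = String(pidText || '').trim();
  return /^[0-9]+$/.test(pid) && pid !== '0' && pid !== '1';
}
```

Rejecting `0` and `1` matters on POSIX systems: `kill(0, …)` signals the whole process group and `kill(1, …)` targets init, so a corrupted pid file must never reach either.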

View File

@@ -37,8 +37,8 @@ function createSkillObservation(input) {
? input.skill.path.trim()
: null;
const success = Boolean(input.success);
const error = input.error == null ? null : String(input.error);
const feedback = input.feedback == null ? null : String(input.feedback);
const error = input.error === null || input.error === undefined ? null : String(input.error);
const feedback = input.feedback === null || input.feedback === undefined ? null : String(input.feedback);
const variant = typeof input.variant === 'string' && input.variant.trim().length > 0
? input.variant.trim()
: 'baseline';

View File

@@ -25,6 +25,10 @@ const WINDOWS_RESERVED_SESSION_IDS = new Set([
* Get the user's home directory (cross-platform)
*/
function getHomeDir() {
const explicitHome = process.env.HOME || process.env.USERPROFILE;
if (explicitHome && explicitHome.trim().length > 0) {
return path.resolve(explicitHome);
}
return os.homedir();
}

View File

@@ -27,8 +27,11 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
AGENTS_FILE="$CODEX_HOME/AGENTS.md"
AGENTS_ROOT_SRC="$REPO_ROOT/AGENTS.md"
AGENTS_CODEX_SUPP_SRC="$REPO_ROOT/.codex/AGENTS.md"
CODEX_AGENTS_SRC="$REPO_ROOT/.codex/agents"
CODEX_AGENTS_DEST="$CODEX_HOME/agents"
PROMPTS_SRC="$REPO_ROOT/commands"
PROMPTS_DEST="$CODEX_HOME/prompts"
BASELINE_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-codex-config.js"
HOOKS_INSTALLER="$REPO_ROOT/scripts/codex/install-global-git-hooks.sh"
SANITY_CHECKER="$REPO_ROOT/scripts/codex/check-codex-global-state.sh"
CURSOR_RULES_DIR="$REPO_ROOT/.cursor/rules"
@@ -106,7 +109,23 @@ extract_toml_value() {
extract_context7_key() {
local file="$1"
grep -oP -- '--key",[[:space:]]*"\K[^"]+' "$file" | head -n 1 || true
node - "$file" <<'EOF'
const fs = require('fs');
const filePath = process.argv[2];
let source = '';
try {
source = fs.readFileSync(filePath, 'utf8');
} catch {
process.exit(0);
}
const match = source.match(/--key",\s*"([^"]+)"/);
if (match && match[1]) {
process.stdout.write(`${match[1]}\n`);
}
EOF
}
generate_prompt_file() {
@@ -130,7 +149,9 @@ MCP_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-mcp-config.js"
require_path "$REPO_ROOT/AGENTS.md" "ECC AGENTS.md"
require_path "$AGENTS_CODEX_SUPP_SRC" "ECC Codex AGENTS supplement"
require_path "$CODEX_AGENTS_SRC" "ECC Codex agent roles"
require_path "$PROMPTS_SRC" "ECC commands directory"
require_path "$BASELINE_MERGE_SCRIPT" "ECC Codex baseline merge script"
require_path "$HOOKS_INSTALLER" "ECC global git hooks installer"
require_path "$SANITY_CHECKER" "ECC global sanity checker"
require_path "$CURSOR_RULES_DIR" "ECC Cursor rules directory"
@@ -231,6 +252,26 @@ else
fi
fi
log "Merging ECC Codex baseline into $CONFIG_FILE (add-only, preserving user config)"
if [[ "$MODE" == "dry-run" ]]; then
node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE" --dry-run
else
node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE"
fi
log "Syncing sample Codex agent role files"
run_or_echo mkdir -p "$CODEX_AGENTS_DEST"
for agent_file in "$CODEX_AGENTS_SRC"/*.toml; do
[[ -f "$agent_file" ]] || continue
agent_name="$(basename "$agent_file")"
dest="$CODEX_AGENTS_DEST/$agent_name"
if [[ -e "$dest" ]]; then
log "Keeping existing Codex agent role file: $dest"
else
run_or_echo cp "$agent_file" "$dest"
fi
done
# Skills are NOT synced here — Codex CLI reads directly from
# ~/.agents/skills/ (installed by ECC installer / npx skills).
# Copying into ~/.codex/skills/ was unnecessary.

View File

@@ -6,7 +6,7 @@ origin: ECC
# Article Writing
Write long-form content that sounds like a real person or brand, not generic AI output.
Write long-form content that sounds like an actual person with a point of view, not an LLM smoothing itself into paste.
## When to Activate
@@ -17,69 +17,63 @@ Write long-form content that sounds like a real person or brand, not generic AI
## Core Rules
1. Lead with the concrete thing: example, output, anecdote, number, screenshot description, or code block.
1. Lead with the concrete thing: artifact, example, output, anecdote, number, screenshot, or code.
2. Explain after the example, not before.
3. Prefer short, direct sentences over padded ones.
4. Use specific numbers when available and sourced.
5. Never invent biographical facts, company metrics, or customer evidence.
3. Keep sentences tight unless the source voice is intentionally expansive.
4. Use proof instead of adjectives.
5. Never invent facts, credibility, or customer evidence.
## Voice Capture Workflow
## Voice Handling
If the user wants a specific voice, collect one or more of:
- published articles
- newsletters
- X / LinkedIn posts
- docs or memos
- a short style guide
If the user wants a specific voice, run `brand-voice` first and reuse its `VOICE PROFILE`.
Do not run a second style-analysis pass here unless the user explicitly asks for one.
Then extract:
- sentence length and rhythm
- whether the voice is formal, conversational, or sharp
- favored rhetorical devices such as parentheses, lists, fragments, or questions
- tolerance for humor, opinion, and contrarian framing
- formatting habits such as headers, bullets, code blocks, and pull quotes
If no voice references are given, default to a direct, operator-style voice: concrete, practical, and low on hype.
If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.
## Banned Patterns
Delete and rewrite any of these:
- generic openings like "In today's rapidly evolving landscape"
- filler transitions such as "Moreover" and "Furthermore"
- hype phrases like "game-changer", "cutting-edge", or "revolutionary"
- vague claims without evidence
- biography or credibility claims not backed by provided context
- "In today's rapidly evolving landscape"
- "game-changer", "cutting-edge", "revolutionary"
- "here's why this matters" as a standalone bridge
- fake vulnerability arcs
- a closing question added only to juice engagement
- biography padding that does not move the argument
- generic AI throat-clearing that delays the point
## Writing Process
1. Clarify the audience and purpose.
2. Build a skeletal outline with one purpose per section.
3. Start each section with evidence, example, or scene.
4. Expand only where the next sentence earns its place.
5. Remove anything that sounds templated or self-congratulatory.
2. Build a hard outline with one job per section.
3. Start sections with proof, artifact, conflict, or example.
4. Expand only where the next sentence earns space.
5. Cut anything that sounds templated, overexplained, or self-congratulatory.
## Structure Guidance
### Technical Guides
- open with what the reader gets
- use code or terminal examples in every major section
- end with concrete takeaways, not a soft summary
### Essays / Opinion Pieces
- start with tension, contradiction, or a sharp observation
- open with what the reader gets
- use code, commands, screenshots, or concrete output in major sections
- end with actionable takeaways, not a soft recap
### Essays / Opinion
- start with tension, contradiction, or a specific observation
- keep one argument thread per section
- use examples that earn the opinion
- make opinions answer to evidence
### Newsletters
- keep the first screen strong
- mix insight with updates, not diary filler
- use clear section labels and easy skim structure
- keep the first screen doing real work
- do not front-load diary filler
- use section labels only when they improve scannability
## Quality Gate
Before delivering:
- verify factual claims against provided sources
- remove filler and corporate language
- confirm the voice matches the supplied examples
- ensure every section adds new information
- check formatting for the intended platform
- factual claims are backed by provided sources
- generic AI transitions are gone
- the voice matches the supplied examples or the agreed `VOICE PROFILE`
- every section adds something new
- formatting matches the intended medium

View File

@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---
# Brand Voice
Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.
## When to Activate
- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry
## Source Priority
Use the strongest real source set available, in this order:
1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy
Do not use generic platform exemplars as source material.
## Collection Workflow
1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
## What to Extract
- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
## Output Contract
Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).
Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults
If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:
- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over
## Hard Bans
Delete and rewrite any of these:
- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals
## Persistence Rules
- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.
## Downstream Use
Use this skill before or inside:
- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email
If another skill already has a partial voice capture section, this skill is the canonical source of truth.

View File

@@ -0,0 +1,55 @@
# Voice Profile Schema
Use this exact structure when building a reusable voice profile:
```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:
Source Set
- source 1
- source 2
- source 3
Rhythm
- short note on sentence length, pacing, and fragmentation
Compression
- how dense or explanatory the writing is
Capitalization
- conventional, mixed, or situational
Parentheticals
- how they are used and how they are not used
Question Use
- rare, frequent, rhetorical, direct, or mostly absent
Claim Style
- how claims are framed, supported, and sharpened
Preferred Moves
- concrete moves the author does use
Banned Moves
- specific patterns the author does not use
CTA Rules
- how, when, or whether to close with asks
Channel Notes
- X:
- LinkedIn:
- Email:
```
Guidelines:
- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.

Some files were not shown because too many files have changed in this diff.