1 Commits

Author: Affaan Mustafa
SHA1: 63410afcad
Date: 2026-03-24 03:54:15 -07:00
Message: feat(ecc2): add token/cost meter widget (#775)
- TokenMeter widget using ratatui Gauge with color gradient (green->yellow->red)
- Budget fields (cost_budget_usd, token_budget) in Config
- Aggregate cost display in status bar
- Warning state at 80%+ budget consumption
- Tests for gradient, config fallback, and meter rendering
855 changed files with 7243 additions and 111773 deletions

View File

@@ -1,21 +0,0 @@
{
  "name": "ecc",
  "interface": {
    "displayName": "Everything Claude Code"
  },
  "plugins": [
    {
      "name": "ecc",
      "version": "2.0.0-rc.1",
      "source": {
        "source": "local",
        "path": "../.."
      },
      "policy": {
        "installation": "AVAILABLE",
        "authentication": "ON_INSTALL"
      },
      "category": "Productivity"
    }
  ]
}

View File

@@ -1,152 +0,0 @@
---
name: agent-introspection-debugging
description: Structured self-debugging workflow for AI agent failures using capture, diagnosis, contained recovery, and introspection reports.
---
# Agent Introspection Debugging
Use this skill when an agent run is failing repeatedly, consuming tokens without progress, looping on the same tools, or drifting away from the intended task.
This is a workflow skill, not a hidden runtime. It teaches the agent to debug itself systematically before escalating to a human.
## When to Activate
- Maximum tool call / loop-limit failures
- Repeated retries with no forward progress
- Context growth or prompt drift that starts degrading output quality
- File-system or environment state mismatch between expectation and reality
- Tool failures that are likely recoverable with diagnosis and a smaller corrective action
## Scope Boundaries
Activate this skill for:
- capturing failure state before retrying blindly
- diagnosing common agent-specific failure patterns
- applying contained recovery actions
- producing a structured human-readable debug report
Do not use this skill as the primary source for:
- feature verification after code changes; use `verification-loop`
- framework-specific debugging when a narrower ECC skill already exists
- runtime promises the current harness cannot enforce automatically
## Four-Phase Loop
### Phase 1: Failure Capture
Before trying to recover, record the failure precisely.
Capture:
- error type, message, and stack trace when available
- last meaningful tool call sequence
- what the agent was trying to do
- current context pressure: repeated prompts, oversized pasted logs, duplicated plans, or runaway notes
- current environment assumptions: cwd, branch, relevant service state, expected files
Minimum capture template:
```markdown
## Failure Capture
- Session / task:
- Goal in progress:
- Error:
- Last successful step:
- Last failed tool / command:
- Repeated pattern seen:
- Environment assumptions to verify:
```
### Phase 2: Root-Cause Diagnosis
Match the failure to a known pattern before changing anything.
| Pattern | Likely Cause | Check |
| --- | --- | --- |
| Maximum tool calls / repeated same command | loop or no-exit observer path | inspect the last N tool calls for repetition |
| Context overflow / degraded reasoning | unbounded notes, repeated plans, oversized logs | inspect recent context for duplication and low-signal bulk |
| `ECONNREFUSED` / timeout | service unavailable or wrong port | verify service health, URL, and port assumptions |
| `429` / quota exhaustion | retry storm or missing backoff | count repeated calls and inspect retry spacing |
| file missing after write / stale diff | race, wrong cwd, or branch drift | re-check path, cwd, git status, and actual file existence |
| tests still failing after “fix” | wrong hypothesis | isolate the exact failing test and re-derive the bug |
Diagnosis questions:
- is this a logic failure, state failure, environment failure, or policy failure?
- did the agent lose the real objective and start optimizing the wrong subtask?
- is the failure deterministic or transient?
- what is the smallest reversible action that would validate the diagnosis?
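A single script can answer several of these questions at once. A minimal sketch of a discriminating check; the port, health path, and git usage are assumptions to adapt to the actual environment:
```python
import subprocess
import urllib.error
import urllib.request

def sh(cmd: list[str]) -> str:
    """Run a command and return its trimmed stdout."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# Observe world state directly instead of trusting remembered state.
print("cwd:   ", sh(["pwd"]))
print("branch:", sh(["git", "branch", "--show-current"]))
print("dirty: ", sh(["git", "status", "--short"]) or "(clean)")

# Check the service assumption behind ECONNREFUSED / timeout failures.
try:
    with urllib.request.urlopen("http://localhost:3000/health", timeout=3) as resp:
        print("service:", resp.status)
except (urllib.error.URLError, OSError) as exc:
    print("service: unreachable ->", exc)
```
If the script contradicts the agent's assumptions, the diagnosis is a state or environment failure, not a logic failure.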
### Phase 3: Contained Recovery
Recover with the smallest action that changes the diagnosis surface.
Safe recovery actions:
- stop repeated retries and restate the hypothesis
- trim low-signal context and keep only the active goal, blockers, and evidence
- re-check the actual filesystem / branch / process state
- narrow the task to one failing command, one file, or one test
- switch from speculative reasoning to direct observation
- escalate to a human when the failure is high-risk or externally blocked
Do not claim unsupported auto-healing actions like “reset agent state” or “update harness config” unless you are actually doing them through real tools in the current environment.
Contained recovery checklist:
```markdown
## Recovery Action
- Diagnosis chosen:
- Smallest action taken:
- Why this is safe:
- What evidence would prove the fix worked:
```
### Phase 4: Introspection Report
End with a report that makes the recovery legible to the next agent or human.
```markdown
## Agent Self-Debug Report
- Session / task:
- Failure:
- Root cause:
- Recovery action:
- Result: success | partial | blocked
- Token / time burn risk:
- Follow-up needed:
- Preventive change to encode later:
```
## Recovery Heuristics
Prefer these interventions in order:
1. Restate the real objective in one sentence.
2. Verify the world state instead of trusting memory.
3. Shrink the failing scope.
4. Run one discriminating check.
5. Only then retry.
Bad pattern:
- retrying the same action three times with slightly different wording
Good pattern:
- capture failure
- classify the pattern
- run one direct check
- change the plan only if the check supports it
## Integration with ECC
- Use `verification-loop` after recovery if code was changed.
- Use `continuous-learning-v2` when the failure pattern is worth turning into an instinct or later skill.
- Use `council` when the issue is not technical failure but decision ambiguity.
- Use `workspace-surface-audit` if the failure came from conflicting local state or repo drift.
## Output Standard
When this skill is active, do not end with “I fixed it” alone.
Always provide:
- the failure pattern
- the root-cause hypothesis
- the recovery action
- the evidence that the situation is now better or still blocked

View File

@@ -1,7 +0,0 @@
interface:
  display_name: "Agent Introspection Debugging"
  short_description: "Structured self-debugging for AI agent failures"
  brand_color: "#0EA5E9"
  default_prompt: "Use $agent-introspection-debugging to diagnose and recover from an AI agent failure."
policy:
  allow_implicit_invocation: true

View File

@@ -1,214 +0,0 @@
---
name: agent-sort
description: Build an evidence-backed ECC install plan for a specific repo by sorting skills, commands, rules, hooks, and extras into DAILY vs LIBRARY buckets using parallel repo-aware review passes. Use when ECC should be trimmed to what a project actually needs instead of loading the full bundle.
---
# Agent Sort
Use this skill when a repo needs a project-specific ECC surface instead of the default full install.
The goal is not to guess what "feels useful." The goal is to classify ECC components with evidence from the actual codebase.
## When to Use
- A project only needs a subset of ECC and full installs are too noisy
- The repo stack is clear, but nobody wants to hand-curate skills one by one
- A team wants a repeatable install decision backed by grep evidence instead of opinion
- You need to separate always-loaded daily workflow surfaces from searchable library/reference surfaces
- A repo has drifted into the wrong language, rule, or hook set and needs cleanup
## Non-Negotiable Rules
- Use the current repository as the source of truth, not generic preferences
- Every DAILY decision must cite concrete repo evidence
- LIBRARY does not mean "delete"; it means "keep accessible without loading by default"
- Do not install hooks, rules, or scripts that the current repo cannot use
- Prefer ECC-native surfaces; do not introduce a second install system
## Outputs
Produce these artifacts in order:
1. DAILY inventory
2. LIBRARY inventory
3. install plan
4. verification report
5. optional `skill-library` router if the project wants one
## Classification Model
Use two buckets only:
- `DAILY`
- should load every session for this repo
- strongly matched to the repo's language, framework, workflow, or operator surface
- `LIBRARY`
- useful to retain, but not worth loading by default
- should remain reachable through search, router skill, or selective manual use
## Evidence Sources
Use repo-local evidence before making any classification:
- file extensions
- package managers and lockfiles
- framework configs
- CI and hook configs
- build/test scripts
- imports and dependency manifests
- repo docs that explicitly describe the stack
Useful commands include:
```bash
rg --files
rg -n "typescript|react|next|supabase|django|spring|flutter|swift"
cat package.json
cat pyproject.toml
cat Cargo.toml
cat pubspec.yaml
cat go.mod
```
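If `rg` is unavailable, a rough Python equivalent for the file-extension evidence (a sketch; the skip list is illustrative, not canonical):
```python
from collections import Counter
from pathlib import Path

SKIP = {".git", "node_modules", ".venv"}  # illustrative skip list, extend per repo

# Tally extensions across the repo to ground stack claims in counts.
counts = Counter(
    p.suffix or "(no ext)"
    for p in Path(".").rglob("*")
    if p.is_file() and not SKIP.intersection(p.parts)
)
for ext, n in counts.most_common(15):
    print(f"{ext}: {n}")
```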
## Parallel Review Passes
If parallel subagents are available, split the review into these passes:
1. Agents
- classify `agents/*`
2. Skills
- classify `skills/*`
3. Commands
- classify `commands/*`
4. Rules
- classify `rules/*`
5. Hooks and scripts
- classify hook surfaces, MCP health checks, helper scripts, and OS compatibility
6. Extras
- classify contexts, examples, MCP configs, templates, and guidance docs
If subagents are not available, run the same passes sequentially.
## Core Workflow
### 1. Read the repo
Establish the real stack before classifying anything:
- languages in use
- frameworks in use
- primary package manager
- test stack
- lint/format stack
- deployment/runtime surface
- operator integrations already present
### 2. Build the evidence table
For every candidate surface, record:
- component path
- component type
- proposed bucket
- repo evidence
- short justification
Use this format:
```text
skills/frontend-patterns | skill | DAILY | 84 .tsx files, next.config.ts present | core frontend stack
skills/django-patterns | skill | LIBRARY | no .py files, no pyproject.toml | not active in this repo
rules/typescript/* | rules | DAILY | package.json + tsconfig.json | active TS repo
rules/python/* | rules | LIBRARY | zero Python source files | keep accessible only
```
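A minimal sketch of emitting rows in that format from direct repo checks; the two components and their predicates are hypothetical examples, not a fixed catalog:
```python
from pathlib import Path

def row(component: str, kind: str, bucket: str, evidence: str, why: str) -> str:
    """One pipe-separated evidence-table row."""
    return f"{component} | {kind} | {bucket} | {evidence} | {why}"

tsx_count = len(list(Path(".").rglob("*.tsx")))
has_py = any(Path(".").rglob("*.py"))

# Hypothetical classification of two candidate surfaces:
print(row("skills/frontend-patterns", "skill",
          "DAILY" if tsx_count else "LIBRARY",
          f"{tsx_count} .tsx files", "frontend stack activity"))
print(row("rules/python/*", "rules",
          "DAILY" if has_py else "LIBRARY",
          "Python sources present" if has_py else "zero Python source files",
          "match rules to active languages"))
```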
### 3. Decide DAILY vs LIBRARY
Promote to `DAILY` when:
- the repo clearly uses the matching stack
- the component is general enough to help every session
- the repo already depends on the corresponding runtime or workflow
Demote to `LIBRARY` when:
- the component is off-stack
- the repo might need it later, but not every day
- it adds context overhead without immediate relevance
### 4. Build the install plan
Translate the classification into action:
- DAILY skills -> install or keep in `.claude/skills/`
- DAILY commands -> keep as explicit shims only if still useful
- DAILY rules -> install only matching language sets
- DAILY hooks/scripts -> keep only compatible ones
- LIBRARY surfaces -> keep accessible through search or `skill-library`
If the repo already uses selective installs, update that plan instead of creating another system.
### 5. Create the optional library router
If the project wants a searchable library surface, create:
- `.claude/skills/skill-library/SKILL.md`
That router should contain:
- a short explanation of DAILY vs LIBRARY
- grouped trigger keywords
- where the library references live
Do not duplicate every skill body inside the router.
### 6. Verify the result
After the plan is applied, verify:
- every DAILY file exists where expected
- stale language rules were not left active
- incompatible hooks were not installed
- the resulting install actually matches the repo stack
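The existence check can be mechanical. A minimal sketch, assuming a DAILY inventory of paths (placeholders here, substitute the real install plan):
```python
from pathlib import Path

# Placeholder inventory; substitute the paths from the actual install plan.
daily = [
    ".claude/skills/frontend-patterns/SKILL.md",
    ".claude/rules/typescript",
]
missing = [p for p in daily if not Path(p).exists()]
print(f"DAILY present: {len(daily) - len(missing)}/{len(daily)}")
for p in missing:
    print("missing:", p)
```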
Return a compact report with:
- DAILY count
- LIBRARY count
- removed stale surfaces
- open questions
## Handoffs
If the next step is interactive installation or repair, hand off to:
- `configure-ecc`
If the next step is overlap cleanup or catalog review, hand off to:
- `skill-stocktake`
If the next step is broader context trimming, hand off to:
- `strategic-compact`
## Output Format
Return the result in this order:
```text
STACK
- language/framework/runtime summary
DAILY
- always-loaded items with evidence
LIBRARY
- searchable/reference items with evidence
INSTALL PLAN
- what should be installed, removed, or routed
VERIFICATION
- checks run and remaining gaps
```

View File

@@ -1,7 +0,0 @@
interface:
  display_name: "Agent Sort"
  short_description: "Evidence-backed ECC install planning"
  brand_color: "#0EA5E9"
  default_prompt: "Use $agent-sort to build an evidence-backed ECC install plan."
policy:
  allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: api-design
 description: REST API design patterns including resource naming, status codes, pagination, filtering, error responses, versioning, and rate limiting for production APIs.
+origin: ECC
 ---
 # API Design Patterns

View File

@@ -2,6 +2,6 @@ interface:
   display_name: "API Design"
   short_description: "REST API design patterns and best practices"
   brand_color: "#F97316"
-  default_prompt: "Use $api-design to design production REST API resources and responses."
+  default_prompt: "Design REST API: resources, status codes, pagination"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,11 +1,12 @@
 ---
 name: article-writing
 description: Write articles, guides, blog posts, tutorials, newsletter issues, and other long-form content in a distinctive voice derived from supplied examples or brand guidance. Use when the user wants polished written content longer than a paragraph, especially when voice consistency, structure, and credibility matter.
+origin: ECC
 ---
 # Article Writing
-Write long-form content that sounds like an actual person with a point of view, not an LLM smoothing itself into paste.
+Write long-form content that sounds like a real person or brand, not generic AI output.
 ## When to Activate
@@ -16,63 +17,69 @@ Write long-form content that sounds like an actual person with a point of view,
 ## Core Rules
-1. Lead with the concrete thing: artifact, example, output, anecdote, number, screenshot, or code.
+1. Lead with the concrete thing: example, output, anecdote, number, screenshot description, or code block.
 2. Explain after the example, not before.
-3. Keep sentences tight unless the source voice is intentionally expansive.
-4. Use proof instead of adjectives.
-5. Never invent facts, credibility, or customer evidence.
+3. Prefer short, direct sentences over padded ones.
+4. Use specific numbers when available and sourced.
+5. Never invent biographical facts, company metrics, or customer evidence.
-## Voice Handling
-If the user wants a specific voice, run `brand-voice` first and reuse its `VOICE PROFILE`.
-Do not duplicate a second style-analysis pass here unless the user explicitly asks for one.
-If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.
+## Voice Capture Workflow
+If the user wants a specific voice, collect one or more of:
+- published articles
+- newsletters
+- X / LinkedIn posts
+- docs or memos
+- a short style guide
+Then extract:
+- sentence length and rhythm
+- whether the voice is formal, conversational, or sharp
+- favored rhetorical devices such as parentheses, lists, fragments, or questions
+- tolerance for humor, opinion, and contrarian framing
+- formatting habits such as headers, bullets, code blocks, and pull quotes
+If no voice references are given, default to a direct, operator-style voice: concrete, practical, and low on hype.
 ## Banned Patterns
 Delete and rewrite any of these:
-- "In today's rapidly evolving landscape"
-- "game-changer", "cutting-edge", "revolutionary"
-- "here's why this matters" as a standalone bridge
-- fake vulnerability arcs
-- a closing question added only to juice engagement
-- biography padding that does not move the argument
-- generic AI throat-clearing that delays the point
+- generic openings like "In today's rapidly evolving landscape"
+- filler transitions such as "Moreover" and "Furthermore"
+- hype phrases like "game-changer", "cutting-edge", or "revolutionary"
+- vague claims without evidence
+- biography or credibility claims not backed by provided context
 ## Writing Process
 1. Clarify the audience and purpose.
-2. Build a hard outline with one job per section.
-3. Start sections with proof, artifact, conflict, or example.
-4. Expand only where the next sentence earns space.
-5. Cut anything that sounds templated, overexplained, or self-congratulatory.
+2. Build a skeletal outline with one purpose per section.
+3. Start each section with evidence, example, or scene.
+4. Expand only where the next sentence earns its place.
+5. Remove anything that sounds templated or self-congratulatory.
 ## Structure Guidance
 ### Technical Guides
 - open with what the reader gets
-- use code, commands, screenshots, or concrete output in major sections
-- end with actionable takeaways, not a soft recap
-### Essays / Opinion
-- start with tension, contradiction, or a sharp observation
+- use code or terminal examples in every major section
+- end with concrete takeaways, not a soft summary
+### Essays / Opinion Pieces
+- start with tension, contradiction, or a specific observation
 - keep one argument thread per section
-- make opinions answer to evidence
+- use examples that earn the opinion
 ### Newsletters
-- keep the first screen doing real work
-- do not front-load diary filler
-- use section labels only when they improve scanability
+- keep the first screen strong
+- mix insight with updates, not diary filler
+- use clear section labels and easy skim structure
 ## Quality Gate
 Before delivering:
-- factual claims are backed by provided sources
-- generic AI transitions are gone
-- the voice matches the supplied examples or the agreed `VOICE PROFILE`
-- every section adds something new
-- formatting matches the intended medium
+- verify factual claims against provided sources
+- remove filler and corporate language
+- confirm the voice matches the supplied examples
+- ensure every section adds new information
+- check formatting for the intended platform

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Article Writing"
-  short_description: "Long-form content in a supplied voice"
+  short_description: "Write long-form content in a supplied voice without sounding templated"
   brand_color: "#B45309"
-  default_prompt: "Use $article-writing to draft polished long-form content in the supplied voice."
+  default_prompt: "Draft a sharp long-form article from these notes and examples"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: backend-patterns
 description: Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes.
+origin: ECC
 ---
 # Backend Development Patterns
@@ -22,7 +23,7 @@ Backend architecture patterns and best practices for scalable server-side applic
 ### RESTful API Structure
 ```typescript
-// PASS: Resource-based URLs
+// Resource-based URLs
 GET /api/markets # List resources
 GET /api/markets/:id # Get single resource
 POST /api/markets # Create resource
@@ -30,7 +31,7 @@ PUT /api/markets/:id # Replace resource
 PATCH /api/markets/:id # Update resource
 DELETE /api/markets/:id # Delete resource
-// PASS: Query parameters for filtering, sorting, pagination
+// Query parameters for filtering, sorting, pagination
 GET /api/markets?status=active&sort=volume&limit=20&offset=0
 ```
@@ -130,7 +131,7 @@ export default withAuth(async (req, res) => {
 ### Query Optimization
 ```typescript
-// PASS: GOOD: Select only needed columns
+// GOOD: Select only needed columns
 const { data } = await supabase
   .from('markets')
   .select('id, name, status, volume')
@@ -138,7 +139,7 @@ const { data } = await supabase
   .order('volume', { ascending: false })
   .limit(10)
-// FAIL: BAD: Select everything
+// BAD: Select everything
 const { data } = await supabase
   .from('markets')
   .select('*')
@@ -147,13 +148,13 @@ const { data } = await supabase
 ### N+1 Query Prevention
 ```typescript
-// FAIL: BAD: N+1 query problem
+// BAD: N+1 query problem
 const markets = await getMarkets()
 for (const market of markets) {
   market.creator = await getUser(market.creator_id) // N queries
 }
-// PASS: GOOD: Batch fetch
+// GOOD: Batch fetch
 const markets = await getMarkets()
 const creatorIds = markets.map(m => m.creator_id)
 const creators = await getUsers(creatorIds) // 1 query

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Backend Patterns"
-  short_description: "API, database, and server-side patterns"
+  short_description: "API design, database, and server-side patterns"
   brand_color: "#F59E0B"
-  default_prompt: "Use $backend-patterns to apply backend architecture and API patterns."
+  default_prompt: "Apply backend patterns: API design, repository, caching"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,96 +0,0 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
---
# Brand Voice
Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.
## When to Activate
- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry
## Source Priority
Use the strongest real source set available, in this order:
1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy
Do not use generic platform exemplars as source material.
## Collection Workflow
1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
## What to Extract
- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
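The rhythm and question-frequency signals can be approximated mechanically before the qualitative read. A rough sketch; the naive sentence split is an assumption, not a robust segmenter:
```python
import re

def rhythm_stats(sample: str) -> dict:
    """Crude per-sample stats: sentence count, mean sentence length, question share."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", sample) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    questions = sum(s.endswith("?") for s in sentences)
    n = max(len(sentences), 1)
    return {
        "sentences": len(sentences),
        "mean_words": round(sum(lengths) / n, 1),
        "question_share": round(questions / n, 2),
    }

print(rhythm_stats("Ship the thing. Measure it. Did it work? Fix what broke."))
```
Numbers like these support the profile; they do not replace reading the sources.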
## Output Contract
Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).
Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults
If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:
- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over
## Hard Bans
Delete and rewrite any of these:
- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals
## Persistence Rules
- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.
## Downstream Use
Use this skill before or inside:
- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email
If another skill already has a partial voice capture section, this skill is the canonical source of truth.

View File

@@ -1,7 +0,0 @@
interface:
  display_name: "Brand Voice"
  short_description: "Source-derived writing style profiles"
  brand_color: "#0EA5E9"
  default_prompt: "Use $brand-voice to derive and reuse a source-grounded writing style."
policy:
  allow_implicit_invocation: true

View File

@@ -1,55 +0,0 @@
# Voice Profile Schema
Use this exact structure when building a reusable voice profile:
```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:
Source Set
- source 1
- source 2
- source 3
Rhythm
- short note on sentence length, pacing, and fragmentation
Compression
- how dense or explanatory the writing is
Capitalization
- conventional, mixed, or situational
Parentheticals
- how they are used and how they are not used
Question Use
- rare, frequent, rhetorical, direct, or mostly absent
Claim Style
- how claims are framed, supported, and sharpened
Preferred Moves
- concrete moves the author does use
Banned Moves
- specific patterns the author does not use
CTA Rules
- how, when, or whether to close with asks
Channel Notes
- X:
- LinkedIn:
- Email:
```
Guidelines:
- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.

View File

@@ -1,6 +1,7 @@
 ---
 name: bun-runtime
 description: Bun as runtime, package manager, bundler, and test runner. When to choose Bun vs Node, migration notes, and Vercel support.
+origin: ECC
 ---
 # Bun Runtime

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Bun Runtime"
-  short_description: "Bun runtime, package manager, and test runner"
+  short_description: "Bun as runtime, package manager, bundler, and test runner"
   brand_color: "#FBF0DF"
-  default_prompt: "Use $bun-runtime to choose and apply Bun runtime workflows."
+  default_prompt: "Use Bun for scripts, install, or run"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: claude-api
 description: Anthropic Claude API patterns for Python and TypeScript. Covers Messages API, streaming, tool use, vision, extended thinking, batches, prompt caching, and Claude Agent SDK. Use when building applications with the Claude API or Anthropic SDKs.
+origin: ECC
 ---
 # Claude API

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Claude API"
-  short_description: "Claude API patterns for Python and TypeScript"
+  short_description: "Anthropic Claude API patterns and SDKs"
   brand_color: "#D97706"
-  default_prompt: "Use $claude-api to build with Claude API and Anthropic SDK patterns."
+  default_prompt: "Build applications with the Claude API using Messages, tool use, streaming, and Agent SDK"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,17 +1,12 @@
 ---
 name: coding-standards
-description: Baseline cross-project coding conventions for naming, readability, immutability, and code-quality review. Use detailed frontend or backend skills for framework-specific patterns.
+description: Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development.
+origin: ECC
 ---
 # Coding Standards & Best Practices
-Baseline coding conventions applicable across projects.
-This skill is the shared floor, not the detailed framework playbook.
-- Use `frontend-patterns` for React, state, forms, rendering, and UI architecture.
-- Use `backend-patterns` or `api-design` for repository/service layers, endpoint design, validation, and server-specific concerns.
-- Use `rules/common/coding-style.md` when you need the shortest reusable rule layer instead of a full skill walkthrough.
+Universal coding standards applicable across all projects.
 ## When to Activate
@@ -22,19 +17,6 @@ This skill is the shared floor, not the detailed framework playbook.
 - Setting up linting, formatting, or type-checking rules
 - Onboarding new contributors to coding conventions
-## Scope Boundaries
-Activate this skill for:
-- descriptive naming
-- immutability defaults
-- readability, KISS, DRY, and YAGNI enforcement
-- error-handling expectations and code-smell review
-Do not use this skill as the primary source for:
-- React composition, hooks, or rendering patterns
-- backend architecture, API design, or database layering
-- domain-specific framework guidance when a narrower ECC skill already exists
 ## Code Quality Principles
 ### 1. Readability First
@@ -66,12 +48,12 @@ Do not use this skill as the primary source for:
 ### Variable Naming
 ```typescript
-// PASS: GOOD: Descriptive names
+// GOOD: Descriptive names
 const marketSearchQuery = 'election'
 const isUserAuthenticated = true
 const totalRevenue = 1000
-// FAIL: BAD: Unclear names
+// BAD: Unclear names
 const q = 'election'
 const flag = true
 const x = 1000
@@ -80,12 +62,12 @@ const x = 1000
 ### Function Naming
 ```typescript
-// PASS: GOOD: Verb-noun pattern
+// GOOD: Verb-noun pattern
 async function fetchMarketData(marketId: string) { }
 function calculateSimilarity(a: number[], b: number[]) { }
 function isValidEmail(email: string): boolean { }
-// FAIL: BAD: Unclear or noun-only
+// BAD: Unclear or noun-only
 async function market(id: string) { }
 function similarity(a, b) { }
 function email(e) { }
@@ -94,7 +76,7 @@ function email(e) { }
 ### Immutability Pattern (CRITICAL)
 ```typescript
-// PASS: ALWAYS use spread operator
+// ALWAYS use spread operator
 const updatedUser = {
   ...user,
   name: 'New Name'
@@ -102,7 +84,7 @@ const updatedUser = {
 const updatedArray = [...items, newItem]
-// FAIL: NEVER mutate directly
+// NEVER mutate directly
 user.name = 'New Name' // BAD
 items.push(newItem) // BAD
 ```
@@ -110,7 +92,7 @@ items.push(newItem) // BAD
 ### Error Handling
 ```typescript
-// PASS: GOOD: Comprehensive error handling
+// GOOD: Comprehensive error handling
 async function fetchData(url: string) {
   try {
     const response = await fetch(url)
@@ -126,7 +108,7 @@ async function fetchData(url: string) {
   }
 }
-// FAIL: BAD: No error handling
+// BAD: No error handling
 async function fetchData(url) {
   const response = await fetch(url)
   return response.json()
@@ -136,14 +118,14 @@ async function fetchData(url) {
 ### Async/Await Best Practices
 ```typescript
-// PASS: GOOD: Parallel execution when possible
+// GOOD: Parallel execution when possible
 const [users, markets, stats] = await Promise.all([
   fetchUsers(),
   fetchMarkets(),
   fetchStats()
 ])
-// FAIL: BAD: Sequential when unnecessary
+// BAD: Sequential when unnecessary
 const users = await fetchUsers()
 const markets = await fetchMarkets()
 const stats = await fetchStats()
@@ -152,7 +134,7 @@ const stats = await fetchStats()
 ### Type Safety
 ```typescript
-// PASS: GOOD: Proper types
+// GOOD: Proper types
 interface Market {
   id: string
   name: string
@@ -164,7 +146,7 @@ function getMarket(id: string): Promise<Market> {
   // Implementation
 }
-// FAIL: BAD: Using 'any'
+// BAD: Using 'any'
 function getMarket(id: any): Promise<any> {
   // Implementation
 }
@@ -175,7 +157,7 @@ function getMarket(id: any): Promise<any> {
 ### Component Structure
 ```typescript
-// PASS: GOOD: Functional component with types
+// GOOD: Functional component with types
 interface ButtonProps {
   children: React.ReactNode
   onClick: () => void
@@ -200,7 +182,7 @@ export function Button({
   )
 }
-// FAIL: BAD: No types, unclear structure
+// BAD: No types, unclear structure
 export function Button(props) {
   return <button onClick={props.onClick}>{props.children}</button>
 }
@@ -209,7 +191,7 @@ export function Button(props) {
 ### Custom Hooks
 ```typescript
-// PASS: GOOD: Reusable custom hook
+// GOOD: Reusable custom hook
 export function useDebounce<T>(value: T, delay: number): T {
   const [debouncedValue, setDebouncedValue] = useState<T>(value)
@@ -231,25 +213,25 @@ const debouncedQuery = useDebounce(searchQuery, 500)
 ### State Management
 ```typescript
-// PASS: GOOD: Proper state updates
+// GOOD: Proper state updates
 const [count, setCount] = useState(0)
 // Functional update for state based on previous state
 setCount(prev => prev + 1)
-// FAIL: BAD: Direct state reference
+// BAD: Direct state reference
 setCount(count + 1) // Can be stale in async scenarios
 ```
 ### Conditional Rendering
 ```typescript
-// PASS: GOOD: Clear conditional rendering
+// GOOD: Clear conditional rendering
 {isLoading && <Spinner />}
 {error && <ErrorMessage error={error} />}
 {data && <DataDisplay data={data} />}
-// FAIL: BAD: Ternary hell
+// BAD: Ternary hell
 {isLoading ? <Spinner /> : error ? <ErrorMessage error={error} /> : data ? <DataDisplay data={data} /> : null}
 ```
@@ -272,7 +254,7 @@ GET /api/markets?status=active&limit=10&offset=0
 ### Response Format
 ```typescript
-// PASS: GOOD: Consistent response structure
+// GOOD: Consistent response structure
 interface ApiResponse<T> {
   success: boolean
   data?: T
@@ -303,7 +285,7 @@ return NextResponse.json({
 ```typescript
 import { z } from 'zod'
-// PASS: GOOD: Schema validation
+// GOOD: Schema validation
 const CreateMarketSchema = z.object({
   name: z.string().min(1).max(200),
   description: z.string().min(1).max(2000),
@@ -366,14 +348,14 @@ types/market.types.ts # camelCase with .types suffix
 ### When to Comment
 ```typescript
-// PASS: GOOD: Explain WHY, not WHAT
+// GOOD: Explain WHY, not WHAT
 // Use exponential backoff to avoid overwhelming the API during outages
 const delay = Math.min(1000 * Math.pow(2, retryCount), 30000)
 // Deliberately using mutation here for performance with large arrays
 items.push(newItem)
-// FAIL: BAD: Stating the obvious
+// BAD: Stating the obvious
 // Increment counter by 1
 count++
@@ -413,12 +395,12 @@ export async function searchMarkets(
 ```typescript
 import { useMemo, useCallback } from 'react'
-// PASS: GOOD: Memoize expensive computations
+// GOOD: Memoize expensive computations
 const sortedMarkets = useMemo(() => {
   return markets.sort((a, b) => b.volume - a.volume)
 }, [markets])
-// PASS: GOOD: Memoize callbacks
+// GOOD: Memoize callbacks
 const handleSearch = useCallback((query: string) => {
   setSearchQuery(query)
 }, [])
@@ -429,7 +411,7 @@ const handleSearch = useCallback((query: string) => {
 ```typescript
 import { lazy, Suspense } from 'react'
-// PASS: GOOD: Lazy load heavy components
+// GOOD: Lazy load heavy components
 const HeavyChart = lazy(() => import('./HeavyChart'))
 export function Dashboard() {
@@ -444,13 +426,13 @@ export function Dashboard() {
 ### Database Queries
 ```typescript
-// PASS: GOOD: Select only needed columns
+// GOOD: Select only needed columns
 const { data } = await supabase
   .from('markets')
   .select('id, name, status')
   .limit(10)
-// FAIL: BAD: Select everything
+// BAD: Select everything
 const { data } = await supabase
   .from('markets')
   .select('*')
@@ -477,12 +459,12 @@ test('calculates similarity correctly', () => {
 ### Test Naming
 ```typescript
-// PASS: GOOD: Descriptive test names
+// GOOD: Descriptive test names
 test('returns empty array when no markets match query', () => { })
 test('throws error when OpenAI API key is missing', () => { })
 test('falls back to substring search when Redis unavailable', () => { })
-// FAIL: BAD: Vague test names
+// BAD: Vague test names
 test('works', () => { })
 test('test search', () => { })
 ```
@@ -493,12 +475,12 @@ Watch for these anti-patterns:
 ### 1. Long Functions
 ```typescript
-// FAIL: BAD: Function > 50 lines
+// BAD: Function > 50 lines
 function processMarketData() {
   // 100 lines of code
 }
-// PASS: GOOD: Split into smaller functions
+// GOOD: Split into smaller functions
 function processMarketData() {
   const validated = validateData()
   const transformed = transformData(validated)
@@ -508,7 +490,7 @@ function processMarketData() {
 ### 2. Deep Nesting
 ```typescript
-// FAIL: BAD: 5+ levels of nesting
+// BAD: 5+ levels of nesting
 if (user) {
   if (user.isAdmin) {
     if (market) {
@@ -521,7 +503,7 @@ if (user) {
   }
 }
-// PASS: GOOD: Early returns
+// GOOD: Early returns
 if (!user) return
 if (!user.isAdmin) return
 if (!market) return
@@ -533,11 +515,11 @@ if (!hasPermission) return
 ### 3. Magic Numbers
 ```typescript
-// FAIL: BAD: Unexplained numbers
+// BAD: Unexplained numbers
 if (retryCount > 3) { }
 setTimeout(callback, 500)
-// PASS: GOOD: Named constants
+// GOOD: Named constants
 const MAX_RETRIES = 3
 const DEBOUNCE_DELAY_MS = 500

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Coding Standards"
-  short_description: "Cross-project coding conventions and review"
+  short_description: "Universal coding standards and best practices"
   brand_color: "#3B82F6"
-  default_prompt: "Use $coding-standards to review code against cross-project standards."
+  default_prompt: "Apply standards: immutability, error handling, type safety"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,130 +1,88 @@
 ---
 name: content-engine
 description: Create platform-native content systems for X, LinkedIn, TikTok, YouTube, newsletters, and repurposed multi-platform campaigns. Use when the user wants social posts, threads, scripts, content calendars, or one source asset adapted cleanly across platforms.
+origin: ECC
 ---
 # Content Engine
-Build platform-native content without flattening the author's real voice into platform slop.
+Turn one idea into strong, platform-native content instead of posting the same thing everywhere.
 ## When to Activate
 - writing X posts or threads
 - drafting LinkedIn posts or launch updates
 - scripting short-form video or YouTube explainers
-- repurposing articles, podcasts, demos, docs, or internal notes into public content
-- building a launch sequence or ongoing content system around a product, insight, or narrative
+- repurposing articles, podcasts, demos, or docs into social content
+- building a lightweight content plan around a launch, milestone, or theme
-## Non-Negotiables
-1. Start from source material, not generic post formulas.
-2. Adapt the format for the platform, not the persona.
-3. One post should carry one actual claim.
-4. Specificity beats adjectives.
-5. No engagement bait unless the user explicitly asks for it.
-## Source-First Workflow
-Before drafting, identify the source set:
-- published articles
-- notes or internal memos
-- product demos
-- docs or changelogs
-- transcripts
-- screenshots
-- prior posts from the same author
-If the user wants a specific voice, build a voice profile from real examples before writing.
-Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
-## Voice Handling
-`brand-voice` is the canonical voice layer.
-Run it first when:
-- there are multiple downstream outputs
-- the user explicitly cares about writing style
-- the content is launch, outreach, or reputation-sensitive
-Reuse the resulting `VOICE PROFILE` here instead of rebuilding a second voice model.
-If the user wants Affaan / ECC voice specifically, still treat `brand-voice` as the source of truth and feed it the best live or source-derived material available.
-## Hard Bans
-Delete and rewrite any of these:
-- "In today's rapidly evolving landscape"
-- "game-changer", "revolutionary", "cutting-edge"
-- "here's why this matters" unless it is followed immediately by something concrete
-- ending with a LinkedIn-style question just to farm replies
-- forced casualness on LinkedIn
-- fake engagement padding that was not present in the source material
-## Platform Adaptation Rules
+## First Questions
+Clarify:
+- source asset: what are we adapting from
+- audience: builders, investors, customers, operators, or general audience
+- platform: X, LinkedIn, TikTok, YouTube, newsletter, or multi-platform
+- goal: awareness, conversion, recruiting, authority, launch support, or engagement
+## Core Rules
+1. Adapt for the platform. Do not cross-post the same copy.
+2. Hooks matter more than summaries.
+3. Every post should carry one clear idea.
+4. Use specifics over slogans.
+5. Keep the ask small and clear.
+## Platform Guidance
 ### X
-- open with the strongest claim, artifact, or tension
-- keep the compression if the source voice is compressed
-- if writing a thread, each post must advance the argument
-- do not pad with context the audience does not need
+- open fast
+- one idea per post or per tweet in a thread
+- keep links out of the main body unless necessary
+- avoid hashtag spam
 ### LinkedIn
-- expand only enough for people outside the immediate niche to follow
-- do not turn it into a fake lesson post unless the source material actually is reflective
-- no corporate inspiration cadence
-- no praise-stacking, no "journey" filler
-### Short Video
-- script around the visual sequence and proof points
-- first seconds should show the result, problem, or punch
-- do not write narration that sounds better on paper than on screen
+- strong first line
+- short paragraphs
+- more explicit framing around lessons, results, and takeaways
+### TikTok / Short Video
+- first 3 seconds must interrupt attention
+- script around visuals, not just narration
+- one demo, one claim, one CTA
 ### YouTube
-- show the result or tension early
-- organize by argument or progression, not filler sections
-- use chaptering only when it helps clarity
+- show the result early
+- structure by chapter
+- refresh the visual every 20-30 seconds
 ### Newsletter
-- open with the point, conflict, or artifact
-- do not spend the first paragraph warming up
-- every section needs to add something new
+- deliver one clear lens, not a bundle of unrelated items
+- make section titles skimmable
+- keep the opening paragraph doing real work
 ## Repurposing Flow
-1. Pick the anchor asset.
-2. Extract 3 to 7 atomic claims or scenes.
-3. Rank them by sharpness, novelty, and proof.
-4. Assign one strong idea per output.
-5. Adapt structure for each platform.
-6. Strip platform-shaped filler.
-7. Run the quality gate.
+Default cascade:
+1. anchor asset: article, video, demo, memo, or launch doc
+2. extract 3-7 atomic ideas
+3. write platform-native variants
+4. trim repetition across outputs
+5. align CTAs with platform intent
 ## Deliverables
 When asked for a campaign, return:
-- a short voice profile if voice matching matters
 - the core angle
-- platform-native drafts
-- posting order only if it helps execution
-- gaps that must be filled before publishing
+- platform-specific drafts
+- optional posting order
+- optional CTA variants
+- any missing inputs needed before publishing
 ## Quality Gate
 Before delivering:
-- every draft sounds like the intended author, not the platform stereotype
-- every draft contains a real claim, proof point, or concrete observation
-- no generic hype language remains
-- no fake engagement bait remains
+- each draft reads natively for its platform
+- hooks are strong and specific
+- no generic hype language
 - no duplicated copy across platforms unless requested
-- any CTA is earned and user-approved
-## Related Skills
-- `brand-voice` for source-derived voice profiles
-- `crosspost` for platform-specific distribution
-- `x-api` for sourcing recent posts and publishing approved X output
+- the CTA matches the content and audience

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Content Engine"
-  short_description: "Platform-native content systems and campaigns"
+  short_description: "Turn one idea into platform-native social and content outputs"
   brand_color: "#DC2626"
-  default_prompt: "Use $content-engine to turn source material into platform-native content."
+  default_prompt: "Turn this source asset into strong multi-platform content"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,110 +1,188 @@
--- ---
name: crosspost name: crosspost
description: Multi-platform content distribution across X, LinkedIn, Threads, and Bluesky. Adapts content per platform using content-engine patterns. Never posts identical content cross-platform. Use when the user wants to distribute content across social platforms. description: Multi-platform content distribution across X, LinkedIn, Threads, and Bluesky. Adapts content per platform using content-engine patterns. Never posts identical content cross-platform. Use when the user wants to distribute content across social platforms.
origin: ECC
--- ---
# Crosspost # Crosspost
Distribute content across platforms without turning it into the same fake post in four costumes. Distribute content across multiple social platforms with platform-native adaptation.
## When to Activate ## When to Activate
- the user wants to publish the same underlying idea across multiple platforms - User wants to post content to multiple platforms
- a launch, update, release, or essay needs platform-specific versions - Publishing announcements, launches, or updates across social media
- the user says "crosspost", "post this everywhere", or "adapt this for X and LinkedIn" - Repurposing a post from one platform to others
- User says "crosspost", "post everywhere", "share on all platforms", or "distribute this"
## Core Rules ## Core Rules
-1. Do not publish identical copy across platforms.
-2. Preserve the author's voice across platforms.
-3. Adapt for constraints, not stereotypes.
-4. One post should still be about one thing.
-5. Do not invent a CTA, question, or moral if the source did not earn one.
+1. **Never post identical content cross-platform.** Each platform gets a native adaptation.
+2. **Primary platform first.** Post to the main platform, then adapt for others.
+3. **Respect platform conventions.** Length limits, formatting, link handling all differ.
+4. **One idea per post.** If the source content has multiple ideas, split across posts.
+5. **Attribution matters.** If crossposting someone else's content, credit the source.
+## Platform Specifications
+| Platform | Max Length | Link Handling | Hashtags | Media |
+|----------|-----------|---------------|----------|-------|
+| X | 280 chars (4000 for Premium) | Counted in length | Minimal (1-2 max) | Images, video, GIFs |
+| LinkedIn | 3000 chars | Not counted in length | 3-5 relevant | Images, video, docs, carousels |
+| Threads | 500 chars | Separate link attachment | None typical | Images, video |
+| Bluesky | 300 chars | Via facets (rich text) | None (use feeds) | Images |
 ## Workflow
-### Step 1: Start with the Primary Version
-Pick the strongest source version first:
-- the original X post
-- the original article
-- the launch note
-- the thread
-- the memo or changelog
-Use `content-engine` first if the source still needs voice shaping.
-### Step 2: Capture the Voice Fingerprint
-Run `brand-voice` first if the source voice is not already captured in the current session.
-Reuse the resulting `VOICE PROFILE` directly.
-Do not build a second ad hoc voice checklist here unless the user explicitly wants a fresh override for this campaign.
-### Step 3: Adapt by Platform Constraint
-### X
-- keep it compressed
-- lead with the sharpest claim or artifact
-- use a thread only when a single post would collapse the argument
-- avoid hashtags and generic filler
-### LinkedIn
-- add only the context needed for people outside the niche
-- do not turn it into a fake founder-reflection post
-- do not add a closing question just because it is LinkedIn
-- do not force a polished "professional tone" if the author is naturally sharper
-### Threads
-- keep it readable and direct
-- do not write fake hyper-casual creator copy
-- do not paste the LinkedIn version and shorten it
-### Bluesky
-- keep it concise
-- preserve the author's cadence
-- do not rely on hashtags or feed-gaming language
-## Posting Order
-Default:
-1. post the strongest native version first
-2. adapt for the secondary platforms
-3. stagger timing only if the user wants sequencing help
-Do not add cross-platform references unless useful. Most of the time, the post should stand on its own.
-## Banned Patterns
-Delete and rewrite any of these:
-- "Excited to share"
-- "Here's what I learned"
-- "What do you think?"
-- "link in bio" unless that is literally true
-- generic "professional takeaway" paragraphs that were not in the source
-## Output Format
-Return:
-- the primary platform version
-- adapted variants for each requested platform
-- a short note on what changed and why
-- any publishing constraint the user still needs to resolve
+### Step 1: Create Source Content
+Start with the core idea. Use `content-engine` skill for high-quality drafts:
+- Identify the single core message
+- Determine the primary platform (where the audience is biggest)
+- Draft the primary platform version first
+### Step 2: Identify Target Platforms
+Ask the user or determine from context:
+- Which platforms to target
+- Priority order (primary gets the best version)
+- Any platform-specific requirements (e.g., LinkedIn needs professional tone)
+### Step 3: Adapt Per Platform
+For each target platform, transform the content:
+**X adaptation:**
+- Open with a hook, not a summary
+- Cut to the core insight fast
+- Keep links out of main body when possible
+- Use thread format for longer content
+**LinkedIn adaptation:**
+- Strong first line (visible before "see more")
+- Short paragraphs with line breaks
+- Frame around lessons, results, or professional takeaways
+- More explicit context than X (LinkedIn audience needs framing)
+**Threads adaptation:**
+- Conversational, casual tone
+- Shorter than LinkedIn, less compressed than X
+- Visual-first if possible
+**Bluesky adaptation:**
+- Direct and concise (300 char limit)
+- Community-oriented tone
+- Use feeds/lists for topic targeting instead of hashtags
+### Step 4: Post Primary Platform
+Post to the primary platform first:
+- Use `x-api` skill for X
+- Use platform-specific APIs or tools for others
+- Capture the post URL for cross-referencing
+### Step 5: Post to Secondary Platforms
+Post adapted versions to remaining platforms:
+- Stagger timing (not all at once — 30-60 min gaps)
+- Include cross-platform references where appropriate ("longer thread on X" etc.)
+## Content Adaptation Examples
+### Source: Product Launch
+**X version:**
+```
+We just shipped [feature].
+[One specific thing it does that's impressive]
+[Link]
+```
+**LinkedIn version:**
+```
+Excited to share: we just launched [feature] at [Company].
+Here's why it matters:
+[2-3 short paragraphs with context]
+[Takeaway for the audience]
+[Link]
+```
+**Threads version:**
+```
+just shipped something cool — [feature]
+[casual explanation of what it does]
+link in bio
+```
+### Source: Technical Insight
+**X version:**
+```
+TIL: [specific technical insight]
+[Why it matters in one sentence]
+```
+**LinkedIn version:**
+```
+A pattern I've been using that's made a real difference:
+[Technical insight with professional framing]
+[How it applies to teams/orgs]
+#relevantHashtag
+```
+## API Integration
+### Batch Crossposting Service (Example Pattern)
+If using a crossposting service (e.g., Postbridge, Buffer, or a custom API), the pattern looks like:
+```python
+import os
+import requests
+resp = requests.post(
+    "https://api.postbridge.io/v1/posts",
+    headers={"Authorization": f"Bearer {os.environ['POSTBRIDGE_API_KEY']}"},
+    json={
+        "platforms": ["twitter", "linkedin", "threads"],
+        "content": {
+            "twitter": {"text": x_version},
+            "linkedin": {"text": linkedin_version},
+            "threads": {"text": threads_version}
+        }
+    }
+)
+```
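A minimal way to finish that sketch is to fail fast on HTTP errors and report the per-platform outcome. The response shape below is an assumption about such a service, not a documented contract:

```python
# Continuation of the sketch above; the response shape
# {"results": {"twitter": "queued", ...}} is hypothetical.
resp.raise_for_status()
for platform, status in resp.json().get("results", {}).items():
    print(f"{platform}: {status}")
```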
+### Manual Posting
+Without Postbridge, post to each platform using its native API:
+- X: Use `x-api` skill patterns
+- LinkedIn: LinkedIn API v2 with OAuth 2.0
+- Threads: Threads API (Meta)
+- Bluesky: AT Protocol API
 ## Quality Gate
-Before delivering:
-- each version reads like the same author under different constraints
-- no platform version feels padded or sanitized
-- no copy is duplicated verbatim across platforms
-- any extra context added for LinkedIn or newsletter use is actually necessary
+Before posting:
+- [ ] Each platform version reads naturally for that platform
+- [ ] No identical content across platforms
+- [ ] Length limits respected
+- [ ] Links work and are placed appropriately
+- [ ] Tone matches platform conventions
+- [ ] Media is sized correctly for each platform
 ## Related Skills
-- `brand-voice` for reusable source-derived voice capture
-- `content-engine` for voice capture and source shaping
-- `x-api` for X publishing workflows
+- `content-engine` — Generate platform-native content
+- `x-api` — X/Twitter API integration

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Crosspost"
-  short_description: "Multi-platform social distribution"
+  short_description: "Multi-platform content distribution with native adaptation"
   brand_color: "#EC4899"
-  default_prompt: "Use $crosspost to adapt content for multiple social platforms."
+  default_prompt: "Distribute content across X, LinkedIn, Threads, and Bluesky with platform-native adaptation"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: deep-research
 description: Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.
+origin: ECC
 ---
 # Deep Research

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Deep Research"
-  short_description: "Multi-source cited research reports"
+  short_description: "Multi-source deep research with firecrawl and exa MCPs"
   brand_color: "#6366F1"
-  default_prompt: "Use $deep-research to produce a cited multi-source research report."
+  default_prompt: "Research the given topic using firecrawl and exa, produce a cited report"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: dmux-workflows
 description: Multi-agent orchestration using dmux (tmux pane manager for AI agents). Patterns for parallel agent workflows across Claude Code, Codex, OpenCode, and other harnesses. Use when running multiple agent sessions in parallel or coordinating multi-agent development workflows.
+origin: ECC
 ---
 # dmux Workflows

View File

@@ -2,6 +2,6 @@ interface:
display_name: "dmux Workflows" display_name: "dmux Workflows"
short_description: "Multi-agent orchestration with dmux" short_description: "Multi-agent orchestration with dmux"
brand_color: "#14B8A6" brand_color: "#14B8A6"
default_prompt: "Use $dmux-workflows to orchestrate parallel agent sessions with dmux." default_prompt: "Orchestrate parallel agent sessions using dmux pane manager"
policy: policy:
allow_implicit_invocation: true allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: documentation-lookup
 description: Use up-to-date library and framework docs via Context7 MCP instead of training data. Activates for setup questions, API references, code examples, or when the user names a framework (e.g. React, Next.js, Prisma).
+origin: ECC
 ---
 # Documentation Lookup (Context7)

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Documentation Lookup"
-  short_description: "Current library docs via Context7"
+  short_description: "Fetch up-to-date library docs via Context7 MCP"
   brand_color: "#6366F1"
-  default_prompt: "Use $documentation-lookup to fetch current library documentation via Context7."
+  default_prompt: "Look up docs for a library or API"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: e2e-testing
 description: Playwright E2E testing patterns, Page Object Model, configuration, CI/CD integration, artifact management, and flaky test strategies.
+origin: ECC
 ---
 # E2E Testing Patterns

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "E2E Testing"
-  short_description: "Playwright E2E testing patterns"
+  short_description: "Playwright end-to-end testing"
   brand_color: "#06B6D4"
-  default_prompt: "Use $e2e-testing to design Playwright end-to-end test coverage."
+  default_prompt: "Generate Playwright E2E tests with Page Object Model"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,7 +1,8 @@
 ---
 name: eval-harness
 description: Formal evaluation framework for Claude Code sessions implementing eval-driven development (EDD) principles
-allowed-tools: Read, Write, Edit, Bash, Grep, Glob
+origin: ECC
+tools: Read, Write, Edit, Bash, Grep, Glob
 ---
 # Eval Harness Skill

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Eval Harness"
-  short_description: "Eval-driven development harnesses"
+  short_description: "Eval-driven development with pass/fail criteria"
   brand_color: "#EC4899"
-  default_prompt: "Use $eval-harness to define eval-driven development checks."
+  default_prompt: "Set up eval-driven development with pass/fail criteria"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,5 +1,5 @@
 ---
-name: everything-claude-code
+name: everything-claude-code-conventions
 description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
 ---
@@ -304,24 +304,24 @@ Register the agent in AGENTS.md
 Optionally update README.md and docs/COMMAND-AGENT-MAP.md
 ```
-### Add New Workflow Surface
-Adds or updates a workflow entrypoint. Default to skills-first; only add a command shim when legacy slash compatibility is still required.
+### Add New Command
+Adds a new command to the system, often paired with a backing skill.
 **Frequency**: ~1 times per month
 **Steps**:
-1. Create or update the canonical workflow under skills/{skill-name}/SKILL.md
-2. Only if needed, add or update commands/{command-name}.md as a compatibility shim
+1. Create a new markdown file under commands/{command-name}.md
+2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
 **Files typically involved**:
+- `commands/*.md`
 - `skills/*/SKILL.md`
-- `commands/*.md` (only when a legacy shim is intentionally retained)
 **Example commit sequence**:
 ```
-Create or update the canonical skill under skills/{skill-name}/SKILL.md
-Only if needed, add or update commands/{command-name}.md as a compatibility shim
+Create a new markdown file under commands/{command-name}.md
+Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
 ```
 ### Sync Catalog Counts

View File

@@ -1,7 +1,6 @@
 interface:
   display_name: "Everything Claude Code"
-  short_description: "Repo workflows for everything-claude-code"
-  brand_color: "#0EA5E9"
-  default_prompt: "Use $everything-claude-code to follow this repository's conventions and workflows."
+  short_description: "Repo-specific patterns and workflows for everything-claude-code"
+  default_prompt: "Use the everything-claude-code repo skill to follow existing architecture, testing, and workflow conventions."
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: exa-search
 description: Neural search via Exa MCP for web, code, and company research. Use when the user needs web search, code examples, company intel, people lookup, or AI-powered deep research with Exa's neural search engine.
+origin: ECC
 ---
 # Exa Search

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Exa Search"
-  short_description: "Neural search via Exa MCP"
+  short_description: "Neural search via Exa MCP for web, code, and companies"
   brand_color: "#8B5CF6"
-  default_prompt: "Use $exa-search to search web, code, or company data through Exa."
+  default_prompt: "Search using Exa MCP tools for web content, code, or company research"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: fal-ai-media
 description: Unified media generation via fal.ai MCP — image, video, and audio. Covers text-to-image (Nano Banana), text/image-to-video (Seedance, Kling, Veo 3), text-to-speech (CSM-1B), and video-to-audio (ThinkSound). Use when the user wants to generate images, videos, or audio with AI.
+origin: ECC
 ---
 # fal.ai Media Generation

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "fal.ai Media"
-  short_description: "AI media generation via fal.ai"
+  short_description: "AI image, video, and audio generation via fal.ai"
   brand_color: "#F43F5E"
-  default_prompt: "Use $fal-ai-media to generate image, video, or audio assets with fal.ai."
+  default_prompt: "Generate images, videos, or audio using fal.ai models"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,144 +0,0 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use when the user asks to build web components, pages, or applications and the visual direction matters as much as the code quality.
---
# Frontend Design
Use this when the task is not just "make it work" but "make it look designed."
This skill is for product pages, dashboards, app shells, components, or visual systems that need a clear point of view instead of generic AI-looking UI.
## When To Use
- building a landing page, dashboard, or app surface from scratch
- upgrading a bland interface into something intentional and memorable
- translating a product concept into a concrete visual direction
- implementing a frontend where typography, composition, and motion matter
## Core Principle
Pick a direction and commit to it.
Safe-average UI is usually worse than a strong, coherent aesthetic with a few bold choices.
## Design Workflow
### 1. Frame the interface first
Before coding, settle:
- purpose
- audience
- emotional tone
- visual direction
- one thing the user should remember
Possible directions:
- brutally minimal
- editorial
- industrial
- luxury
- playful
- geometric
- retro-futurist
- soft and organic
- maximalist
Do not mix directions casually. Choose one and execute it cleanly.
### 2. Build the visual system
Define:
- type hierarchy
- color variables
- spacing rhythm
- layout logic
- motion rules
- surface / border / shadow treatment
Use CSS variables or the project's token system so the interface stays coherent as it grows.
### 3. Compose with intention
Prefer:
- asymmetry when it sharpens hierarchy
- overlap when it creates depth
- strong whitespace when it clarifies focus
- dense layouts only when the product benefits from density
Avoid defaulting to a symmetrical card grid unless it is clearly the right fit.
### 4. Make motion meaningful
Use animation to:
- reveal hierarchy
- stage information
- reinforce user action
- create one or two memorable moments
Do not scatter generic micro-interactions everywhere. One well-directed load sequence is usually stronger than twenty random hover effects.
## Strong Defaults
### Typography
- pick fonts with character
- pair a distinctive display face with a readable body face when appropriate
- avoid generic defaults when the page is design-led
### Color
- commit to a clear palette
- one dominant field with selective accents usually works better than evenly weighted rainbow palettes
- avoid cliché purple-gradient-on-white unless the product genuinely calls for it
### Background
Use atmosphere:
- gradients
- meshes
- textures
- subtle noise
- patterns
- layered transparency
Flat empty backgrounds are rarely the best answer for a product-facing page.
### Layout
- break the grid when the composition benefits from it
- use diagonals, offsets, and grouping intentionally
- keep reading flow obvious even when the layout is unconventional
## Anti-Patterns
Never default to:
- interchangeable SaaS hero sections
- generic card piles with no hierarchy
- random accent colors without a system
- placeholder-feeling typography
- motion that exists only because animation was easy to add
## Execution Rules
- preserve the established design system when working inside an existing product
- match technical complexity to the visual idea
- keep accessibility and responsiveness intact
- frontends should feel deliberate on desktop and mobile
## Quality Gate
Before delivering:
- the interface has a clear visual point of view
- typography and spacing feel intentional
- color and motion support the product instead of decorating it randomly
- the result does not read like generic AI UI
- the implementation is production-grade, not just visually interesting

View File

@@ -1,7 +0,0 @@
interface:
display_name: "Frontend Design"
short_description: "Production-grade frontend interface design"
brand_color: "#0EA5E9"
default_prompt: "Use $frontend-design to build a distinctive production-grade interface."
policy:
allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: frontend-patterns
 description: Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices.
+origin: ECC
 ---
 # Frontend Development Patterns
@@ -22,7 +23,7 @@ Modern frontend patterns for React, Next.js, and performant user interfaces.
 ### Composition Over Inheritance
 ```typescript
-// PASS: GOOD: Component composition
+// GOOD: Component composition
 interface CardProps {
   children: React.ReactNode
   variant?: 'default' | 'outlined'
@@ -293,17 +294,17 @@ export function useMarkets() {
 ### Memoization
 ```typescript
-// PASS: useMemo for expensive computations
+// useMemo for expensive computations
 const sortedMarkets = useMemo(() => {
   return markets.sort((a, b) => b.volume - a.volume)
 }, [markets])
-// PASS: useCallback for functions passed to children
+// useCallback for functions passed to children
 const handleSearch = useCallback((query: string) => {
   setSearchQuery(query)
 }, [])
-// PASS: React.memo for pure components
+// React.memo for pure components
 export const MarketCard = React.memo<MarketCardProps>(({ market }) => {
   return (
     <div className="market-card">
@@ -319,7 +320,7 @@ export const MarketCard = React.memo<MarketCardProps>(({ market }) => {
 ```typescript
 import { lazy, Suspense } from 'react'
-// PASS: Lazy load heavy components
+// Lazy load heavy components
 const HeavyChart = lazy(() => import('./HeavyChart'))
 const ThreeJsBackground = lazy(() => import('./ThreeJsBackground'))
@@ -514,7 +515,7 @@ export class ErrorBoundary extends React.Component<
 ```typescript
 import { motion, AnimatePresence } from 'framer-motion'
-// PASS: List animations
+// List animations
 export function AnimatedMarketList({ markets }: { markets: Market[] }) {
   return (
     <AnimatePresence>
@@ -533,7 +534,7 @@ export function AnimatedMarketList({ markets }: { markets: Market[] }) {
 )
 }
-// PASS: Modal animations
+// Modal animations
 export function Modal({ isOpen, onClose, children }: ModalProps) {
   return (
     <AnimatePresence>

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Frontend Patterns"
-  short_description: "React and Next.js frontend patterns"
+  short_description: "React and Next.js patterns and best practices"
   brand_color: "#8B5CF6"
-  default_prompt: "Use $frontend-patterns to apply React and Next.js frontend patterns."
+  default_prompt: "Apply React/Next.js patterns and best practices"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: frontend-slides
 description: Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a PPT/PPTX to web, or create slides for a talk/pitch. Helps non-designers discover their aesthetic through visual exploration rather than abstract choices.
+origin: ECC
 ---
 # Frontend Slides

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Frontend Slides"
-  short_description: "Animation-rich HTML presentation decks"
+  short_description: "Create distinctive HTML slide decks and convert PPTX to web"
   brand_color: "#FF6B3D"
-  default_prompt: "Use $frontend-slides to create an animation-rich HTML presentation deck."
+  default_prompt: "Create a viewport-safe HTML presentation with strong visual direction"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: investor-materials
 description: Create and update pitch decks, one-pagers, investor memos, accelerator applications, financial models, and fundraising materials. Use when the user needs investor-facing documents, projections, use-of-funds tables, milestone plans, or materials that must stay internally consistent across multiple fundraising assets.
+origin: ECC
 ---
 # Investor Materials

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Investor Materials"
-  short_description: "Investor decks, memos, and financial materials"
+  short_description: "Create decks, memos, and financial materials from one source of truth"
   brand_color: "#7C3AED"
-  default_prompt: "Use $investor-materials to draft consistent investor-facing fundraising assets."
+  default_prompt: "Draft investor materials that stay numerically consistent across assets"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,11 +1,12 @@
 ---
 name: investor-outreach
 description: Draft cold emails, warm intro blurbs, follow-ups, update emails, and investor communications for fundraising. Use when the user wants outreach to angels, VCs, strategic investors, or accelerators and needs concise, personalized, investor-facing messaging.
+origin: ECC
 ---
 # Investor Outreach
-Write investor communication that is short, concrete, and easy to act on.
+Write investor communication that is short, personalized, and easy to act on.
 ## When to Activate
@@ -19,32 +20,17 @@ Write investor communication that is short, concrete, and easy to act on.
 1. Personalize every outbound message.
 2. Keep the ask low-friction.
-3. Use proof instead of adjectives.
+3. Use proof, not adjectives.
 4. Stay concise.
-5. Never send copy that could go to any investor.
-## Voice Handling
-If the user's voice matters, run `brand-voice` first and reuse its `VOICE PROFILE`.
-This skill should keep the investor-specific structure and ask discipline, not recreate its own parallel voice system.
-## Hard Bans
-Delete and rewrite any of these:
-- "I'd love to connect"
-- "excited to share"
-- generic thesis praise without a real tie-in
-- vague founder adjectives
-- begging language
-- soft closing questions when a direct ask is clearer
+5. Never send generic copy that could go to any investor.
 ## Cold Email Structure
 1. subject line: short and specific
 2. opener: why this investor specifically
-3. pitch: what the company does, why now, and what proof matters
+3. pitch: what the company does, why now, what proof matters
 4. ask: one concrete next step
-5. sign-off: name, role, and one credibility anchor if needed
+5. sign-off: name, role, one credibility anchor if needed
 ## Personalization Sources
@@ -54,14 +40,14 @@ Reference one or more of:
 - a mutual connection
 - a clear market or product fit with the investor's focus
-If that context is missing, state that the draft still needs personalization instead of pretending it is finished.
+If that context is missing, ask for it or state that the draft is a template awaiting personalization.
 ## Follow-Up Cadence
 Default:
 - day 0: initial outbound
-- day 4 or 5: short follow-up with one new data point
-- day 10 to 12: final follow-up with a clean close
+- day 4-5: short follow-up with one new data point
+- day 10-12: final follow-up with a clean close
 Do not keep nudging after that unless the user wants a longer sequence.
@@ -83,8 +69,8 @@ Include:
 ## Quality Gate
 Before delivering:
-- the message is genuinely personalized
+- message is personalized
 - the ask is explicit
+- there is no fluff or begging language
 - the proof point is concrete
-- filler praise and softener language are gone
 - word count stays tight

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Investor Outreach"
-  short_description: "Personalized investor outreach and follow-ups"
+  short_description: "Write concise, personalized outreach and follow-ups for fundraising"
   brand_color: "#059669"
-  default_prompt: "Use $investor-outreach to write concise personalized investor outreach."
+  default_prompt: "Draft a personalized investor outreach email with a clear low-friction ask"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: market-research
 description: Conduct market research, competitive analysis, investor due diligence, and industry intelligence with source attribution and decision-oriented summaries. Use when the user wants market sizing, competitor comparisons, fund research, technology scans, or research that informs business decisions.
+origin: ECC
 ---
 # Market Research

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Market Research"
-  short_description: "Source-attributed market research"
+  short_description: "Source-attributed market, competitor, and investor research"
   brand_color: "#2563EB"
-  default_prompt: "Use $market-research to research markets with source-attributed findings."
+  default_prompt: "Research this market and summarize the decision-relevant findings"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: mcp-server-patterns
 description: Build MCP servers with Node/TypeScript SDK — tools, resources, prompts, Zod validation, stdio vs Streamable HTTP. Use Context7 or official MCP docs for latest API.
+origin: ECC
 ---
 # MCP Server Patterns

View File

@@ -1,7 +0,0 @@
interface:
display_name: "MCP Server Patterns"
short_description: "MCP server tools, resources, and prompts"
brand_color: "#0EA5E9"
default_prompt: "Use $mcp-server-patterns to build MCP tools, resources, and prompts."
policy:
allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: nextjs-turbopack
 description: Next.js 16+ and Turbopack — incremental bundling, FS caching, dev speed, and when to use Turbopack vs webpack.
+origin: ECC
 ---
 # Next.js and Turbopack

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Next.js Turbopack"
-  short_description: "Next.js and Turbopack workflow guidance"
+  short_description: "Next.js 16+ and Turbopack dev bundler"
   brand_color: "#000000"
-  default_prompt: "Use $nextjs-turbopack to work through Next.js and Turbopack decisions."
+  default_prompt: "Next.js dev, Turbopack, or bundle optimization"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,140 +0,0 @@
---
name: product-capability
description: Translate PRD intent, roadmap asks, or product discussions into an implementation-ready capability plan that exposes constraints, invariants, interfaces, and unresolved decisions before multi-service work starts. Use when the user needs an ECC-native PRD-to-SRS lane instead of vague planning prose.
---
# Product Capability
This skill turns product intent into explicit engineering constraints.
Use it when the gap is not "what should we build?" but "what exactly must be true before implementation starts?"
## When to Use
- A PRD, roadmap item, discussion, or founder note exists, but the implementation constraints are still implicit
- A feature crosses multiple services, repos, or teams and needs a capability contract before coding
- Product intent is clear, but architecture, data, lifecycle, or policy implications are still fuzzy
- Senior engineers keep restating the same hidden assumptions during review
- You need a reusable artifact that can survive across harnesses and sessions
## Canonical Artifact
If the repo has a durable product-context file such as `PRODUCT.md`, `docs/product/`, or a program-spec directory, update it there.
If no capability manifest exists yet, create one using the template at:
- `docs/examples/product-capability-template.md`
The goal is not to create another planning stack. The goal is to make hidden capability constraints durable and reusable.
## Non-Negotiable Rules
- Do not invent product truth. Mark unresolved questions explicitly.
- Separate user-visible promises from implementation details.
- Call out what is fixed policy, what is architecture preference, and what is still open.
- If the request conflicts with existing repo constraints, say so clearly instead of smoothing it over.
- Prefer one reusable capability artifact over scattered ad hoc notes.
## Inputs
Read only what is needed:
1. Product intent
- issue, discussion, PRD, roadmap note, founder message
2. Current architecture
- relevant repo docs, contracts, schemas, routes, existing workflows
3. Existing capability context
- `PRODUCT.md`, design docs, RFCs, migration notes, operating-model docs
4. Delivery constraints
- auth, billing, compliance, rollout, backwards compatibility, performance, review policy
## Core Workflow
### 1. Restate the capability
Compress the ask into one precise statement:
- who the user or operator is
- what new capability exists after this ships
- what outcome changes because of it
If this statement is weak, the implementation will drift.
### 2. Resolve capability constraints
Extract the constraints that must hold before implementation:
- business rules
- scope boundaries
- invariants
- trust boundaries
- data ownership
- lifecycle transitions
- rollout / migration requirements
- failure and recovery expectations
These are the things that often live only in senior-engineer memory.
### 3. Define the implementation-facing contract
Produce an SRS-style capability plan with:
- capability summary
- explicit non-goals
- actors and surfaces
- required states and transitions
- interfaces / inputs / outputs
- data model implications
- security / billing / policy constraints
- observability and operator requirements
- open questions blocking implementation
### 4. Translate into execution
End with the exact handoff:
- ready for direct implementation
- needs architecture review first
- needs product clarification first
If useful, point to the next ECC-native lane:
- `project-flow-ops`
- `workspace-surface-audit`
- `api-connector-builder`
- `dashboard-builder`
- `tdd-workflow`
- `verification-loop`
## Output Format
Return the result in this order:
```text
CAPABILITY
- one-paragraph restatement
CONSTRAINTS
- fixed rules, invariants, and boundaries
IMPLEMENTATION CONTRACT
- actors
- surfaces
- states and transitions
- interface/data implications
NON-GOALS
- what this lane explicitly does not own
OPEN QUESTIONS
- blockers or product decisions still required
HANDOFF
- what should happen next and which ECC lane should take it
```
## Good Outcomes
- Product intent is now concrete enough to implement without rediscovering hidden constraints mid-PR.
- Engineering review has a durable artifact instead of relying on memory or Slack context.
- The resulting plan is reusable across Claude Code, Codex, Cursor, OpenCode, and ECC 2.0 planning surfaces.

View File

@@ -1,7 +0,0 @@
interface:
display_name: "Product Capability"
short_description: "Implementation-ready product capability plans"
brand_color: "#0EA5E9"
default_prompt: "Use $product-capability to turn product intent into an implementation plan."
policy:
allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: security-review
 description: Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Provides comprehensive security checklist and patterns.
+origin: ECC
 ---
 # Security Review Skill
@@ -21,13 +22,13 @@ This skill ensures all code follows security best practices and identifies poten
 ### 1. Secrets Management
-#### FAIL: NEVER Do This
+#### NEVER Do This
 ```typescript
 const apiKey = "sk-proj-xxxxx" // Hardcoded secret
 const dbPassword = "password123" // In source code
 ```
-#### PASS: ALWAYS Do This
+#### ALWAYS Do This
 ```typescript
 const apiKey = process.env.OPENAI_API_KEY
 const dbUrl = process.env.DATABASE_URL
@@ -107,14 +108,14 @@ function validateFileUpload(file: File) {
 ### 3. SQL Injection Prevention
-#### FAIL: NEVER Concatenate SQL
+#### NEVER Concatenate SQL
 ```typescript
 // DANGEROUS - SQL Injection vulnerability
 const query = `SELECT * FROM users WHERE email = '${userEmail}'`
 await db.query(query)
 ```
-#### PASS: ALWAYS Use Parameterized Queries
+#### ALWAYS Use Parameterized Queries
 ```typescript
 // Safe - parameterized query
 const { data } = await supabase
#### JWT Token Handling #### JWT Token Handling
```typescript ```typescript
// FAIL: WRONG: localStorage (vulnerable to XSS) // WRONG: localStorage (vulnerable to XSS)
localStorage.setItem('token', token) localStorage.setItem('token', token)
// PASS: CORRECT: httpOnly cookies // CORRECT: httpOnly cookies
res.setHeader('Set-Cookie', res.setHeader('Set-Cookie',
`token=${token}; HttpOnly; Secure; SameSite=Strict; Max-Age=3600`) `token=${token}; HttpOnly; Secure; SameSite=Strict; Max-Age=3600`)
``` ```
@@ -299,18 +300,18 @@ app.use('/api/search', searchLimiter)
 #### Logging
 ```typescript
-// FAIL: WRONG: Logging sensitive data
+// WRONG: Logging sensitive data
 console.log('User login:', { email, password })
 console.log('Payment:', { cardNumber, cvv })
-// PASS: CORRECT: Redact sensitive data
+// CORRECT: Redact sensitive data
 console.log('User login:', { email, userId })
 console.log('Payment:', { last4: card.last4, userId })
 ```
 #### Error Messages
 ```typescript
-// FAIL: WRONG: Exposing internal details
+// WRONG: Exposing internal details
 catch (error) {
   return NextResponse.json(
     { error: error.message, stack: error.stack },
@@ -318,7 +319,7 @@ catch (error) {
   )
 }
-// PASS: CORRECT: Generic error messages
+// CORRECT: Generic error messages
 catch (error) {
   console.error('Internal error:', error)
   return NextResponse.json(

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Security Review"
-  short_description: "Security checklist and vulnerability review"
+  short_description: "Comprehensive security checklist and vulnerability detection"
   brand_color: "#EF4444"
-  default_prompt: "Use $security-review to review sensitive code with the security checklist."
+  default_prompt: "Run security checklist: secrets, input validation, injection prevention"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: strategic-compact
 description: Suggests manual context compaction at logical intervals to preserve context through task phases rather than arbitrary auto-compaction.
+origin: ECC
 ---
 # Strategic Compact Skill

View File

@@ -2,6 +2,6 @@ interface:
display_name: "Strategic Compact" display_name: "Strategic Compact"
short_description: "Context management via strategic compaction" short_description: "Context management via strategic compaction"
brand_color: "#14B8A6" brand_color: "#14B8A6"
default_prompt: "Use $strategic-compact to choose a useful context compaction boundary." default_prompt: "Suggest task boundary compaction for context management"
policy: policy:
allow_implicit_invocation: true allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: tdd-workflow
 description: Use this skill when writing new features, fixing bugs, or refactoring code. Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.
+origin: ECC
 ---
 # Test-Driven Development Workflow
@@ -313,39 +314,39 @@ npm run test:coverage
 ## Common Testing Mistakes to Avoid
-### FAIL: WRONG: Testing Implementation Details
+### WRONG: Testing Implementation Details
 ```typescript
 // Don't test internal state
 expect(component.state.count).toBe(5)
 ```
-### PASS: CORRECT: Test User-Visible Behavior
+### CORRECT: Test User-Visible Behavior
 ```typescript
 // Test what users see
 expect(screen.getByText('Count: 5')).toBeInTheDocument()
 ```
-### FAIL: WRONG: Brittle Selectors
+### WRONG: Brittle Selectors
 ```typescript
 // Breaks easily
 await page.click('.css-class-xyz')
 ```
-### PASS: CORRECT: Semantic Selectors
+### CORRECT: Semantic Selectors
 ```typescript
 // Resilient to changes
 await page.click('button:has-text("Submit")')
 await page.click('[data-testid="submit-button"]')
 ```
-### FAIL: WRONG: No Test Isolation
+### WRONG: No Test Isolation
 ```typescript
 // Tests depend on each other
 test('creates user', () => { /* ... */ })
 test('updates same user', () => { /* depends on previous test */ })
 ```
-### PASS: CORRECT: Independent Tests
+### CORRECT: Independent Tests
 ```typescript
 // Each test sets up its own data
 test('creates user', () => {

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "TDD Workflow"
-  short_description: "Test-driven development with coverage gates"
+  short_description: "Test-driven development with 80%+ coverage"
   brand_color: "#22C55E"
-  default_prompt: "Use $tdd-workflow to drive the change with tests before implementation."
+  default_prompt: "Follow TDD: write tests first, implement, verify 80%+ coverage"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: verification-loop
 description: "A comprehensive verification system for Claude Code sessions."
+origin: ECC
 ---
 # Verification Loop Skill

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Verification Loop"
-  short_description: "Build, test, lint, and typecheck verification"
+  short_description: "Build, test, lint, typecheck verification"
   brand_color: "#10B981"
-  default_prompt: "Use $verification-loop to run build, test, lint, and typecheck verification."
+  default_prompt: "Run verification: build, test, lint, typecheck, security"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: video-editing
 description: AI-assisted video editing workflows for cutting, structuring, and augmenting real footage. Covers the full pipeline from raw capture through FFmpeg, Remotion, ElevenLabs, fal.ai, and final polish in Descript or CapCut. Use when the user wants to edit video, cut footage, create vlogs, or build video content.
+origin: ECC
 ---
 # Video Editing

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "Video Editing"
-  short_description: "AI-assisted editing for real footage"
+  short_description: "AI-assisted video editing for real footage"
   brand_color: "#EF4444"
-  default_prompt: "Use $video-editing to plan an AI-assisted edit for real footage."
+  default_prompt: "Edit video using AI-assisted pipeline: organize, cut, compose, generate assets, polish"
 policy:
   allow_implicit_invocation: true

View File

@@ -1,6 +1,7 @@
 ---
 name: x-api
 description: X/Twitter API integration for posting tweets, threads, reading timelines, search, and analytics. Covers OAuth auth patterns, rate limits, and platform-native content posting. Use when the user wants to interact with X programmatically.
+origin: ECC
 ---
 # X API
@@ -18,7 +19,7 @@ Programmatic interaction with X (Twitter) for posting, reading, searching, and a
 ## Authentication
-### OAuth 2.0 Bearer Token (App-Only)
+### OAuth 2.0 (App-Only / User Context)
 Best for: read-heavy operations, search, public data.
@@ -45,27 +46,25 @@ tweets = resp.json()
 ### OAuth 1.0a (User Context)
-Required for: posting tweets, managing account, DMs, and any write flow.
+Required for: posting tweets, managing account, DMs.
 ```bash
 # Environment setup — source before use
-export X_CONSUMER_KEY="your-consumer-key"
-export X_CONSUMER_SECRET="your-consumer-secret"
+export X_API_KEY="your-api-key"
+export X_API_SECRET="your-api-secret"
 export X_ACCESS_TOKEN="your-access-token"
-export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
+export X_ACCESS_SECRET="your-access-secret"
 ```
-Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.
 ```python
 import os
 from requests_oauthlib import OAuth1Session
 oauth = OAuth1Session(
-    os.environ["X_CONSUMER_KEY"],
-    client_secret=os.environ["X_CONSUMER_SECRET"],
+    os.environ["X_API_KEY"],
+    client_secret=os.environ["X_API_SECRET"],
     resource_owner_key=os.environ["X_ACCESS_TOKEN"],
-    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
+    resource_owner_secret=os.environ["X_ACCESS_SECRET"],
 )
 ```
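As a quick usage check, the same `oauth` session can post a single tweet through the v2 endpoint used throughout this skill (the text is a placeholder):

```python
resp = oauth.post("https://api.x.com/2/tweets", json={"text": "Hello from the X API"})
resp.raise_for_status()
print(resp.json()["data"]["id"])  # id of the created tweet
```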
@@ -93,6 +92,7 @@ def post_thread(oauth, tweets: list[str]) -> list[str]:
         if reply_to:
             payload["reply"] = {"in_reply_to_tweet_id": reply_to}
         resp = oauth.post("https://api.x.com/2/tweets", json=payload)
+        resp.raise_for_status()
         tweet_id = resp.json()["data"]["id"]
         ids.append(tweet_id)
         reply_to = tweet_id
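Calling the helper is then a one-liner; the tweet texts are placeholders:

```python
thread_ids = post_thread(oauth, [
    "1/ Notes from this week's build.",
    "2/ The interesting part was the rate-limit handling.",
])
print(f"posted {len(thread_ids)} tweets")
```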
@@ -126,21 +126,6 @@ resp = requests.get(
 )
 ```
-### Pull Recent Original Posts for Voice Modeling
-```python
-resp = requests.get(
-    "https://api.x.com/2/tweets/search/recent",
-    headers=headers,
-    params={
-        "query": "from:affaanmustafa -is:retweet -is:reply",
-        "max_results": 25,
-        "tweet.fields": "created_at,public_metrics",
-    }
-)
-voice_samples = resp.json()
-```
 ### Get User by Username
 ```python
@@ -170,12 +155,17 @@ resp = oauth.post(
 )
 ```
-## Rate Limits
-X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:
-- Check the current X developer docs before hardcoding assumptions
-- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
-- Back off automatically instead of relying on static tables in code
+## Rate Limits Reference
+| Endpoint | Limit | Window |
+|----------|-------|--------|
+| POST /2/tweets | 200 | 15 min |
+| GET /2/tweets/search/recent | 450 | 15 min |
+| GET /2/users/:id/tweets | 1500 | 15 min |
+| GET /2/users/by/username | 300 | 15 min |
+| POST media/upload | 415 | 15 min |
+Always check `x-rate-limit-remaining` and `x-rate-limit-reset` headers.
 ```python
 import time
@@ -212,18 +202,13 @@ else:
 ## Integration with Content Engine
-Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
-1. Pull recent original posts when voice matching matters
-2. Build or reuse a `VOICE PROFILE`
-3. Generate content with `content-engine` in X-native format
-4. Validate length and thread structure
-5. Return the draft for approval unless the user explicitly asked to post now
-6. Post via X API only after approval
-7. Track engagement via public_metrics
+Use `content-engine` skill to generate platform-native content, then post via X API:
+1. Generate content with content-engine (X platform format)
+2. Validate length (280 chars for single tweet)
+3. Post via X API using patterns above
+4. Track engagement via public_metrics
 ## Related Skills
-- `brand-voice` — Build a reusable voice profile from real X and site/source material
 - `content-engine` — Generate platform-native content for X
 - `crosspost` — Distribute content across X, LinkedIn, and other platforms
-- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

View File

@@ -1,7 +1,7 @@
 interface:
   display_name: "X API"
-  short_description: "X API posting, timelines, and analytics"
+  short_description: "X/Twitter API integration for posting, threads, and analytics"
   brand_color: "#000000"
-  default_prompt: "Use $x-api to build X API posting, timeline, or analytics workflows."
+  default_prompt: "Use X API to post tweets, threads, or retrieve timeline and search data"
 policy:
   allow_implicit_invocation: true

View File

@@ -45,37 +45,60 @@ Example:
 The following fields **must always be arrays**:
+* `agents`
 * `commands`
 * `skills`
 * `hooks` (if present)
 Even if there is only one entry, **strings are not accepted**.
+### Invalid
+```json
+{
+  "agents": "./agents"
+}
+```
+### Valid
+```json
+{
+  "agents": ["./agents/planner.md"]
+}
+```
 This applies consistently across all component path fields.
 ---
-## The `agents` Field: DO NOT ADD
-> WARNING: **CRITICAL:** Do NOT add an `"agents"` field to `plugin.json`. The Claude Code plugin validator rejects it entirely.
-### Why This Matters
-The `agents` field is not part of the Claude Code plugin manifest schema. Any form of it -- string path, array of paths, or array of directories -- causes a validation error:
-```
-agents: Invalid input
-```
-Agent `.md` files under `agents/` are discovered automatically by convention (similar to hooks). They do not need to be declared in the manifest.
-### History
-Previously this repo listed agents explicitly in `plugin.json` as an array of file paths. This passed the repo's own schema but failed Claude Code's actual validator, which does not recognize the field. Removed in #1459.
----
-## Path Resolution Rules
+## Path Resolution Rules (Critical)
+### Agents MUST use explicit file paths
+The validator **does not accept directory paths for `agents`**.
+Even the following will fail:
+```json
+{
+  "agents": ["./agents/"]
+}
+```
+Instead, you must enumerate agent files explicitly:
+```json
+{
+  "agents": [
+    "./agents/planner.md",
+    "./agents/architect.md",
+    "./agents/code-reviewer.md"
+  ]
+}
+```
+This is the most common source of validation errors.
 ### Commands and Skills
@@ -97,7 +120,7 @@ Assume the validator is hostile and literal.
 ## The `hooks` Field: DO NOT ADD
-> WARNING: **CRITICAL:** Do NOT add a `"hooks"` field to `plugin.json`. This is enforced by a regression test.
+> ⚠️ **CRITICAL:** Do NOT add a `"hooks"` field to `plugin.json`. This is enforced by a regression test.
 ### Why This Matters
@@ -137,7 +160,7 @@ The test `plugin.json does NOT have explicit hooks declaration` in `tests/hooks/
 These look correct but are rejected:
 * String values instead of arrays
-* **Adding `"agents"` in any form** - not a recognized manifest field, causes `Invalid input`
+* Arrays of directories for `agents`
 * Missing `version`
 * Relying on inferred paths
 * Assuming marketplace behavior matches local validation
@@ -152,6 +175,10 @@ Avoid cleverness. Be explicit.
 ```json
 {
   "version": "1.1.0",
+  "agents": [
+    "./agents/planner.md",
+    "./agents/code-reviewer.md"
+  ],
   "commands": ["./commands/"],
   "skills": ["./skills/"]
 }
@@ -159,7 +186,7 @@ Avoid cleverness. Be explicit.
 This structure has been validated against the Claude plugin validator.
-**Important:** Notice there is NO `"hooks"` field and NO `"agents"` field. Both are loaded automatically by convention. Adding either explicitly causes errors.
+**Important:** Notice there is NO `"hooks"` field. The `hooks/hooks.json` file is loaded automatically by convention. Adding it explicitly causes a duplicate error.
 ---
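Both revisions of these notes agree on the mechanical constraints: component path fields must be arrays, a `version` must exist, and `hooks` must never be declared. That much is easy to pre-flight with a short script. A minimal sketch follows; it deliberately skips the stricter `agents` rules, since the two revisions disagree on them:

```python
import json

# Structural pre-flight for .claude-plugin/plugin.json; checks only the
# constraints both revisions of the notes agree on.
with open(".claude-plugin/plugin.json") as f:
    manifest = json.load(f)

problems = []
for field in ("agents", "commands", "skills"):
    if field in manifest and not isinstance(manifest[field], list):
        problems.append(f'"{field}" must be an array, not a string')
if "version" not in manifest:
    problems.append('missing required "version"')
if "hooks" in manifest:
    problems.append('"hooks" must not be declared; hooks/hooks.json loads by convention')

print("\n".join(problems) or "plugin.json structure OK")
```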
@@ -167,9 +194,9 @@ This structure has been validated against the Claude plugin validator.
 Before submitting changes that touch `plugin.json`:
-1. Ensure all component fields are arrays
-2. Include a `version`
-3. Do NOT add `agents` or `hooks` fields (both are auto-loaded by convention)
+1. Use explicit file paths for agents
+2. Ensure all component fields are arrays
+3. Include a `version`
 4. Run:
 ```bash

View File

@@ -1,12 +1,12 @@
### Plugin Manifest Gotchas

-If you plan to edit `.claude-plugin/plugin.json`, be aware that the Claude plugin validator enforces several **undocumented but strict constraints** that can cause installs to fail with vague errors (for example, `agents: Invalid input`). In particular, component fields must be arrays, `agents` is not a supported manifest field and must not be included in plugin.json, and a `version` field is required for reliable validation and installation.
+If you plan to edit `.claude-plugin/plugin.json`, be aware that the Claude plugin validator enforces several **undocumented but strict constraints** that can cause installs to fail with vague errors (for example, `agents: Invalid input`). In particular, component fields must be arrays, `agents` must use explicit file paths rather than directories, and a `version` field is required for reliable validation and installation.

These constraints are not obvious from public examples and have caused repeated installation failures in the past. They are documented in detail in `.claude-plugin/PLUGIN_SCHEMA_NOTES.md`, which should be reviewed before making any changes to the plugin manifest.
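The structural half of those constraints is mechanical enough to pre-check locally. A minimal pre-flight sketch, assuming a repo-local `.claude-plugin/plugin.json`; the field list is illustrative, and which fields are legal at all differs between the two revisions of the notes above, so treat this as a sanity check rather than the validator's actual schema:

```js
#!/usr/bin/env node
// check-plugin-manifest.js (hypothetical name): pre-flight sanity check.
// Mirrors the constraints described in PLUGIN_SCHEMA_NOTES.md; NOT the
// official validator.
const fs = require('fs');

const manifest = JSON.parse(fs.readFileSync('.claude-plugin/plugin.json', 'utf8'));
const problems = [];

if (!manifest.version) problems.push('missing "version"');

// Component path fields must be arrays, never bare strings.
for (const field of ['commands', 'skills', 'agents']) {
  if (field in manifest && !Array.isArray(manifest[field])) {
    problems.push(`"${field}" must be an array, got ${typeof manifest[field]}`);
  }
}

if (problems.length) {
  console.error('plugin.json problems:\n - ' + problems.join('\n - '));
  process.exit(1);
}
console.log('plugin.json looks structurally sane');
```

Run it from the repo root before attempting an install; it catches the string-instead-of-array and missing-version mistakes, but only the real validator has the final word.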
### Custom Endpoints and Gateways

-ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, skills, and any retained legacy command shims execute locally after the CLI starts successfully.
+ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, commands, and skills execute locally after the CLI starts successfully.

Use Claude Code's own environment/configuration for transport selection, for example:

View File

@@ -1,5 +1,7 @@
{
+  "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
  "name": "everything-claude-code",
+  "description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use",
  "owner": {
    "name": "Affaan Mustafa",
    "email": "me@affaanmustafa.com"
@@ -11,13 +13,13 @@
    {
      "name": "everything-claude-code",
      "source": "./",
-      "description": "The most comprehensive Claude Code plugin — 48 agents, 184 skills, 79 legacy command shims, selective install profiles, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
-      "version": "2.0.0-rc.1",
+      "description": "The most comprehensive Claude Code plugin — 14+ agents, 56+ skills, 33+ commands, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
+      "version": "1.9.0",
      "author": {
        "name": "Affaan Mustafa",
        "email": "me@affaanmustafa.com"
      },
-      "homepage": "https://ecc.tools",
+      "homepage": "https://github.com/affaan-m/everything-claude-code",
      "repository": "https://github.com/affaan-m/everything-claude-code",
      "license": "MIT",
      "keywords": [

View File

@@ -1,12 +1,12 @@
{
  "name": "everything-claude-code",
-  "version": "2.0.0-rc.1",
-  "description": "Battle-tested Claude Code plugin for engineering teams — 48 agents, 184 skills, 79 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use",
+  "version": "1.9.0",
+  "description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use",
  "author": {
    "name": "Affaan Mustafa",
    "url": "https://x.com/affaanmustafa"
  },
-  "homepage": "https://ecc.tools",
+  "homepage": "https://github.com/affaan-m/everything-claude-code",
  "repository": "https://github.com/affaan-m/everything-claude-code",
  "license": "MIT",
  "keywords": [
@@ -21,7 +21,5 @@
    "workflow",
    "automation",
    "best-practices"
-  ],
-  "skills": ["./skills/"],
-  "commands": ["./commands/"]
+  ]
}

View File

@@ -1,47 +0,0 @@
# Node.js Rules for everything-claude-code
> Project-specific rules for the ECC codebase. Extends common rules.
## Stack
- **Runtime**: Node.js >=18 (no transpilation, plain CommonJS)
- **Test runner**: `node tests/run-all.js` — individual files via `node tests/**/*.test.js`
- **Linter**: ESLint (`@eslint/js`, flat config)
- **Coverage**: c8
- **Lint**: markdownlint-cli for `.md` files
## File Conventions
- `scripts/` — Node.js utilities, hooks. CommonJS (`require`/`module.exports`)
- `agents/`, `commands/`, `skills/`, `rules/` — Markdown with YAML frontmatter
- `tests/` — Mirror the `scripts/` structure. Test files named `*.test.js`
- File naming: **lowercase with hyphens** (e.g. `session-start.js`, `post-edit-format.js`)
## Code Style
- CommonJS only — no ESM (`import`/`export`) unless file ends in `.mjs`
- No TypeScript — plain `.js` throughout
- Prefer `const` over `let`; never `var`
- Keep hook scripts under 200 lines — extract helpers to `scripts/lib/`
- All hooks must `exit 0` on non-critical errors (never block tool execution unexpectedly)
## Hook Development
- Hook scripts normally receive JSON on stdin, but hooks routed through `scripts/hooks/run-with-flags.js` can export `run(rawInput)` and let the wrapper handle parsing/gating
- Async hooks: mark `"async": true` in `settings.json` with a timeout ≤30s
- Blocking hooks (PreToolUse, stop): keep fast (<200ms) — no network calls
- Use `run-with-flags.js` wrapper for all hooks so `ECC_HOOK_PROFILE` and `ECC_DISABLED_HOOKS` runtime gating works
- Always exit 0 on parse errors; log to stderr with `[HookName]` prefix
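A minimal skeleton that honors those conventions, with stdin JSON parsing, a `[HookName]` stderr prefix, and an unconditional `exit 0`; the hook name and payload fields below are placeholders, not real ECC hook names:

```js
#!/usr/bin/env node
// my-hook.js: skeleton for an ECC-style hook (illustrative only).
let raw = '';
process.stdin.on('data', chunk => { raw += chunk; });
process.stdin.on('end', () => {
  try {
    const input = JSON.parse(raw);
    // ...inspect fields such as input.tool_name or input.file_path here;
    // the exact payload shape varies by hook event.
  } catch (err) {
    // Never block tool execution on a parse failure.
    console.error(`[MyHook] ignoring malformed input: ${err.message}`);
  }
  process.exit(0);
});
```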
## Testing Requirements
- Run `node tests/run-all.js` before committing
- New scripts in `scripts/lib/` require a matching test in `tests/lib/`
- New hooks require at least one integration test in `tests/hooks/`
## Markdown / Agent Files
- Agents: YAML frontmatter with `name`, `description`, `tools`, `model`
- Skills: sections — When to Use, How It Works, Examples
- Commands: `description:` frontmatter line required
- Run `npx markdownlint-cli '**/*.md' --ignore node_modules` before committing

View File

@@ -1,98 +0,0 @@
# Everything Claude Code for CodeBuddy
Bring Everything Claude Code (ECC) workflows to CodeBuddy IDE. This repository provides custom commands, agents, skills, and rules that can be installed into any CodeBuddy project using the unified Target Adapter architecture.
## Quick Start (Recommended)
Use the unified install system for full lifecycle management:
```bash
# Install with default profile
node scripts/install-apply.js --target codebuddy --profile developer
# Install with full profile (all modules)
node scripts/install-apply.js --target codebuddy --profile full
# Dry-run to preview changes
node scripts/install-apply.js --target codebuddy --profile full --dry-run
```
## Management Commands
```bash
# Check installation health
node scripts/doctor.js --target codebuddy
# Repair installation
node scripts/repair.js --target codebuddy
# Uninstall cleanly (tracked via install-state)
node scripts/uninstall.js --target codebuddy
```
## Shell Script (Legacy)
The legacy shell scripts are still available for quick setup:
```bash
# Install to current project
cd /path/to/your/project
.codebuddy/install.sh
# Install globally
.codebuddy/install.sh ~
```
## What's Included
### Commands
Commands are on-demand workflows invocable via the `/` menu in CodeBuddy chat. All commands are reused directly from the project root's `commands/` folder.
### Agents
Agents are specialized AI assistants with specific tool configurations. All agents are reused directly from the project root's `agents/` folder.
### Skills
Skills are on-demand workflows invocable via the `/` menu in chat. All skills are reused directly from the project's `skills/` folder.
### Rules
Rules provide always-on rules and context that shape how the agent works with your code. Rules are flattened into namespaced files (e.g., `common-coding-style.md`) for CodeBuddy compatibility.
## Project Structure
```
.codebuddy/
├── commands/ # Command files (reused from project root)
├── agents/ # Agent files (reused from project root)
├── skills/ # Skill files (reused from skills/)
├── rules/ # Rule files (flattened from rules/)
├── ecc-install-state.json # Install state tracking
├── install.sh # Legacy install script
├── uninstall.sh # Legacy uninstall script
└── README.md # This file
```
## Benefits of Target Adapter Install
- **Install-state tracking**: Safe uninstall that only removes ECC-managed files
- **Doctor checks**: Verify installation health and detect drift
- **Repair**: Auto-fix broken installations
- **Selective install**: Choose specific modules via profiles
- **Cross-platform**: Node.js-based, works on Windows/macOS/Linux
## Recommended Workflow
1. **Start with planning**: Use `/plan` command to break down complex features
2. **Write tests first**: Invoke `/tdd` command before implementing
3. **Review your code**: Use `/code-review` after writing code
4. **Check security**: Use `/code-review` again for auth, API endpoints, or sensitive data handling
5. **Fix build errors**: Use `/build-fix` if there are build errors
## Next Steps
- Open your project in CodeBuddy
- Type `/` to see available commands
- Enjoy the ECC workflows!

View File

@@ -1,98 +0,0 @@
# Everything Claude Code for CodeBuddy
为 CodeBuddy IDE 带来 Everything Claude Code (ECC) 工作流。此仓库提供自定义命令、智能体、技能和规则,可以通过统一的 Target Adapter 架构安装到任何 CodeBuddy 项目中。
## 快速开始(推荐)
使用统一安装系统,获得完整的生命周期管理:
```bash
# 使用默认配置安装
node scripts/install-apply.js --target codebuddy --profile developer
# 使用完整配置安装(所有模块)
node scripts/install-apply.js --target codebuddy --profile full
# 预览模式查看变更
node scripts/install-apply.js --target codebuddy --profile full --dry-run
```
## 管理命令
```bash
# 检查安装健康状态
node scripts/doctor.js --target codebuddy
# 修复安装
node scripts/repair.js --target codebuddy
# 清洁卸载(通过 install-state 跟踪)
node scripts/uninstall.js --target codebuddy
```
## Shell 脚本(旧版)
旧版 Shell 脚本仍然可用于快速设置:
```bash
# 安装到当前项目
cd /path/to/your/project
.codebuddy/install.sh
# 全局安装
.codebuddy/install.sh ~
```
## 包含的内容
### 命令
命令是通过 CodeBuddy 聊天中的 `/` 菜单调用的按需工作流。所有命令都直接复用自项目根目录的 `commands/` 文件夹。
### 智能体
智能体是具有特定工具配置的专门 AI 助手。所有智能体都直接复用自项目根目录的 `agents/` 文件夹。
### 技能
技能是通过聊天中的 `/` 菜单调用的按需工作流。所有技能都直接复用自项目的 `skills/` 文件夹。
### 规则
规则提供始终适用的规则和上下文,塑造智能体处理代码的方式。规则会被扁平化为命名空间文件(如 `common-coding-style.md`)以兼容 CodeBuddy。
## 项目结构
```
.codebuddy/
├── commands/ # 命令文件(复用自项目根目录)
├── agents/ # 智能体文件(复用自项目根目录)
├── skills/ # 技能文件(复用自 skills/
├── rules/ # 规则文件(从 rules/ 扁平化)
├── ecc-install-state.json # 安装状态跟踪
├── install.sh # 旧版安装脚本
├── uninstall.sh # 旧版卸载脚本
└── README.zh-CN.md # 此文件
```
## Target Adapter 安装的优势
- **安装状态跟踪**:安全卸载,仅删除 ECC 管理的文件
- **Doctor 检查**:验证安装健康状态并检测偏移
- **修复**:自动修复损坏的安装
- **选择性安装**:通过配置文件选择特定模块
- **跨平台**:基于 Node.js,支持 Windows/macOS/Linux
## 推荐的工作流
1. **从计划开始**:使用 `/plan` 命令分解复杂功能
2. **先写测试**:在实现之前调用 `/tdd` 命令
3. **审查您的代码**:编写代码后使用 `/code-review`
4. **检查安全性**:对于身份验证、API 端点或敏感数据处理,再次使用 `/code-review`
5. **修复构建错误**:如果有构建错误,使用 `/build-fix`
## 下一步
- 在 CodeBuddy 中打开您的项目
- 输入 `/` 以查看可用命令
- 享受 ECC 工作流!

View File

@@ -1,312 +0,0 @@
#!/usr/bin/env node
/**
* ECC CodeBuddy Installer (Cross-platform Node.js version)
* Installs Everything Claude Code workflows into a CodeBuddy project.
*
* Usage:
* node install.js # Install to current directory
* node install.js ~ # Install globally to ~/.codebuddy/
*/
const fs = require('fs');
const path = require('path');
const os = require('os');
// Platform detection
const isWindows = process.platform === 'win32';
/**
* Get home directory cross-platform
*/
function getHomeDir() {
return process.env.USERPROFILE || process.env.HOME || os.homedir();
}
/**
* Ensure directory exists
*/
function ensureDir(dirPath) {
try {
if (!fs.existsSync(dirPath)) {
fs.mkdirSync(dirPath, { recursive: true });
}
} catch (err) {
if (err.code !== 'EEXIST') {
throw err;
}
}
}
/**
* Read lines from a file
*/
function readLines(filePath) {
try {
if (!fs.existsSync(filePath)) {
return [];
}
const content = fs.readFileSync(filePath, 'utf8');
return content.split('\n').filter(line => line.length > 0);
} catch {
return [];
}
}
/**
* Check if manifest contains an entry
*/
function manifestHasEntry(manifestPath, entry) {
const lines = readLines(manifestPath);
return lines.includes(entry);
}
/**
* Add entry to manifest
*/
function ensureManifestEntry(manifestPath, entry) {
try {
const lines = readLines(manifestPath);
if (!lines.includes(entry)) {
const content = lines.join('\n') + (lines.length > 0 ? '\n' : '') + entry + '\n';
fs.writeFileSync(manifestPath, content, 'utf8');
}
} catch (err) {
console.error(`Error updating manifest: ${err.message}`);
}
}
/**
* Copy a file and manage in manifest
*/
function copyManagedFile(sourcePath, targetPath, manifestPath, manifestEntry, makeExecutable = false) {
const alreadyManaged = manifestHasEntry(manifestPath, manifestEntry);
// If target file already exists
if (fs.existsSync(targetPath)) {
if (alreadyManaged) {
ensureManifestEntry(manifestPath, manifestEntry);
}
return false;
}
// Copy the file
try {
ensureDir(path.dirname(targetPath));
fs.copyFileSync(sourcePath, targetPath);
// Make executable on Unix systems
if (makeExecutable && !isWindows) {
fs.chmodSync(targetPath, 0o755);
}
ensureManifestEntry(manifestPath, manifestEntry);
return true;
} catch (err) {
console.error(`Error copying ${sourcePath}: ${err.message}`);
return false;
}
}
/**
* Recursively find files in a directory
*/
function findFiles(dir, extension = '') {
const results = [];
try {
if (!fs.existsSync(dir)) {
return results;
}
function walk(currentPath) {
try {
const entries = fs.readdirSync(currentPath, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentPath, entry.name);
if (entry.isDirectory()) {
walk(fullPath);
} else if (!extension || entry.name.endsWith(extension)) {
results.push(fullPath);
}
}
} catch {
// Ignore permission errors
}
}
walk(dir);
} catch {
// Ignore errors
}
return results.sort();
}
/**
* Main install function
*/
function doInstall() {
// Resolve script directory (where this file lives)
const scriptDir = path.dirname(path.resolve(__filename));
const repoRoot = path.dirname(scriptDir);
const codebuddyDirName = '.codebuddy';
// Parse arguments
let targetDir = process.cwd();
if (process.argv.length > 2) {
const arg = process.argv[2];
if (arg === '~' || arg === getHomeDir()) {
targetDir = getHomeDir();
} else {
targetDir = path.resolve(arg);
}
}
// Determine codebuddy full path
let codebuddyFullPath;
const baseName = path.basename(targetDir);
if (baseName === codebuddyDirName) {
codebuddyFullPath = targetDir;
} else {
codebuddyFullPath = path.join(targetDir, codebuddyDirName);
}
console.log('ECC CodeBuddy Installer');
console.log('=======================');
console.log('');
console.log(`Source: ${repoRoot}`);
console.log(`Target: ${codebuddyFullPath}/`);
console.log('');
// Create subdirectories
const subdirs = ['commands', 'agents', 'skills', 'rules'];
for (const dir of subdirs) {
ensureDir(path.join(codebuddyFullPath, dir));
}
// Manifest file
const manifest = path.join(codebuddyFullPath, '.ecc-manifest');
ensureDir(path.dirname(manifest));
// Counters
let commands = 0;
let agents = 0;
let skills = 0;
let rules = 0;
// Copy commands
const commandsDir = path.join(repoRoot, 'commands');
if (fs.existsSync(commandsDir)) {
const files = findFiles(commandsDir, '.md');
for (const file of files) {
if (path.basename(path.dirname(file)) === 'commands') {
const localName = path.basename(file);
const targetPath = path.join(codebuddyFullPath, 'commands', localName);
if (copyManagedFile(file, targetPath, manifest, `commands/${localName}`)) {
commands += 1;
}
}
}
}
// Copy agents
const agentsDir = path.join(repoRoot, 'agents');
if (fs.existsSync(agentsDir)) {
const files = findFiles(agentsDir, '.md');
for (const file of files) {
if (path.basename(path.dirname(file)) === 'agents') {
const localName = path.basename(file);
const targetPath = path.join(codebuddyFullPath, 'agents', localName);
if (copyManagedFile(file, targetPath, manifest, `agents/${localName}`)) {
agents += 1;
}
}
}
}
// Copy skills (with subdirectories)
const skillsDir = path.join(repoRoot, 'skills');
if (fs.existsSync(skillsDir)) {
const skillDirs = fs.readdirSync(skillsDir, { withFileTypes: true })
.filter(entry => entry.isDirectory())
.map(entry => entry.name);
for (const skillName of skillDirs) {
const sourceSkillDir = path.join(skillsDir, skillName);
const targetSkillDir = path.join(codebuddyFullPath, 'skills', skillName);
let skillCopied = false;
const skillFiles = findFiles(sourceSkillDir);
for (const sourceFile of skillFiles) {
const relativePath = path.relative(sourceSkillDir, sourceFile);
const targetPath = path.join(targetSkillDir, relativePath);
const manifestEntry = `skills/${skillName}/${relativePath.replace(/\\/g, '/')}`;
if (copyManagedFile(sourceFile, targetPath, manifest, manifestEntry)) {
skillCopied = true;
}
}
if (skillCopied) {
skills += 1;
}
}
}
// Copy rules (with subdirectories)
const rulesDir = path.join(repoRoot, 'rules');
if (fs.existsSync(rulesDir)) {
const ruleFiles = findFiles(rulesDir);
for (const ruleFile of ruleFiles) {
const relativePath = path.relative(rulesDir, ruleFile);
const targetPath = path.join(codebuddyFullPath, 'rules', relativePath);
const manifestEntry = `rules/${relativePath.replace(/\\/g, '/')}`;
if (copyManagedFile(ruleFile, targetPath, manifest, manifestEntry)) {
rules += 1;
}
}
}
// Copy README files (skip install/uninstall scripts to avoid broken
// path references when the copied script runs from the target directory)
const readmeFiles = ['README.md', 'README.zh-CN.md'];
for (const readmeFile of readmeFiles) {
const sourcePath = path.join(scriptDir, readmeFile);
if (fs.existsSync(sourcePath)) {
const targetPath = path.join(codebuddyFullPath, readmeFile);
copyManagedFile(sourcePath, targetPath, manifest, readmeFile);
}
}
// Add manifest itself
ensureManifestEntry(manifest, '.ecc-manifest');
// Print summary
console.log('Installation complete!');
console.log('');
console.log('Components installed:');
console.log(` Commands: ${commands}`);
console.log(` Agents: ${agents}`);
console.log(` Skills: ${skills}`);
console.log(` Rules: ${rules}`);
console.log('');
console.log(`Directory: ${path.basename(codebuddyFullPath)}`);
console.log('');
console.log('Next steps:');
console.log(' 1. Open your project in CodeBuddy');
console.log(' 2. Type / to see available commands');
console.log(' 3. Enjoy the ECC workflows!');
console.log('');
console.log('To uninstall later:');
console.log(` cd ${codebuddyFullPath}`);
console.log(' node uninstall.js');
console.log('');
}
// Run installer
try {
doInstall();
} catch (error) {
console.error(`Error: ${error.message}`);
process.exit(1);
}

View File

@@ -1,231 +0,0 @@
#!/bin/bash
#
# ECC CodeBuddy Installer
# Installs Everything Claude Code workflows into a CodeBuddy project.
#
# Usage:
# ./install.sh # Install to current directory
# ./install.sh ~ # Install globally to ~/.codebuddy/
#
set -euo pipefail
# When globs match nothing, expand to empty list instead of the literal pattern
shopt -s nullglob
# Resolve the directory where this script lives
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# Locate the ECC repo root by walking up from SCRIPT_DIR to find the marker
# file (VERSION). This keeps the script working even when it has been copied
# into a target project's .codebuddy/ directory.
find_repo_root() {
local dir="$(dirname "$SCRIPT_DIR")"
# First try the parent of SCRIPT_DIR (original layout: .codebuddy/ lives in repo root)
if [ -f "$dir/VERSION" ] && [ -d "$dir/commands" ] && [ -d "$dir/agents" ]; then
echo "$dir"
return 0
fi
echo ""
return 1
}
REPO_ROOT="$(find_repo_root)"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Cannot locate the ECC repository root."
echo "This script must be run from within the ECC repository's .codebuddy/ directory."
exit 1
fi
# CodeBuddy directory name
CODEBUDDY_DIR=".codebuddy"
ensure_manifest_entry() {
local manifest="$1"
local entry="$2"
touch "$manifest"
if ! grep -Fqx "$entry" "$manifest"; then
echo "$entry" >> "$manifest"
fi
}
manifest_has_entry() {
local manifest="$1"
local entry="$2"
[ -f "$manifest" ] && grep -Fqx "$entry" "$manifest"
}
copy_managed_file() {
local source_path="$1"
local target_path="$2"
local manifest="$3"
local manifest_entry="$4"
local make_executable="${5:-0}"
local already_managed=0
if manifest_has_entry "$manifest" "$manifest_entry"; then
already_managed=1
fi
if [ -f "$target_path" ]; then
if [ "$already_managed" -eq 1 ]; then
ensure_manifest_entry "$manifest" "$manifest_entry"
fi
return 1
fi
cp "$source_path" "$target_path"
if [ "$make_executable" -eq 1 ]; then
chmod +x "$target_path"
fi
ensure_manifest_entry "$manifest" "$manifest_entry"
return 0
}
# Install function
do_install() {
local target_dir="$PWD"
# Check if ~ was specified (or expanded to $HOME)
if [ "$#" -ge 1 ]; then
if [ "$1" = "~" ] || [ "$1" = "$HOME" ]; then
target_dir="$HOME"
fi
fi
# Check if we're already inside a .codebuddy directory
local current_dir_name="$(basename "$target_dir")"
local codebuddy_full_path
if [ "$current_dir_name" = ".codebuddy" ]; then
# Already inside the codebuddy directory, use it directly
codebuddy_full_path="$target_dir"
else
# Normal case: append CODEBUDDY_DIR to target_dir
codebuddy_full_path="$target_dir/$CODEBUDDY_DIR"
fi
echo "ECC CodeBuddy Installer"
echo "======================="
echo ""
echo "Source: $REPO_ROOT"
echo "Target: $codebuddy_full_path/"
echo ""
# Subdirectories to create
SUBDIRS="commands agents skills rules"
# Create all required codebuddy subdirectories
for dir in $SUBDIRS; do
mkdir -p "$codebuddy_full_path/$dir"
done
# Manifest file to track installed files
MANIFEST="$codebuddy_full_path/.ecc-manifest"
touch "$MANIFEST"
# Counters for summary
commands=0
agents=0
skills=0
rules=0
# Copy commands from repo root
if [ -d "$REPO_ROOT/commands" ]; then
for f in "$REPO_ROOT/commands"/*.md; do
[ -f "$f" ] || continue
local_name=$(basename "$f")
target_path="$codebuddy_full_path/commands/$local_name"
if copy_managed_file "$f" "$target_path" "$MANIFEST" "commands/$local_name"; then
commands=$((commands + 1))
fi
done
fi
# Copy agents from repo root
if [ -d "$REPO_ROOT/agents" ]; then
for f in "$REPO_ROOT/agents"/*.md; do
[ -f "$f" ] || continue
local_name=$(basename "$f")
target_path="$codebuddy_full_path/agents/$local_name"
if copy_managed_file "$f" "$target_path" "$MANIFEST" "agents/$local_name"; then
agents=$((agents + 1))
fi
done
fi
# Copy skills from repo root (if available)
if [ -d "$REPO_ROOT/skills" ]; then
for d in "$REPO_ROOT/skills"/*/; do
[ -d "$d" ] || continue
skill_name="$(basename "$d")"
target_skill_dir="$codebuddy_full_path/skills/$skill_name"
skill_copied=0
while IFS= read -r source_file; do
relative_path="${source_file#$d}"
target_path="$target_skill_dir/$relative_path"
mkdir -p "$(dirname "$target_path")"
if copy_managed_file "$source_file" "$target_path" "$MANIFEST" "skills/$skill_name/$relative_path"; then
skill_copied=1
fi
done < <(find "$d" -type f | sort)
if [ "$skill_copied" -eq 1 ]; then
skills=$((skills + 1))
fi
done
fi
# Copy rules from repo root
if [ -d "$REPO_ROOT/rules" ]; then
while IFS= read -r rule_file; do
relative_path="${rule_file#$REPO_ROOT/rules/}"
target_path="$codebuddy_full_path/rules/$relative_path"
mkdir -p "$(dirname "$target_path")"
if copy_managed_file "$rule_file" "$target_path" "$MANIFEST" "rules/$relative_path"; then
rules=$((rules + 1))
fi
done < <(find "$REPO_ROOT/rules" -type f | sort)
fi
# Copy README files (skip install/uninstall scripts to avoid broken
# path references when the copied script runs from the target directory)
for readme_file in "$SCRIPT_DIR/README.md" "$SCRIPT_DIR/README.zh-CN.md"; do
if [ -f "$readme_file" ]; then
local_name=$(basename "$readme_file")
target_path="$codebuddy_full_path/$local_name"
copy_managed_file "$readme_file" "$target_path" "$MANIFEST" "$local_name" || true
fi
done
# Add manifest file itself to manifest
ensure_manifest_entry "$MANIFEST" ".ecc-manifest"
# Installation summary
echo "Installation complete!"
echo ""
echo "Components installed:"
echo " Commands: $commands"
echo " Agents: $agents"
echo " Skills: $skills"
echo " Rules: $rules"
echo ""
echo "Directory: $(basename "$codebuddy_full_path")"
echo ""
echo "Next steps:"
echo " 1. Open your project in CodeBuddy"
echo " 2. Type / to see available commands"
echo " 3. Enjoy the ECC workflows!"
echo ""
echo "To uninstall later:"
echo " cd $codebuddy_full_path"
echo " ./uninstall.sh"
}
# Main logic
do_install "$@"

View File

@@ -1,291 +0,0 @@
#!/usr/bin/env node
/**
* ECC CodeBuddy Uninstaller (Cross-platform Node.js version)
* Uninstalls Everything Claude Code workflows from a CodeBuddy project.
*
* Usage:
* node uninstall.js # Uninstall from current directory
* node uninstall.js ~ # Uninstall globally from ~/.codebuddy/
*/
const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
/**
* Get home directory cross-platform
*/
function getHomeDir() {
return process.env.USERPROFILE || process.env.HOME || os.homedir();
}
/**
* Resolve a path to its canonical form
*/
function resolvePath(filePath) {
try {
return fs.realpathSync(filePath);
} catch {
// If realpath fails, return the path as-is
return path.resolve(filePath);
}
}
/**
* Check if a manifest entry is valid (security check)
*/
function isValidManifestEntry(entry) {
// Reject empty, absolute paths, parent directory references
if (!entry || entry.length === 0) return false;
if (entry.startsWith('/')) return false;
if (entry.startsWith('~')) return false;
if (entry.includes('/..') || entry.includes('\\..\\')) return false;
if (entry.startsWith('../') || entry.startsWith('..\\')) return false;
if (entry === '..' || entry === '...') return false;
return true;
}
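// Example boundary for the checks above (illustrative values):
//   isValidManifestEntry('commands/plan.md')  -> true
//   isValidManifestEntry('../escape.md')      -> false  (parent traversal)
//   isValidManifestEntry('/etc/passwd')       -> false  (absolute path)
//   isValidManifestEntry('~secret')           -> false  (home-relative)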
/**
* Read lines from manifest file
*/
function readManifest(manifestPath) {
try {
if (!fs.existsSync(manifestPath)) {
return [];
}
const content = fs.readFileSync(manifestPath, 'utf8');
return content.split('\n').filter(line => line.length > 0);
} catch {
return [];
}
}
/**
* Recursively find empty directories
*/
function findEmptyDirs(dirPath) {
const emptyDirs = [];
function walkDirs(currentPath) {
try {
const entries = fs.readdirSync(currentPath, { withFileTypes: true });
const subdirs = entries.filter(e => e.isDirectory());
for (const subdir of subdirs) {
const subdirPath = path.join(currentPath, subdir.name);
walkDirs(subdirPath);
}
// Check if directory is now empty
try {
const remaining = fs.readdirSync(currentPath);
if (remaining.length === 0 && currentPath !== dirPath) {
emptyDirs.push(currentPath);
}
} catch {
// Directory might have been deleted
}
} catch {
// Ignore errors
}
}
walkDirs(dirPath);
return emptyDirs.sort().reverse(); // deepest paths first so children are removed before their parents
}
/**
* Prompt user for confirmation
*/
async function promptConfirm(question) {
return new Promise((resolve) => {
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
rl.question(question, (answer) => {
rl.close();
resolve(/^[yY]$/.test(answer));
});
});
}
/**
* Main uninstall function
*/
async function doUninstall() {
const codebuddyDirName = '.codebuddy';
// Parse arguments
let targetDir = process.cwd();
if (process.argv.length > 2) {
const arg = process.argv[2];
if (arg === '~' || arg === getHomeDir()) {
targetDir = getHomeDir();
} else {
targetDir = path.resolve(arg);
}
}
// Determine codebuddy full path
let codebuddyFullPath;
const baseName = path.basename(targetDir);
if (baseName === codebuddyDirName) {
codebuddyFullPath = targetDir;
} else {
codebuddyFullPath = path.join(targetDir, codebuddyDirName);
}
console.log('ECC CodeBuddy Uninstaller');
console.log('==========================');
console.log('');
console.log(`Target: ${codebuddyFullPath}/`);
console.log('');
// Check if codebuddy directory exists
if (!fs.existsSync(codebuddyFullPath)) {
console.error(`Error: ${codebuddyDirName} directory not found at ${targetDir}`);
process.exit(1);
}
const codebuddyRootResolved = resolvePath(codebuddyFullPath);
const manifest = path.join(codebuddyFullPath, '.ecc-manifest');
// Handle missing manifest
if (!fs.existsSync(manifest)) {
console.log('Warning: No manifest file found (.ecc-manifest)');
console.log('');
console.log('This could mean:');
console.log(' 1. ECC was installed with an older version without manifest support');
console.log(' 2. The manifest file was manually deleted');
console.log('');
const confirmed = await promptConfirm(`Do you want to remove the entire ${codebuddyDirName} directory? (y/N) `);
if (!confirmed) {
console.log('Uninstall cancelled.');
process.exit(0);
}
try {
fs.rmSync(codebuddyFullPath, { recursive: true, force: true });
console.log('Uninstall complete!');
console.log('');
console.log(`Removed: ${codebuddyFullPath}/`);
} catch (err) {
console.error(`Error removing directory: ${err.message}`);
process.exit(1);
}
return;
}
console.log('Found manifest file - will only remove files installed by ECC');
console.log('');
const confirmed = await promptConfirm(`Are you sure you want to uninstall ECC from ${codebuddyDirName}? (y/N) `);
if (!confirmed) {
console.log('Uninstall cancelled.');
process.exit(0);
}
// Read manifest and remove files
const manifestLines = readManifest(manifest);
let removed = 0;
let skipped = 0;
for (const filePath of manifestLines) {
if (!filePath || filePath.length === 0) continue;
if (!isValidManifestEntry(filePath)) {
console.log(`Skipped: ${filePath} (invalid manifest entry)`);
skipped += 1;
continue;
}
const fullPath = path.join(codebuddyFullPath, filePath);
// Security check: use path.relative() to ensure the manifest entry
// resolves inside the codebuddy directory. This is stricter than
// startsWith and correctly handles edge-cases with symlinks.
const relative = path.relative(codebuddyRootResolved, path.resolve(fullPath));
if (relative.startsWith('..') || path.isAbsolute(relative)) {
console.log(`Skipped: ${filePath} (outside target directory)`);
skipped += 1;
continue;
}
try {
const stats = fs.lstatSync(fullPath);
if (stats.isFile() || stats.isSymbolicLink()) {
fs.unlinkSync(fullPath);
console.log(`Removed: ${filePath}`);
removed += 1;
} else if (stats.isDirectory()) {
try {
const files = fs.readdirSync(fullPath);
if (files.length === 0) {
fs.rmdirSync(fullPath);
console.log(`Removed: ${filePath}/`);
removed += 1;
} else {
console.log(`Skipped: ${filePath}/ (not empty - contains user files)`);
skipped += 1;
}
} catch {
console.log(`Skipped: ${filePath}/ (not empty - contains user files)`);
skipped += 1;
}
}
} catch {
skipped += 1;
}
}
// Remove empty directories
const emptyDirs = findEmptyDirs(codebuddyFullPath);
for (const emptyDir of emptyDirs) {
try {
fs.rmdirSync(emptyDir);
const relativePath = path.relative(codebuddyFullPath, emptyDir);
console.log(`Removed: ${relativePath}/`);
removed += 1;
} catch {
// Directory might not be empty anymore
}
}
// Try to remove main codebuddy directory if empty
try {
const files = fs.readdirSync(codebuddyFullPath);
if (files.length === 0) {
fs.rmdirSync(codebuddyFullPath);
console.log(`Removed: ${codebuddyDirName}/`);
removed += 1;
}
} catch {
// Directory not empty
}
// Print summary
console.log('');
console.log('Uninstall complete!');
console.log('');
console.log('Summary:');
console.log(` Removed: ${removed} items`);
console.log(` Skipped: ${skipped} items (not found or user-modified)`);
console.log('');
if (fs.existsSync(codebuddyFullPath)) {
console.log(`Note: ${codebuddyDirName} directory still exists (contains user-added files)`);
}
}
// Run uninstaller
doUninstall().catch((error) => {
console.error(`Error: ${error.message}`);
process.exit(1);
});

View File

@@ -1,184 +0,0 @@
#!/bin/bash
#
# ECC CodeBuddy Uninstaller
# Uninstalls Everything Claude Code workflows from a CodeBuddy project.
#
# Usage:
# ./uninstall.sh # Uninstall from current directory
# ./uninstall.sh ~ # Uninstall globally from ~/.codebuddy/
#
set -euo pipefail
# Resolve the directory where this script lives
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# CodeBuddy directory name
CODEBUDDY_DIR=".codebuddy"
resolve_path() {
python3 -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' "$1"
}
is_valid_manifest_entry() {
local file_path="$1"
case "$file_path" in
""|/*|~*|*/../*|../*|*/..|..)
return 1
;;
esac
return 0
}
# Main uninstall function
do_uninstall() {
local target_dir="$PWD"
# Check if ~ was specified (or expanded to $HOME)
if [ "$#" -ge 1 ]; then
if [ "$1" = "~" ] || [ "$1" = "$HOME" ]; then
target_dir="$HOME"
fi
fi
# Check if we're already inside a .codebuddy directory
local current_dir_name="$(basename "$target_dir")"
local codebuddy_full_path
if [ "$current_dir_name" = ".codebuddy" ]; then
# Already inside the codebuddy directory, use it directly
codebuddy_full_path="$target_dir"
else
# Normal case: append CODEBUDDY_DIR to target_dir
codebuddy_full_path="$target_dir/$CODEBUDDY_DIR"
fi
echo "ECC CodeBuddy Uninstaller"
echo "=========================="
echo ""
echo "Target: $codebuddy_full_path/"
echo ""
if [ ! -d "$codebuddy_full_path" ]; then
echo "Error: $CODEBUDDY_DIR directory not found at $target_dir"
exit 1
fi
codebuddy_root_resolved="$(resolve_path "$codebuddy_full_path")"
# Manifest file path
MANIFEST="$codebuddy_full_path/.ecc-manifest"
if [ ! -f "$MANIFEST" ]; then
echo "Warning: No manifest file found (.ecc-manifest)"
echo ""
echo "This could mean:"
echo " 1. ECC was installed with an older version without manifest support"
echo " 2. The manifest file was manually deleted"
echo ""
read -p "Do you want to remove the entire $CODEBUDDY_DIR directory? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Uninstall cancelled."
exit 0
fi
rm -rf "$codebuddy_full_path"
echo "Uninstall complete!"
echo ""
echo "Removed: $codebuddy_full_path/"
exit 0
fi
echo "Found manifest file - will only remove files installed by ECC"
echo ""
read -p "Are you sure you want to uninstall ECC from $CODEBUDDY_DIR? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Uninstall cancelled."
exit 0
fi
# Counters
removed=0
skipped=0
# Read manifest and remove files
while IFS= read -r file_path; do
[ -z "$file_path" ] && continue
if ! is_valid_manifest_entry "$file_path"; then
echo "Skipped: $file_path (invalid manifest entry)"
skipped=$((skipped + 1))
continue
fi
full_path="$codebuddy_full_path/$file_path"
# Security check: ensure the path resolves inside the target directory.
# Use Python to compute a reliable relative path so symlinks cannot
# escape the boundary.
relative="$(python3 -c 'import os,sys; print(os.path.relpath(os.path.abspath(sys.argv[1]), sys.argv[2]))' "$full_path" "$codebuddy_root_resolved")"
case "$relative" in
../*|..)
echo "Skipped: $file_path (outside target directory)"
skipped=$((skipped + 1))
continue
;;
esac
if [ -L "$full_path" ] || [ -f "$full_path" ]; then
rm -f "$full_path"
echo "Removed: $file_path"
removed=$((removed + 1))
elif [ -d "$full_path" ]; then
# Only remove directory if it's empty
if [ -z "$(ls -A "$full_path" 2>/dev/null)" ]; then
rmdir "$full_path" 2>/dev/null || true
if [ ! -d "$full_path" ]; then
echo "Removed: $file_path/"
removed=$((removed + 1))
fi
else
echo "Skipped: $file_path/ (not empty - contains user files)"
skipped=$((skipped + 1))
fi
else
skipped=$((skipped + 1))
fi
done < "$MANIFEST"
while IFS= read -r empty_dir; do
[ "$empty_dir" = "$codebuddy_full_path" ] && continue
relative_dir="${empty_dir#$codebuddy_full_path/}"
rmdir "$empty_dir" 2>/dev/null || true
if [ ! -d "$empty_dir" ]; then
echo "Removed: $relative_dir/"
removed=$((removed + 1))
fi
done < <(find "$codebuddy_full_path" -depth -type d -empty 2>/dev/null | sort -r)
# Try to remove the main codebuddy directory if it's empty
if [ -d "$codebuddy_full_path" ] && [ -z "$(ls -A "$codebuddy_full_path" 2>/dev/null)" ]; then
rmdir "$codebuddy_full_path" 2>/dev/null || true
if [ ! -d "$codebuddy_full_path" ]; then
echo "Removed: $CODEBUDDY_DIR/"
removed=$((removed + 1))
fi
fi
echo ""
echo "Uninstall complete!"
echo ""
echo "Summary:"
echo " Removed: $removed items"
echo " Skipped: $skipped items (not found or user-modified)"
echo ""
if [ -d "$codebuddy_full_path" ]; then
echo "Note: $CODEBUDDY_DIR directory still exists (contains user-added files)"
fi
}
# Execute uninstall
do_uninstall "$@"

View File

@@ -1,54 +0,0 @@
# .codex-plugin — Codex Native Plugin for ECC
This directory contains the **Codex plugin manifest** for Everything Claude Code.
## Structure
```
.codex-plugin/
└── plugin.json — Codex plugin manifest (name, version, skills ref, MCP ref)
.mcp.json — MCP server configurations at plugin root (NOT inside .codex-plugin/)
```
## What This Provides
- **156 skills** from `./skills/` — reusable Codex workflows for TDD, security,
code review, architecture, and more
- **6 MCP servers** — GitHub, Context7, Exa, Memory, Playwright, Sequential Thinking
## Installation
Codex plugin support is currently in preview. Once generally available:
```bash
# Install from Codex CLI
codex plugin install affaan-m/everything-claude-code

# Or reference locally during development
codex plugin install ./
```

Run this from the repository root so `./` points to the repo root and `.mcp.json` resolves correctly.
The installed plugin registers under the short slug `ecc` so tool and command names
stay below provider length limits.
## MCP Servers Included
| Server | Purpose |
|---|---|
| `github` | GitHub API access |
| `context7` | Live documentation lookup |
| `exa` | Neural web search |
| `memory` | Persistent memory across sessions |
| `playwright` | Browser automation & E2E testing |
| `sequential-thinking` | Step-by-step reasoning |
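Everything in this table is driven by `.mcp.json` at the plugin root. A quick sketch for listing what is configured, assuming the conventional top-level `mcpServers` key used by MCP JSON configs:

```js
// list-mcp-servers.js: print configured MCP servers from .mcp.json.
const fs = require('fs');

const config = JSON.parse(fs.readFileSync('.mcp.json', 'utf8'));
for (const [name, server] of Object.entries(config.mcpServers || {})) {
  // stdio servers declare command/args; remote servers declare a url.
  const launch = server.command
    ? `${server.command} ${(server.args || []).join(' ')}`
    : server.url;
  console.log(`${name}: ${launch}`);
}
```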
## Notes
- The `skills/` directory at the repo root is shared between Claude Code (`.claude-plugin/`)
and Codex (`.codex-plugin/`) — same source of truth, no duplication
- ECC is moving to a skills-first workflow surface. Legacy `commands/` remain for
compatibility on harnesses that still expect slash-entry shims.
- MCP server credentials are inherited from the launching environment (env vars)
- This manifest does **not** override `~/.codex/config.toml` settings

View File

@@ -1,30 +0,0 @@
{
"name": "ecc",
"version": "2.0.0-rc.1",
"description": "Battle-tested Codex workflows — 156 shared ECC skills, production-ready MCP configs, and selective-install-aligned conventions for TDD, security scanning, code review, and autonomous development.",
"author": {
"name": "Affaan Mustafa",
"email": "me@affaanmustafa.com",
"url": "https://x.com/affaanmustafa"
},
"homepage": "https://ecc.tools",
"repository": "https://github.com/affaan-m/everything-claude-code",
"license": "MIT",
"keywords": ["codex", "agents", "skills", "tdd", "code-review", "security", "workflow", "automation"],
"skills": "./skills/",
"mcpServers": "./.mcp.json",
"interface": {
"displayName": "Everything Claude Code",
"shortDescription": "156 battle-tested ECC skills plus MCP configs for TDD, security, code review, and autonomous development.",
"longDescription": "Everything Claude Code (ECC) is a community-maintained collection of Codex-ready skills and MCP configs evolved over 10+ months of intensive daily use. It covers TDD workflows, security scanning, code review, architecture decisions, operator workflows, and more — all in one installable plugin.",
"developerName": "Affaan Mustafa",
"category": "Productivity",
"capabilities": ["Read", "Write"],
"websiteURL": "https://ecc.tools",
"defaultPrompt": [
"Use the tdd-workflow skill to write tests before implementation.",
"Use the security-review skill to scan for OWASP Top 10 vulnerabilities.",
"Use the verification-loop skill to verify correctness before shipping changes."
]
}
}

View File

@@ -46,15 +46,12 @@ Available skills:
Treat the project-local `.codex/config.toml` as the default Codex baseline for ECC. The current ECC baseline enables GitHub, Context7, Exa, Memory, Playwright, and Sequential Thinking; add heavier extras in `~/.codex/config.toml` only when a task actually needs them.

-ECC's canonical Codex section name is `[mcp_servers.context7]`. The launcher package remains `@upstash/context7-mcp`; only the TOML section name is normalized for consistency with `codex mcp list` and the reference config.
-
### Automatic config.toml merging

The sync script (`scripts/sync-ecc-to-codex.sh`) uses a Node-based TOML parser to safely merge ECC MCP servers into `~/.codex/config.toml`:

- **Add-only by default** — missing ECC servers are appended; existing servers are never modified or removed.
- **7 managed servers** — Supabase, Playwright, Context7, Exa, GitHub, Memory, Sequential Thinking.
-- **Canonical naming** — ECC manages Context7 as `[mcp_servers.context7]`; legacy `[mcp_servers.context7-mcp]` entries are treated as aliases during updates.
- **Package-manager aware** — uses the project's configured package manager (npm/pnpm/yarn/bun) instead of hardcoding `pnpm`.
- **Drift warnings** — if an existing server's config differs from the ECC recommendation, the script logs a warning.
- **`--update-mcp`** — explicitly replaces all ECC-managed servers with the latest recommended config (safely removes subtables like `[mcp_servers.supabase.env]`).
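A conceptual, line-based sketch of the add-only behavior described above; the real sync script uses a proper TOML parser, and the naive `includes` header check below would be fooled by commented-out sections, so treat this as an illustration only:

```js
// merge-mcp.js: append missing [mcp_servers.*] sections (conceptual sketch).
const fs = require('fs');
const os = require('os');
const path = require('path');

const configPath = path.join(os.homedir(), '.codex', 'config.toml');
const existing = fs.readFileSync(configPath, 'utf8');

// Recommended blocks keyed by section name (subset shown for brevity).
const recommended = {
  'mcp_servers.memory':
    '[mcp_servers.memory]\ncommand = "npx"\nargs = ["-y", "@modelcontextprotocol/server-memory"]\n',
};

let out = existing;
for (const [section, block] of Object.entries(recommended)) {
  if (!existing.includes(`[${section}]`)) {
    out += `\n${block}`; // add-only: append, never rewrite existing entries
  } else {
    console.warn(`[merge-mcp] ${section} already present, left untouched`);
  }
}
fs.writeFileSync(configPath, out);
```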

View File

@@ -27,10 +27,7 @@ notify = [
"-sound", "default", "-sound", "default",
] ]
# Persistent instructions are appended to every prompt (additive, unlike # Prefer AGENTS.md and project-local .codex/AGENTS.md for instructions.
# model_instructions_file which replaces AGENTS.md).
persistent_instructions = "Follow project AGENTS.md guidelines. Use available MCP servers when they can help."
# model_instructions_file replaces built-in instructions instead of AGENTS.md, # model_instructions_file replaces built-in instructions instead of AGENTS.md,
# so leave it unset unless you intentionally want a single override file. # so leave it unset unless you intentionally want a single override file.
# model_instructions_file = "/absolute/path/to/instructions.md" # model_instructions_file = "/absolute/path/to/instructions.md"
@@ -41,14 +38,10 @@ persistent_instructions = "Follow project AGENTS.md guidelines. Use available MC
[mcp_servers.github] [mcp_servers.github]
command = "npx" command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"] args = ["-y", "@modelcontextprotocol/server-github"]
startup_timeout_sec = 30
[mcp_servers.context7] [mcp_servers.context7]
command = "npx" command = "npx"
# Canonical Codex section name is `context7`; the package itself remains
# `@upstash/context7-mcp`.
args = ["-y", "@upstash/context7-mcp@latest"] args = ["-y", "@upstash/context7-mcp@latest"]
startup_timeout_sec = 30
[mcp_servers.exa] [mcp_servers.exa]
url = "https://mcp.exa.ai/mcp" url = "https://mcp.exa.ai/mcp"
@@ -56,17 +49,14 @@ url = "https://mcp.exa.ai/mcp"
[mcp_servers.memory] [mcp_servers.memory]
command = "npx" command = "npx"
args = ["-y", "@modelcontextprotocol/server-memory"] args = ["-y", "@modelcontextprotocol/server-memory"]
startup_timeout_sec = 30
[mcp_servers.playwright] [mcp_servers.playwright]
command = "npx" command = "npx"
args = ["-y", "@playwright/mcp@latest", "--extension"] args = ["-y", "@playwright/mcp@latest", "--extension"]
startup_timeout_sec = 30
[mcp_servers.sequential-thinking] [mcp_servers.sequential-thinking]
command = "npx" command = "npx"
args = ["-y", "@modelcontextprotocol/server-sequential-thinking"] args = ["-y", "@modelcontextprotocol/server-sequential-thinking"]
startup_timeout_sec = 30
# Additional MCP servers (uncomment as needed): # Additional MCP servers (uncomment as needed):
# [mcp_servers.supabase] # [mcp_servers.supabase]
@@ -86,8 +76,7 @@ startup_timeout_sec = 30
# args = ["-y", "@cloudflare/mcp-server-cloudflare"] # args = ["-y", "@cloudflare/mcp-server-cloudflare"]
[features] [features]
# Codex multi-agent collaboration is stable and on by default in current builds. # Codex multi-agent support is experimental as of March 2026.
# Keep the explicit toggle here so the repo documents its expectation clearly.
multi_agent = true multi_agent = true
# Profiles — switch with `codex -p <name>` # Profiles — switch with `codex -p <name>`
@@ -102,8 +91,6 @@ sandbox_mode = "workspace-write"
web_search = "live" web_search = "live"
[agents] [agents]
# Multi-agent role limits and local role definitions.
# These map to `.codex/agents/*.toml` and mirror the repo's explorer/reviewer/docs workflow.
max_threads = 6 max_threads = 6
max_depth = 1 max_depth = 1

View File

@@ -1,5 +1,4 @@
{
-  "version": 1,
  "hooks": {
    "sessionStart": [
      {
@@ -38,7 +37,7 @@
      {
        "command": "node .cursor/hooks/after-file-edit.js",
        "event": "afterFileEdit",
-        "description": "Auto-format, TypeScript check, console.log warning, and frontend design-quality reminder"
+        "description": "Auto-format, TypeScript check, console.log warning"
      }
    ],
    "beforeMCPExecution": [

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env node
-const { hookEnabled, readStdin, runExistingHook, transformToClaude } = require('./adapter');
+const { readStdin, runExistingHook, transformToClaude } = require('./adapter');
readStdin().then(raw => {
  try {
    const input = JSON.parse(raw);
@@ -8,12 +8,10 @@ readStdin().then(raw => {
    });
    const claudeStr = JSON.stringify(claudeInput);
-    // Accumulate edited paths for batch format+typecheck at stop time
-    runExistingHook('post-edit-accumulator.js', claudeStr);
+    // Run format, typecheck, and console.log warning sequentially
+    runExistingHook('post-edit-format.js', claudeStr);
+    runExistingHook('post-edit-typecheck.js', claudeStr);
    runExistingHook('post-edit-console-warn.js', claudeStr);
-    if (hookEnabled('post:edit:design-quality-check', ['standard', 'strict'])) {
-      runExistingHook('design-quality-check.js', claudeStr);
-    }
  } catch {}
  process.stdout.write(raw);
}).catch(() => process.exit(0));
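The adapter's `runExistingHook` helper is referenced but not shown in this diff. A plausible shape for it, assuming the Claude-format payload is passed on the child process's stdin and that the original hooks live under `scripts/hooks/`; both are assumptions, not the actual adapter code:

```js
// Sketch of what ./adapter's runExistingHook could look like (assumed).
const { spawnSync } = require('child_process');
const path = require('path');

function runExistingHook(scriptName, claudeInputStr) {
  // Hand the Claude-format JSON to the original hook on stdin.
  const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', scriptName);
  const result = spawnSync('node', [script], {
    input: claudeInputStr,
    encoding: 'utf8',
    timeout: 30000, // hooks are expected to finish well within 30s
  });
  if (result.stderr) process.stderr.write(result.stderr);
  return result.status;
}
```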

View File

@@ -1,48 +0,0 @@
# ECC for Gemini CLI
This file provides Gemini CLI with the baseline ECC workflow, review standards, and security checks for repositories that install the Gemini target.
## Overview
Everything Claude Code (ECC) is a cross-harness coding system with 36 specialized agents, 142 skills, and 68 commands.
Gemini support is currently focused on a strong project-local instruction layer via `.gemini/GEMINI.md`, plus the shared MCP catalog and package-manager setup assets shipped by the installer.
## Core Workflow
1. Plan before editing large features.
2. Prefer test-first changes for bug fixes and new functionality.
3. Review for security before shipping.
4. Keep changes self-contained, readable, and easy to revert.
## Coding Standards
- Prefer immutable updates over in-place mutation.
- Keep functions small and files focused.
- Validate user input at boundaries.
- Never hardcode secrets.
- Fail loudly with clear error messages instead of silently swallowing problems.
## Security Checklist
Before any commit:
- No hardcoded API keys, passwords, or tokens
- All external input validated
- Parameterized queries for database writes
- Sanitized HTML output where applicable
- Authz/authn checked for sensitive paths
- Error messages scrubbed of sensitive internals
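The first item on this checklist is the easiest to automate. A minimal staged-diff scan, assuming `git` is on PATH; the patterns are illustrative and intentionally crude, not a substitute for a real secret scanner:

```js
// scan-staged-secrets.js: crude pre-commit guard (illustrative patterns only).
const { execSync } = require('child_process');

const staged = execSync('git diff --cached -U0', { encoding: 'utf8' });
const patterns = [
  /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i,
  /-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----/,
];

// Only inspect added lines; '+++' file headers may slip through this filter.
const hits = staged.split('\n').filter(
  line => line.startsWith('+') && patterns.some(p => p.test(line))
);
if (hits.length) {
  console.error(`Possible hardcoded secrets in staged changes:\n${hits.join('\n')}`);
  process.exit(1);
}
```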
## Delivery Standards
- Use conventional commits: `feat`, `fix`, `refactor`, `docs`, `test`, `chore`, `perf`, `ci`
- Run targeted verification for touched areas before shipping
- Prefer contained local implementations over adding new third-party runtime dependencies
## ECC Areas To Reuse
- `AGENTS.md` for repo-wide operating rules
- `skills/` for deep workflow guidance
- `commands/` for slash-command patterns worth adapting into prompts/macros
- `mcp-configs/` for shared connector baselines

View File

@@ -1,21 +0,0 @@
version: 2
updates:
- package-ecosystem: "npm"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 10
labels:
- "dependencies"
groups:
minor-and-patch:
update-types:
- "minor"
- "patch"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
labels:
- "dependencies"
- "ci"

View File

@@ -2,8 +2,7 @@ name: CI
on:
  push:
-    branches: [main, 'release/**']
-    tags: ['v*']
+    branches: [main]
  pull_request:
    branches: [main]
@@ -35,38 +34,23 @@ jobs:
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4

      - name: Setup Node.js ${{ matrix.node }}
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}

      # Package manager setup
      - name: Setup pnpm
-        if: matrix.pm == 'pnpm' && matrix.node != '18.x'
-        uses: pnpm/action-setup@08c4be7e2e672a47d11bd04269e27e5f3e8529cb # v6.0.0
+        if: matrix.pm == 'pnpm'
+        uses: pnpm/action-setup@v4
        with:
-          # Keep an explicit pnpm major because this repo's packageManager is Yarn.
-          version: 10
-
-      - name: Setup pnpm (via Corepack)
-        if: matrix.pm == 'pnpm' && matrix.node == '18.x'
-        shell: bash
-        run: |
-          corepack enable
-          corepack prepare pnpm@9 --activate
-
-      - name: Setup Yarn (via Corepack)
-        if: matrix.pm == 'yarn'
-        shell: bash
-        run: |
-          corepack enable
-          corepack prepare yarn@stable --activate
+          version: latest

      - name: Setup Bun
        if: matrix.pm == 'bun'
-        uses: oven-sh/setup-bun@0c5077e51419868618aeaa5fe8019c62421857d6 # v2
+        uses: oven-sh/setup-bun@v2

      # Cache configuration
      - name: Get npm cache directory
@@ -77,7 +61,7 @@
      - name: Cache npm
        if: matrix.pm == 'npm'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ${{ steps.npm-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-node-${{ matrix.node }}-npm-${{ hashFiles('**/package-lock.json') }}
@@ -88,13 +72,11 @@
        if: matrix.pm == 'pnpm'
        id: pnpm-cache-dir
        shell: bash
-        env:
-          COREPACK_ENABLE_STRICT: '0'
        run: echo "dir=$(pnpm store path)" >> $GITHUB_OUTPUT

      - name: Cache pnpm
        if: matrix.pm == 'pnpm'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ${{ steps.pnpm-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-node-${{ matrix.node }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}
@@ -115,7 +97,7 @@
      - name: Cache yarn
        if: matrix.pm == 'yarn'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ${{ steps.yarn-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-node-${{ matrix.node }}-yarn-${{ hashFiles('**/yarn.lock') }}
@@ -124,7 +106,7 @@
      - name: Cache bun
        if: matrix.pm == 'bun'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ~/.bun/install/cache
          key: ${{ runner.os }}-bun-${{ hashFiles('**/bun.lockb') }}
@@ -132,21 +114,14 @@
            ${{ runner.os }}-bun-

      # Install dependencies
-      # COREPACK_ENABLE_STRICT=0 allows pnpm to install even though
-      # package.json declares "packageManager": "yarn@..."
      - name: Install dependencies
        shell: bash
-        env:
-          COREPACK_ENABLE_STRICT: '0'
        run: |
          case "${{ matrix.pm }}" in
            npm) npm ci ;;
-            # pnpm v10 can fail CI on ignored native build scripts
-            # (for example msgpackr-extract) even though this repo is Yarn-native
-            # and pnpm is only exercised here as a compatibility lane.
-            pnpm) pnpm install --config.strict-dep-builds=false --no-frozen-lockfile ;;
-            # Yarn Berry (v4+) removed --ignore-engines; engine checking is no longer a core feature
-            yarn) yarn install ;;
+            pnpm) pnpm install ;;
+            # --ignore-engines required for Node 18 compat with some devDependencies (e.g., markdownlint-cli)
+            yarn) yarn install --ignore-engines ;;
            bun) bun install ;;
            *) echo "Unsupported package manager: ${{ matrix.pm }}" && exit 1 ;;
          esac
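The `case` dispatch above takes the package manager from the CI matrix; outside CI, the same decision is usually inferred from lockfiles. A hedged sketch of that inference (the repo's own scripts may do this differently):

```js
// detect-pm.js: infer a package manager from lockfiles (illustrative).
const fs = require('fs');
const path = require('path');

function detectPackageManager(dir = process.cwd()) {
  const lockfiles = [
    ['pnpm-lock.yaml', 'pnpm'],
    ['yarn.lock', 'yarn'],
    ['bun.lockb', 'bun'],
    ['package-lock.json', 'npm'],
  ];
  for (const [file, pm] of lockfiles) {
    if (fs.existsSync(path.join(dir, file))) return pm;
  }
  return 'npm'; // sensible default when no lockfile is present
}

console.log(detectPackageManager());
```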
@@ -160,7 +135,7 @@ jobs:
      # Upload test artifacts on failure
      - name: Upload test artifacts
        if: failure()
-        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
+        uses: actions/upload-artifact@v4
        with:
          name: test-results-${{ matrix.os }}-node${{ matrix.node }}-${{ matrix.pm }}
          path: |
@@ -174,10 +149,10 @@
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4

      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@v4
        with:
          node-version: '20.x'
@@ -200,14 +175,6 @@
        run: node scripts/ci/validate-skills.js
        continue-on-error: false

-      - name: Validate install manifests
-        run: node scripts/ci/validate-install-manifests.js
-        continue-on-error: false
-
-      - name: Validate workflow security
-        run: node scripts/ci/validate-workflow-security.js
-        continue-on-error: false
-
      - name: Validate rules
        run: node scripts/ci/validate-rules.js
        continue-on-error: false
@@ -216,10 +183,6 @@
        run: node scripts/ci/catalog.js --text
        continue-on-error: false

-      - name: Check unicode safety
-        run: node scripts/ci/check-unicode-safety.js
-        continue-on-error: false
-
  security:
    name: Security Scan
    runs-on: ubuntu-latest
@@ -227,10 +190,10 @@
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4

      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@v4
        with:
          node-version: '20.x'
@@ -245,10 +208,10 @@
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4

      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@v4
        with:
          node-version: '20.x'

View File

@@ -15,8 +15,8 @@ jobs:
    name: Check Dependencies
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
-      - uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
        with:
          node-version: '20.x'
      - name: Check for outdated packages
@@ -26,8 +26,8 @@ jobs:
    name: Security Audit
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
-      - uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
        with:
          node-version: '20.x'
      - name: Run security audit
@@ -43,7 +43,7 @@ jobs:
    name: Stale Issues/PRs
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
+      - uses: actions/stale@v9
        with:
          stale-issue-message: 'This issue is stale due to inactivity.'
          stale-pr-message: 'This PR is stale due to inactivity.'

View File

@@ -15,7 +15,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Update monthly metrics issue
-        uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0
+        uses: actions/github-script@v7
        with:
          script: |
            const owner = context.repo.owner;
@@ -30,10 +30,6 @@ jobs:
              return match ? Number(match[1]) : null;
            }

-            function escapeRegex(value) {
-              return value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
-            }
-
            function fmt(value) {
              if (value === null || value === undefined) return "n/a";
              return Number(value).toLocaleString("en-US");
@@ -171,18 +167,15 @@ jobs:
            }

            const currentBody = issue.body || "";
-            const rowPattern = new RegExp(`^\\| ${escapeRegex(monthKey)} \\|.*$`, "m");
-
-            let body;
-            if (rowPattern.test(currentBody)) {
-              body = currentBody.replace(rowPattern, row);
-              console.log(`Refreshed issue #${issue.number} snapshot row for ${monthKey}`);
-            } else {
-              body = currentBody.includes("| Month (UTC) |")
-                ? `${currentBody.trimEnd()}\n${row}\n`
-                : `${intro}\n${row}\n`;
-            }
+            if (currentBody.includes(`| ${monthKey} |`)) {
+              console.log(`Issue #${issue.number} already has snapshot row for ${monthKey}`);
+              return;
+            }
+            const body = currentBody.includes("| Month (UTC) |")
+              ? `${currentBody.trimEnd()}\n${row}\n`
+              : `${intro}\n${row}\n`;
            await github.rest.issues.update({
              owner,
              repo,
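The left side upgrades the append-only logic on the right to refresh-or-append: when the month's row already exists it is rewritten in place, so a re-run picks up corrected metrics instead of bailing out early. The same idea in shell, assuming a hypothetical `issue_body.md` snapshot of the issue body (GNU sed):

```bash
MONTH_KEY="2026-03"
ROW="| ${MONTH_KEY} | 1,234 | 56 |"   # illustrative metrics row
if grep -q "^| ${MONTH_KEY} |" issue_body.md; then
  # Refresh: rewrite the existing row in place (idempotent on re-run).
  sed -i "s/^| ${MONTH_KEY} |.*$/${ROW//\//\\/}/" issue_body.md
else
  # Append: add a new snapshot row for this month.
  printf '%s\n' "$ROW" >> issue_body.md
fi
```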

View File

@@ -6,7 +6,6 @@ on:

permissions:
  contents: write
-  id-token: write

jobs:
  release:
@@ -15,59 +14,29 @@ jobs:
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0

-      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
-        with:
-          node-version: '20.x'
-          registry-url: 'https://registry.npmjs.org'
-
-      - name: Install dependencies
-        run: npm ci
-
-      - name: Verify OpenCode package payload
-        run: node tests/scripts/build-opencode.test.js
-
      - name: Validate version tag
        run: |
-          if ! [[ "${REF_NAME}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?$ ]]; then
-            echo "Invalid version tag format. Expected vX.Y.Z or vX.Y.Z-prerelease"
+          if ! [[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
+            echo "Invalid version tag format. Expected vX.Y.Z"
            exit 1
          fi
-        env:
-          REF_NAME: ${{ github.ref_name }}

-      - name: Verify package version matches tag
+      - name: Verify plugin.json version matches tag
        env:
          TAG_NAME: ${{ github.ref_name }}
        run: |
          TAG_VERSION="${TAG_NAME#v}"
-          PACKAGE_VERSION=$(node -p "require('./package.json').version")
-          if [ "$TAG_VERSION" != "$PACKAGE_VERSION" ]; then
-            echo "::error::Tag version ($TAG_VERSION) does not match package.json version ($PACKAGE_VERSION)"
+          PLUGIN_VERSION=$(grep -oE '"version": *"[^"]*"' .claude-plugin/plugin.json | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
+          if [ "$TAG_VERSION" != "$PLUGIN_VERSION" ]; then
+            echo "::error::Tag version ($TAG_VERSION) does not match plugin.json version ($PLUGIN_VERSION)"
            echo "Run: ./scripts/release.sh $TAG_VERSION"
            exit 1
          fi

-      - name: Verify release metadata stays in sync
-        run: node tests/plugin-manifest.test.js
-
-      - name: Check npm publish state
-        id: npm_publish_state
-        run: |
-          PACKAGE_NAME=$(node -p "require('./package.json').name")
-          PACKAGE_VERSION=$(node -p "require('./package.json').version")
-          NPM_DIST_TAG=$(node -p "require('./package.json').version.includes('-') ? 'next' : 'latest'")
-          if npm view "${PACKAGE_NAME}@${PACKAGE_VERSION}" version >/dev/null 2>&1; then
-            echo "already_published=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "already_published=false" >> "$GITHUB_OUTPUT"
-          fi
-          echo "dist_tag=${NPM_DIST_TAG}" >> "$GITHUB_OUTPUT"
-
      - name: Generate release highlights
        id: highlights
        env:
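The deleted `Check npm publish state` step is what made the publish step re-runnable: `npm publish` fails hard when the exact version already exists on the registry, so the workflow probed first and skipped the publish instead of failing the whole release. A standalone sketch of that probe (package name and version are read from the working tree):

```bash
PACKAGE_NAME=$(node -p "require('./package.json').name")
PACKAGE_VERSION=$(node -p "require('./package.json').version")

# `npm view pkg@version version` exits non-zero when that version is absent,
# which makes it a cheap existence probe before publishing.
if npm view "${PACKAGE_NAME}@${PACKAGE_VERSION}" version >/dev/null 2>&1; then
  echo "${PACKAGE_NAME}@${PACKAGE_VERSION} already published; skipping."
else
  echo "Would run: npm publish --access public"
fi
```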
@@ -88,21 +57,11 @@ jobs:
          - Improved release-note generation and changelog hygiene

          ### Notes
-          - npm package: \`ecc-universal\`
-          - Claude marketplace/plugin identifier: \`everything-claude-code@everything-claude-code\`
          - For migration tips and compatibility notes, see README and CHANGELOG.
          EOF

      - name: Create GitHub Release
-        uses: softprops/action-gh-release@b4309332981a82ec1c5618f44dd2e27cc8bfbfda # v3.0.0
+        uses: softprops/action-gh-release@v2
        with:
          body_path: release_body.md
          generate_release_notes: true
-          prerelease: ${{ contains(github.ref_name, '-') }}
-          make_latest: ${{ contains(github.ref_name, '-') && 'false' || 'true' }}
-
-      - name: Publish npm package
-        if: steps.npm_publish_state.outputs.already_published != 'true'
-        env:
-          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
-        run: npm publish --access public --provenance --tag "${{ steps.npm_publish_state.outputs.dist_tag }}"
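The two sides validate different tag grammars: the right accepts only stable `vX.Y.Z`, while the left also admits prerelease suffixes and routes them to the `next` dist-tag so `latest` keeps pointing at a stable release. A quick check of what the left-hand pattern accepts:

```bash
# Exercise the prerelease-aware tag pattern against sample tags.
for t in v2.0.0 v2.0.0-rc.1 2.0.0 v2.0; do
  if [[ "$t" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?$ ]]; then
    echo "accepted: $t"
  else
    echo "rejected: $t"
  fi
done
# accepted: v2.0.0
# accepted: v2.0.0-rc.1
# rejected: 2.0.0
# rejected: v2.0
```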

View File

@@ -12,24 +12,9 @@ on:
        required: false
        type: boolean
        default: true
-    secrets:
-      NPM_TOKEN:
-        required: false
-  workflow_dispatch:
-    inputs:
-      tag:
-        description: 'Version tag to release or republish (e.g., v2.0.0-rc.1)'
-        required: true
-        type: string
-      generate-notes:
-        description: 'Auto-generate release notes'
-        required: false
-        type: boolean
-        default: true

permissions:
  contents: write
-  id-token: write

jobs:
  release:
@@ -38,60 +23,17 @@ jobs:
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0
-          ref: ${{ inputs.tag }}

-      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
-        with:
-          node-version: '20.x'
-          registry-url: 'https://registry.npmjs.org'
-
-      - name: Install dependencies
-        run: npm ci
-
-      - name: Verify OpenCode package payload
-        run: node tests/scripts/build-opencode.test.js
-
      - name: Validate version tag
-        env:
-          INPUT_TAG: ${{ inputs.tag }}
        run: |
-          if ! [[ "$INPUT_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?$ ]]; then
-            echo "Invalid version tag format. Expected vX.Y.Z or vX.Y.Z-prerelease"
+          if ! [[ "${{ inputs.tag }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
+            echo "Invalid version tag format. Expected vX.Y.Z"
            exit 1
          fi

-      - name: Verify package version matches tag
-        env:
-          INPUT_TAG: ${{ inputs.tag }}
-        run: |
-          TAG_VERSION="${INPUT_TAG#v}"
-          PACKAGE_VERSION=$(node -p "require('./package.json').version")
-          if [ "$TAG_VERSION" != "$PACKAGE_VERSION" ]; then
-            echo "::error::Tag version ($TAG_VERSION) does not match package.json version ($PACKAGE_VERSION)"
-            echo "Run: ./scripts/release.sh $TAG_VERSION"
-            exit 1
-          fi
-
-      - name: Verify release metadata stays in sync
-        run: node tests/plugin-manifest.test.js
-
-      - name: Check npm publish state
-        id: npm_publish_state
-        run: |
-          PACKAGE_NAME=$(node -p "require('./package.json').name")
-          PACKAGE_VERSION=$(node -p "require('./package.json').version")
-          NPM_DIST_TAG=$(node -p "require('./package.json').version.includes('-') ? 'next' : 'latest'")
-          if npm view "${PACKAGE_NAME}@${PACKAGE_VERSION}" version >/dev/null 2>&1; then
-            echo "already_published=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "already_published=false" >> "$GITHUB_OUTPUT"
-          fi
-          echo "dist_tag=${NPM_DIST_TAG}" >> "$GITHUB_OUTPUT"
-
      - name: Generate release highlights
        env:
          TAG_NAME: ${{ inputs.tag }}
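Note the structural difference between the two `Validate version tag` steps: the right side interpolates `${{ inputs.tag }}` directly into the script, where a crafted input can inject shell syntax, while the left side binds it to an `env:` variable first, so the shell only ever sees it as data. A sketch of the hardened shape (the YAML framing is abbreviated in the comments):

```bash
# In the workflow, the expression is expanded once, outside the shell:
#   env:
#     INPUT_TAG: ${{ inputs.tag }}
# The script then quotes the variable, so an input like
#   v1.0.0"; rm -rf ."
# stays inert instead of being executed.
if ! [[ "$INPUT_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?$ ]]; then
  echo "Invalid version tag format. Expected vX.Y.Z or vX.Y.Z-prerelease" >&2
  exit 1
fi
```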
@@ -104,23 +46,11 @@ jobs:
          - Harness reliability and cross-platform compatibility
          - Eval-driven quality improvements
          - Better workflow and operator ergonomics

-          ### Package Notes
-          - npm package: \`ecc-universal\`
-          - Claude marketplace/plugin identifier: \`everything-claude-code@everything-claude-code\`
          EOF

      - name: Create GitHub Release
-        uses: softprops/action-gh-release@b4309332981a82ec1c5618f44dd2e27cc8bfbfda # v3.0.0
+        uses: softprops/action-gh-release@v2
        with:
          tag_name: ${{ inputs.tag }}
          body_path: release_body.md
          generate_release_notes: ${{ inputs.generate-notes }}
-          prerelease: ${{ contains(inputs.tag, '-') }}
-          make_latest: ${{ contains(inputs.tag, '-') && 'false' || 'true' }}
-
-      - name: Publish npm package
-        if: steps.npm_publish_state.outputs.already_published != 'true'
-        env:
-          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
-        run: npm publish --access public --provenance --tag "${{ steps.npm_publish_state.outputs.dist_tag }}"

View File

@@ -27,37 +27,22 @@ jobs:
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4

      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@v4
        with:
          node-version: ${{ inputs.node-version }}

      - name: Setup pnpm
-        if: inputs.package-manager == 'pnpm' && inputs.node-version != '18.x'
-        uses: pnpm/action-setup@08c4be7e2e672a47d11bd04269e27e5f3e8529cb # v6.0.0
+        if: inputs.package-manager == 'pnpm'
+        uses: pnpm/action-setup@v4
        with:
-          # Keep an explicit pnpm major because this repo's packageManager is Yarn.
-          version: 10
-
-      - name: Setup pnpm (via Corepack)
-        if: inputs.package-manager == 'pnpm' && inputs.node-version == '18.x'
-        shell: bash
-        run: |
-          corepack enable
-          corepack prepare pnpm@9 --activate
-
-      - name: Setup Yarn (via Corepack)
-        if: inputs.package-manager == 'yarn'
-        shell: bash
-        run: |
-          corepack enable
-          corepack prepare yarn@stable --activate
+          version: latest

      - name: Setup Bun
        if: inputs.package-manager == 'bun'
-        uses: oven-sh/setup-bun@0c5077e51419868618aeaa5fe8019c62421857d6 # v2
+        uses: oven-sh/setup-bun@v2

      - name: Get npm cache directory
        if: inputs.package-manager == 'npm'
@@ -67,7 +52,7 @@ jobs:
      - name: Cache npm
        if: inputs.package-manager == 'npm'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ${{ steps.npm-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-node-${{ inputs.node-version }}-npm-${{ hashFiles('**/package-lock.json') }}
@@ -78,13 +63,11 @@ jobs:
        if: inputs.package-manager == 'pnpm'
        id: pnpm-cache-dir
        shell: bash
-        env:
-          COREPACK_ENABLE_STRICT: '0'
        run: echo "dir=$(pnpm store path)" >> $GITHUB_OUTPUT

      - name: Cache pnpm
        if: inputs.package-manager == 'pnpm'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ${{ steps.pnpm-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-node-${{ inputs.node-version }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}
@@ -105,7 +88,7 @@ jobs:
      - name: Cache yarn
        if: inputs.package-manager == 'yarn'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ${{ steps.yarn-cache-dir.outputs.dir }}
          key: ${{ runner.os }}-node-${{ inputs.node-version }}-yarn-${{ hashFiles('**/yarn.lock') }}
@@ -114,28 +97,20 @@ jobs:
      - name: Cache bun
        if: inputs.package-manager == 'bun'
-        uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+        uses: actions/cache@v4
        with:
          path: ~/.bun/install/cache
          key: ${{ runner.os }}-bun-${{ hashFiles('**/bun.lockb') }}
          restore-keys: |
            ${{ runner.os }}-bun-

-      # COREPACK_ENABLE_STRICT=0 allows pnpm to install even though
-      # package.json declares "packageManager": "yarn@..."
      - name: Install dependencies
        shell: bash
-        env:
-          COREPACK_ENABLE_STRICT: '0'
        run: |
          case "${{ inputs.package-manager }}" in
            npm) npm ci ;;
-            # pnpm v10 can fail CI on ignored native build scripts
-            # (for example msgpackr-extract) even though this repo is Yarn-native
-            # and pnpm is only exercised here as a compatibility lane.
-            pnpm) pnpm install --config.strict-dep-builds=false --no-frozen-lockfile ;;
-            # Yarn Berry (v4+) removed --ignore-engines; engine checking is no longer a core feature
-            yarn) yarn install ;;
+            pnpm) pnpm install ;;
+            yarn) yarn install --ignore-engines ;;
            bun) bun install ;;
            *) echo "Unsupported package manager: ${{ inputs.package-manager }}" && exit 1 ;;
          esac
@@ -147,7 +122,7 @@ jobs:
      - name: Upload test artifacts
        if: failure()
-        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
+        uses: actions/upload-artifact@v4
        with:
          name: test-results-${{ inputs.os }}-node${{ inputs.node-version }}-${{ inputs.package-manager }}
          path: |
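All the cache steps above share one pattern: the key embeds the OS, the Node version, and a hash of the lockfile, so any dependency change produces a fresh key while identical lockfiles reuse one entry, and `restore-keys` allows a prefix fallback. A sketch of the equivalent key computed outside Actions (`hashFiles` uses SHA-256; the variable values here are stand-ins for the runner's context):

```bash
RUNNER_OS="Linux"        # supplied by the runner in a real job
NODE_VERSION="20.x"      # inputs.node-version in the workflow

# Same shape as the workflow's key expression.
LOCK_HASH=$(sha256sum package-lock.json | cut -d' ' -f1)
KEY="${RUNNER_OS}-node-${NODE_VERSION}-npm-${LOCK_HASH}"
echo "cache key: $KEY"
# A restore-keys fallback would be the prefix "${RUNNER_OS}-node-${NODE_VERSION}-npm-"
```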

View File

@@ -17,10 +17,10 @@ jobs:
    steps:
      - name: Checkout
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+        uses: actions/checkout@v4
      - name: Setup Node.js
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@v4
        with:
          node-version: ${{ inputs.node-version }}
@@ -39,14 +39,5 @@ jobs:
      - name: Validate skills
        run: node scripts/ci/validate-skills.js

-      - name: Validate install manifests
-        run: node scripts/ci/validate-install-manifests.js
-
-      - name: Validate workflow security
-        run: node scripts/ci/validate-workflow-security.js
-
      - name: Validate rules
        run: node scripts/ci/validate-rules.js

-      - name: Check unicode safety
-        run: node scripts/ci/check-unicode-safety.js

.gitignore
View File

@@ -75,9 +75,6 @@ examples/sessions/*.tmp

# Local drafts
marketing/
.dmux/
-.dmux-hooks/
-.claude/worktrees/
-.claude/scheduled_tasks.lock

# Temporary files
tmp/
@@ -86,9 +83,6 @@ temp/
*.bak
*.backup

-# Observer temp files (continuous-learning-v2)
-.observer-tmp/
-
# Rust build artifacts
ecc2/target/

View File

@@ -597,7 +597,7 @@ For more detailed information, see the `docs/` directory:

## Contributers

- Himanshu Sharma [@ihimanss](https://github.com/ihimanss)
- Sungmin Hong [@aws-hsungmin](https://github.com/aws-hsungmin)

View File

@@ -1,143 +1,139 @@
#!/bin/bash
#
# ECC Kiro Installer
# Installs Everything Claude Code workflows into a Kiro project.
#
# Usage:
#   ./install.sh              # Install to current directory
#   ./install.sh /path/to/dir # Install to specific directory
#   ./install.sh ~            # Install globally to ~/.kiro/
#

set -euo pipefail

# When globs match nothing, expand to empty list instead of the literal pattern
shopt -s nullglob

-# Resolve the directory where this script lives
+# Resolve the directory where this script lives (the repo root)
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
-
-# The script lives inside .kiro/, so SCRIPT_DIR *is* the source.
-# If invoked from the repo root (e.g., .kiro/install.sh), SCRIPT_DIR already
-# points to the .kiro directory — no need to append /.kiro again.
-SOURCE_KIRO="$SCRIPT_DIR"
+SOURCE_KIRO="$SCRIPT_DIR/.kiro"

# Target directory: argument or current working directory
TARGET="${1:-.}"

# Expand ~ to $HOME
if [ "$TARGET" = "~" ] || [[ "$TARGET" == "~/"* ]]; then
  TARGET="${TARGET/#\~/$HOME}"
fi

# Resolve to absolute path
TARGET="$(cd "$TARGET" 2>/dev/null && pwd || echo "$TARGET")"

echo "ECC Kiro Installer"
echo "=================="
echo ""
echo "Source: $SOURCE_KIRO"
echo "Target: $TARGET/.kiro/"
echo ""

# Subdirectories to create and populate
SUBDIRS="agents skills steering hooks scripts settings"

# Create all required .kiro/ subdirectories
for dir in $SUBDIRS; do
  mkdir -p "$TARGET/.kiro/$dir"
done

# Counters for summary
agents=0; skills=0; steering=0; hooks=0; scripts=0; settings=0

# Copy agents (JSON for CLI, Markdown for IDE)
if [ -d "$SOURCE_KIRO/agents" ]; then
  for f in "$SOURCE_KIRO/agents"/*.json "$SOURCE_KIRO/agents"/*.md; do
    [ -f "$f" ] || continue
    local_name=$(basename "$f")
    if [ ! -f "$TARGET/.kiro/agents/$local_name" ]; then
      cp "$f" "$TARGET/.kiro/agents/" 2>/dev/null || true
      agents=$((agents + 1))
    fi
  done
fi

# Copy skills (directories with SKILL.md)
if [ -d "$SOURCE_KIRO/skills" ]; then
  for d in "$SOURCE_KIRO/skills"/*/; do
    [ -d "$d" ] || continue
    skill_name="$(basename "$d")"
    if [ ! -d "$TARGET/.kiro/skills/$skill_name" ]; then
      mkdir -p "$TARGET/.kiro/skills/$skill_name"
      cp "$d"* "$TARGET/.kiro/skills/$skill_name/" 2>/dev/null || true
      skills=$((skills + 1))
    fi
  done
fi

# Copy steering files (markdown)
if [ -d "$SOURCE_KIRO/steering" ]; then
  for f in "$SOURCE_KIRO/steering"/*.md; do
    local_name=$(basename "$f")
    if [ ! -f "$TARGET/.kiro/steering/$local_name" ]; then
      cp "$f" "$TARGET/.kiro/steering/" 2>/dev/null || true
      steering=$((steering + 1))
    fi
  done
fi

# Copy hooks (.kiro.hook files and README)
if [ -d "$SOURCE_KIRO/hooks" ]; then
  for f in "$SOURCE_KIRO/hooks"/*.kiro.hook "$SOURCE_KIRO/hooks"/*.md; do
    [ -f "$f" ] || continue
    local_name=$(basename "$f")
    if [ ! -f "$TARGET/.kiro/hooks/$local_name" ]; then
      cp "$f" "$TARGET/.kiro/hooks/" 2>/dev/null || true
      hooks=$((hooks + 1))
    fi
  done
fi

# Copy scripts (shell scripts) and make executable
if [ -d "$SOURCE_KIRO/scripts" ]; then
  for f in "$SOURCE_KIRO/scripts"/*.sh; do
    local_name=$(basename "$f")
    if [ ! -f "$TARGET/.kiro/scripts/$local_name" ]; then
      cp "$f" "$TARGET/.kiro/scripts/" 2>/dev/null || true
      chmod +x "$TARGET/.kiro/scripts/$local_name" 2>/dev/null || true
      scripts=$((scripts + 1))
    fi
  done
fi

# Copy settings (example files)
if [ -d "$SOURCE_KIRO/settings" ]; then
  for f in "$SOURCE_KIRO/settings"/*; do
    [ -f "$f" ] || continue
    local_name=$(basename "$f")
    if [ ! -f "$TARGET/.kiro/settings/$local_name" ]; then
      cp "$f" "$TARGET/.kiro/settings/" 2>/dev/null || true
      settings=$((settings + 1))
    fi
  done
fi

# Installation summary
echo "Installation complete!"
echo ""
echo "Components installed:"
echo "  Agents:   $agents"
echo "  Skills:   $skills"
echo "  Steering: $steering"
echo "  Hooks:    $hooks"
echo "  Scripts:  $scripts"
echo "  Settings: $settings"
echo ""
echo "Next steps:"
echo "  1. Open your project in Kiro"
echo "  2. Agents: Automatic in IDE, /agent swap in CLI"
echo "  3. Skills: Available via / menu in chat"
echo "  4. Steering files with 'auto' inclusion load automatically"
echo "  5. Toggle hooks in the Agent Hooks panel"
echo "  6. Copy desired MCP servers from .kiro/settings/mcp.json.example to .kiro/settings/mcp.json"

View File

@@ -36,7 +36,7 @@ detect_pm() {
}

PM=$(detect_pm)
-echo "Package manager: $PM"
+echo "📦 Package manager: $PM"
echo ""

# ── Helper: run a check ─────────────────────────────────────
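`detect_pm`'s body sits above the visible hunk, so only its closing brace appears here. A typical lockfile-based implementation of that contract would look like this (a hypothetical reconstruction, not the script's actual code):

```bash
detect_pm() {
  # Prefer the most specific lockfile present in the project root.
  if   [ -f bun.lockb ];      then echo "bun"
  elif [ -f pnpm-lock.yaml ]; then echo "pnpm"
  elif [ -f yarn.lock ];      then echo "yarn"
  else                             echo "npm"
  fi
}

PM=$(detect_pm)
echo "📦 Package manager: $PM"
```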

Some files were not shown because too many files have changed in this diff.