mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-04-02 15:13:28 +08:00
19 commits in `ecc-tools/`:
- abc8765eb7
- 18aa035073
- a34e41d966
- 7fec0e6d96
- ad6470cc42
- e39c4017d1
- 72ee05fa41
- 48c16aa9f8
- 525ae748e0
- 2afae8059f
- 657b9d9622
- 5e0d66a04c
- 9a35ab9a56
- 2f7f82ffe9
- e76ff48345
- 31525854b5
- 8f63697113
- 9a6080f2e1
- dba5ae779b
.agents/skills/brand-voice/SKILL.md (new file, 97 lines)
@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---

# Brand Voice

Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.

## When to Activate

- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry

## Source Priority

Use the strongest real source set available, in this order:

1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy

Do not use generic platform exemplars as source material.

## Collection Workflow

1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.

## What to Extract

- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
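Several of these traits can be estimated mechanically before the qualitative pass; a rough sketch (the helper name and the naive sentence splitting are illustrative, not part of the skill contract):

```python
import re

def voice_stats(samples: list[str]) -> dict:
    """Rough, mechanical trait estimates for a set of writing samples."""
    text = " ".join(samples)
    # Naive sentence split; good enough for rough rhythm stats.
    sentences = [s.strip() for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    n = max(len(sentences), 1)
    return {
        "sentences": len(sentences),
        "avg_words_per_sentence": round(sum(words_per_sentence) / n, 1),
        "question_rate": round(text.count("?") / n, 2),
        "parenthetical_rate": round(text.count("(") / n, 2),
    }

stats = voice_stats([
    "Short. Direct. Ships today.",
    "Numbers beat adjectives (every time). What changed? Latency dropped 40%.",
])
```

Numbers like these only seed the profile; the qualitative extraction above still decides what goes in it.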
## Output Contract

Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).

Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.

## Affaan / ECC Defaults

If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:

- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over

## Hard Bans

Delete and rewrite any of these:

- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals

## Persistence Rules

- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.

## Downstream Use

Use this skill before or inside:

- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email

If another skill already has a partial voice capture section, this skill is the canonical source of truth.
@@ -0,0 +1,55 @@
# Voice Profile Schema

Use this exact structure when building a reusable voice profile:

```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:

Source Set
- source 1
- source 2
- source 3

Rhythm
- short note on sentence length, pacing, and fragmentation

Compression
- how dense or explanatory the writing is

Capitalization
- conventional, mixed, or situational

Parentheticals
- how they are used and how they are not used

Question Use
- rare, frequent, rhetorical, direct, or mostly absent

Claim Style
- how claims are framed, supported, and sharpened

Preferred Moves
- concrete moves the author does use

Banned Moves
- specific patterns the author does not use

CTA Rules
- how, when, or whether to close with asks

Channel Notes
- X:
- LinkedIn:
- Email:
```

Guidelines:

- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
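Once traits are collected, the block can be assembled programmatically. A minimal sketch, assuming traits live in a dict keyed by the section names above (the helper itself is hypothetical, not part of the repo):

```python
SECTIONS = [
    "Source Set", "Rhythm", "Compression", "Capitalization", "Parentheticals",
    "Question Use", "Claim Style", "Preferred Moves", "Banned Moves",
    "CTA Rules", "Channel Notes",
]

def render_profile(author: str, goal: str, confidence: str, traits: dict) -> str:
    """Render a VOICE PROFILE block in the schema's fixed section order."""
    lines = [
        "VOICE PROFILE",
        "=" * 13,
        f"Author: {author}",
        f"Goal: {goal}",
        f"Confidence: {confidence}",
    ]
    for section in SECTIONS:
        lines.append("")
        lines.append(section)
        # Sections with no observed traits stay visible so gaps are explicit.
        for bullet in traits.get(section, ["- (not observed)"]):
            lines.append(bullet)
    return "\n".join(lines)

profile = render_profile("A. Writer", "launch posts", "medium",
                         {"Rhythm": ["- short sentences, heavy fragmentation"]})
```

Keeping the section order fixed is what lets downstream skills consume the block without re-parsing free-form notes.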
@@ -36,27 +36,24 @@ Before drafting, identify the source set:
- prior posts from the same author

If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.

## Voice Capture Workflow

Run `brand-voice` first when:

- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive

At minimum, produce a compact `VOICE PROFILE` covering:
- rhythm
- compression
- capitalization
- parenthetical use
- question use
- preferred moves
- banned moves

Do not start drafting until the voice profile is clear enough to enforce.
@@ -148,3 +145,9 @@ Before delivering:
- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved

## Related Skills

- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output
@@ -37,6 +37,8 @@ Use `content-engine` first if the source still needs voice shaping.
### Step 2: Capture the Voice Fingerprint

Run `brand-voice` first if the source voice is not already captured in the current session.

Before adapting, note:
- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
@@ -110,5 +112,6 @@ Before delivering:
## Related Skills

- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows
@@ -1,403 +1,165 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---

```markdown
# Everything Claude Code Conventions

> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-01
> Auto-generated skill from repository analysis

## Overview

This skill teaches Claude the development patterns and conventions used in everything-claude-code.

## Tech Stack

- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
## When to Use This Skill

Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format

## Commit Conventions

Follow these commit message conventions based on 500 analyzed commits.

### Commit Style: Conventional Commits

### Prefixes Used

- `fix`
- `feat`
- `docs`
- `chore`

### Message Guidelines

- Average message length: ~57 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
*Commit message examples*

```text
feat: add everything-claude-code ECC bundle (.claude/commands/add-new-skill.md)
refactor: collapse legacy command bodies into skills
fix: dedupe managed hooks by semantic identity
docs: close bundle drift and sync plugin guidance
chore: ignore local orchestration artifacts
feat: add everything-claude-code ECC bundle (.claude/commands/refactoring.md)
feat: add everything-claude-code ECC bundle (.claude/commands/feature-development.md)
feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)
```
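These guidelines are easy to lint before committing. An illustrative checker (the repo does not ship this script; the 72-character cap and the `refactor` prefix are inferred from the examples above, not stated rules):

```python
import re

ALLOWED_PREFIXES = ("fix", "feat", "docs", "chore", "refactor")

def check_commit_subject(subject: str) -> list[str]:
    """Return a list of guideline violations for a commit subject line."""
    problems = []
    # type, optional (scope), optional !, then ": " and a non-space character
    m = re.match(r"^([a-z]+)(\([^)]+\))?(!)?: \S", subject)
    if not m:
        problems.append("subject must look like 'type: description' or 'type(scope): description'")
    elif m.group(1) not in ALLOWED_PREFIXES:
        problems.append(f"unknown prefix '{m.group(1)}'")
    if len(subject) > 72:
        problems.append("subject longer than 72 characters")
    return problems

assert check_commit_subject("fix: dedupe managed hooks by semantic identity") == []
```

A check like this fits naturally in a commit-msg hook, though the hook wiring itself is out of scope here.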
## Architecture

### Project Structure: Single Package

This project uses **hybrid** module organization.

### Configuration Files

- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`

### Guidelines

- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style

### Language: JavaScript

### Naming Conventions

| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |

### Import Style: Relative Imports

### Export Style: Mixed Style

*Preferred import style*

```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing

### Test Framework

No specific test framework detected — use the repository's existing test patterns.

### File Pattern: `*.test.js`

### Test Types

- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services

### Coverage

This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling

### Error Handling Style: Try-Catch Blocks

*Standard error handling pattern*

```typescript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
## Common Workflows

These workflows were detected from analyzing commit patterns.

### Feature Development

Standard feature implementation workflow

**Frequency**: ~19 times per month

**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation

**Files typically involved**:
- `.opencode/*`
- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`

**Example commit sequence**:

```text
feat: install claude-hud plugin (jarrodwatts/claude-hud) (#1041)
feat: add GAN-style generator-evaluator harness (#1029)
feat(agents,skills): add opensource-pipeline — 3-agent workflow for safe public releases (#1036)
```

### Refactoring

Code refactoring and cleanup workflow

**Frequency**: ~2 times per month

**Steps**:
1. Ensure tests pass before refactor
2. Refactor code structure
3. Verify tests still pass

**Files typically involved**:
- `src/**/*`

**Example commit sequence**:

```text
refactor: collapse legacy command bodies into skills
feat: add connected operator workflow skills
feat: expand lead intelligence outreach channels
```
This skill teaches the core development patterns, coding conventions, and collaborative workflows used in the `everything-claude-code` JavaScript repository. The project is modular, skill-oriented, and emphasizes clear documentation, conventional commits, and extensibility via skills, agents, commands, and install targets. This guide will help you contribute effectively by following established patterns and using the right commands for common tasks.
## Coding Conventions

**File Naming**
- Use `camelCase` for JavaScript files and directories.
- Example: `mySkill.js`, `installTarget.js`

**Import Style**
- Use relative imports.
- Example:

```js
// CommonJS modules
const helper = require('./helper');
// ES modules
import { doSomething } from '../utils/doSomething';
```

**Export Style**
- Mixed: both CommonJS (`module.exports`) and ES6 (`export`) exports are used.
- Example (CommonJS):

```js
module.exports = function myFunction() { /* ... */ };
```

- Example (ES6):

```js
export function myFunction() { /* ... */ }
```

**Commit Messages**
- Use [Conventional Commits](https://www.conventionalcommits.org/).
- Prefixes: `fix`, `feat`, `docs`, `chore`
- Example: `feat: add new agent pipeline for document processing`
## Workflows

### Add New Skill

**Trigger:** When introducing a new skill (capability/module) to the platform
**Command:** `/add-skill`

Adds a new skill to the repository, enabling new agent capabilities or workflows.

1. Create a new `SKILL.md` under `skills/<skill-name>/` or `.agents/skills/<skill-name>/`.
2. Optionally add related assets or references in the skill directory.
3. Update documentation:
   - `AGENTS.md`
   - `README.md`
   - `README.zh-CN.md`
   - `docs/zh-CN/AGENTS.md`
4. If the skill is installable, update:
   - `manifests/install-components.json` or
   - `manifests/install-modules.json`
5. Optionally update `WORKING-CONTEXT.md`.

**Frequency**: ~4 times per month

**Files typically involved**:
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`

**Example directory structure:**

```text
skills/
  myNewSkill/
    SKILL.md
    index.js
    assets/
```
### Add Or Update Agent

Adds or updates agent definitions and registers them in configuration files.

**Frequency**: ~2 times per month

**Steps**:
1. Add or update agent definition file (e.g., `agents/*.md` or `.opencode/prompts/agents/*.txt`)
2. Update agent registry/configuration (e.g., `.opencode/opencode.json`, `AGENTS.md`)

**Files typically involved**:
- `agents/*.md`
- `.opencode/prompts/agents/*.txt`
- `.opencode/opencode.json`
- `AGENTS.md`
### Add Or Update Command

Adds or updates command workflow files for agentic operations.

**Frequency**: ~2 times per month

**Steps**:
1. Create or update command markdown file under `commands/`
2. Optionally update documentation or index files

**Files typically involved**:
- `commands/*.md`
### Feature Or Skill Bundle

Adds a bundle of related features, skills, and documentation for a new workflow or capability.

**Frequency**: ~2 times per month

**Steps**:
1. Add multiple agent and/or skill files
2. Add supporting commands and scripts
3. Add or update documentation and examples

**Files typically involved**:
- `agents/*.md`
- `skills/*/SKILL.md`
- `commands/*.md`
- `scripts/*.sh`
- `examples/*`
### Documentation Update

Updates documentation to reflect new features, workflows, or guidance.

**Frequency**: ~3 times per month

**Steps**:
1. Edit documentation files (`README.md`, `AGENTS.md`, `WORKING-CONTEXT.md`, `docs/...`)
2. Optionally update or add new guidance files

**Files typically involved**:
- `README.md`
- `AGENTS.md`
- `WORKING-CONTEXT.md`
- `docs/**/*.md`
### Dependency Or Schema Update

Updates dependencies or schema files, often in response to new features or external updates.

**Frequency**: ~2 times per month

**Steps**:
1. Edit `package.json`, `yarn.lock`, or other lockfiles
2. Edit schema files under `schemas/`
3. Optionally update related scripts or manifests

**Files typically involved**:
- `package.json`
- `yarn.lock`
- `schemas/*.json`
### CI/CD Workflow Update

Updates CI/CD workflow files, usually for dependency bumps or workflow improvements.

**Frequency**: ~2 times per month

**Steps**:
1. Edit workflow files under `.github/workflows/`

**Files typically involved**:
- `.github/workflows/*.yml`
## Best Practices

Based on analysis of the codebase, follow these practices:

### Do

- Use conventional commit format (`feat:`, `fix:`, etc.)
- Follow `*.test.js` naming pattern
- Use camelCase for file names
- Prefer mixed exports

### Don't

- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion

---

*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*
### Add New Agent or Pipeline

**Trigger:** When introducing a new agent or orchestrated workflow (pipeline)
**Command:** `/add-agent-pipeline`

1. Create agent definition files under `agents/`.
2. Create or update a `SKILL.md` for the orchestrator under `skills/<pipeline-name>/`.
3. Update `AGENTS.md` and `README.md` to document the new agent(s)/pipeline.
4. Optionally add supporting commands, scripts, or documentation.

---
### Add or Extend Command

**Trigger:** When adding or enhancing CLI commands
**Command:** `/add-command`

1. Create or update command markdown files under `commands/`.
2. Incorporate review feedback and fixes as needed.
3. Document new commands in `AGENTS.md` or other relevant docs.

---
### Add Install Target or Adapter

**Trigger:** When supporting a new install target (plugin, IDE, platform)
**Command:** `/add-install-target`

1. Create a new directory for the install target (e.g., `.codebuddy/`, `.gemini/`).
2. Add install/uninstall scripts and documentation in the new directory.
3. Update `manifests/install-modules.json` and relevant schemas.
4. Update or add scripts in `scripts/lib/install-targets/<target>.js`.
5. Update or add tests for the new install target.

**Example:**

```text
.codebuddy/
  install.sh
  uninstall.sh
  README.md
scripts/lib/install-targets/codebuddy.js
tests/lib/install-targets.test.js
```

---
### Update or Harden Hooks

**Trigger:** When improving, refactoring, or fixing hooks (e.g., CI, formatting, session management)
**Command:** `/update-hook`

1. Update `hooks/hooks.json` to change hook configuration.
2. Update or add scripts in `scripts/hooks/*.js` or `scripts/hooks/*.sh`.
3. Update or add tests for the affected hooks.
4. Optionally update related documentation.

---
### Documentation Sync and Guidance Update

**Trigger:** When updating documentation to reflect new features, skills, or workflows
**Command:** `/sync-docs`

1. Update `README.md`, `README.zh-CN.md`, and/or `AGENTS.md`.
2. Update `docs/zh-CN/AGENTS.md` and `docs/zh-CN/README.md`.
3. Update or add `WORKING-CONTEXT.md`.
4. Optionally update `the-shortform-guide.md` or other guidance files.

---
### Dependency Bump via Dependabot

**Trigger:** When a dependency update is triggered by Dependabot or similar automation
**Command:** `/bump-dependency`

1. Update `package.json` and/or `yarn.lock` for npm dependencies.
2. Update `.github/workflows/*.yml` for GitHub Actions dependencies.
3. Commit with a standardized message and co-author.

---
## Testing Patterns

- Test files use the pattern `*.test.js`.
- Testing framework is not explicitly specified; check test files for usage.
- Place tests alongside or within a `tests/` directory, matching the structure of the code under test.
- Example test file: `tests/lib/install-targets.test.js`

## Commands

| Command | Purpose |
|---------|---------|
| `/add-skill` | Add a new skill, including docs and registration |
| `/add-agent-pipeline` | Add a new agent or orchestrated pipeline |
| `/add-command` | Add or extend a CLI command |
| `/add-install-target` | Add support for a new install target or adapter |
| `/update-hook` | Refactor or fix hooks and related scripts |
| `/sync-docs` | Synchronize and update documentation across contexts/languages |
| `/bump-dependency` | Automated dependency update via Dependabot or similar automation |
```
@@ -19,7 +19,7 @@ Programmatic interaction with X (Twitter) for posting, reading, searching, and a
## Authentication

### OAuth 2.0 Bearer Token (App-Only)

Best for: read-heavy operations, search, public data.
@@ -46,25 +46,27 @@ tweets = resp.json()
### OAuth 1.0a (User Context)

Required for: posting tweets, managing account, DMs, and any write flow.

```bash
# Environment setup — source before use
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```

Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.

```python
import os
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    os.environ["X_CONSUMER_KEY"],
    client_secret=os.environ["X_CONSUMER_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```
@@ -92,7 +94,6 @@ def post_thread(oauth, tweets: list[str]) -> list[str]:
        if reply_to:
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id
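The hunk above shows only the middle of the loop; a self-contained sketch of the full helper under the same pattern looks roughly like this (a reconstruction, since the diff elides the function's opening and closing lines):

```python
def post_thread(oauth, tweets: list[str]) -> list[str]:
    """Post tweets as a reply chain; returns the created tweet IDs in order."""
    ids: list[str] = []
    reply_to = None
    for text in tweets:
        payload = {"text": text}
        # Every tweet after the first replies to the previous one.
        if reply_to:
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id
    return ids
```

Any object with a `requests`-style `post` method works here, which also makes the helper easy to stub in tests.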
@@ -126,6 +127,21 @@ resp = requests.get(
)
```

### Pull Recent Original Posts for Voice Modeling

```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    },
)
voice_samples = resp.json()
```

### Get User by Username

```python
@@ -155,17 +171,12 @@ resp = oauth.post(
)
```

## Rate Limits

X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:
- Check the current X developer docs before hardcoding assumptions
- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
- Back off automatically instead of relying on static tables in code

```python
import time
```
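That advice can be wrapped into a small header-driven helper; the function name, retry policy, and defaults below are illustrative, not from the repo:

```python
import time

def call_with_backoff(do_request, max_retries: int = 3, sleep=time.sleep):
    """Retry a request when the rate limit is exhausted, using the reset header."""
    resp = None
    for _ in range(max_retries):
        resp = do_request()
        remaining = int(resp.headers.get("x-rate-limit-remaining", "1"))
        if resp.status_code != 429 and remaining > 0:
            return resp
        # Sleep until the window resets; never less than one second.
        reset_at = int(resp.headers.get("x-rate-limit-reset", "0"))
        sleep(max(reset_at - int(time.time()), 1))
    return resp
```

Injecting `sleep` keeps the helper testable; in production the default `time.sleep` applies.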
@@ -202,13 +213,18 @@ else:
## Integration with Content Engine

Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics

## Related Skills

- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach
@@ -10,13 +10,16 @@ Use this workflow when working on **add-new-skill** in `everything-claude-code`.

## Goal

Adds a new skill to the repository, enabling new agent capabilities or workflows.
Adds a new skill to the system, including documentation and registration in relevant indexes.

## Common Files

- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`

## Suggested Sequence

@@ -27,9 +30,11 @@ Adds a new skill to the repository, enabling new agent capabilities or workflows

## Typical Commit Signals

- Create a new SKILL.md file under skills/ or .agents/skills/ or .claude/skills/
- Optionally update documentation (AGENTS.md, README.md, WORKING-CONTEXT.md)
- Optionally add supporting files (e.g., manifests/install-modules.json)
- Create a new SKILL.md file under skills/<skill-name>/ or .agents/skills/<skill-name>/
- Optionally add related assets or references under the skill directory
- Update AGENTS.md, README.md, README.zh-CN.md, and docs/zh-CN/AGENTS.md to document the new skill
- Update manifests/install-components.json or manifests/install-modules.json if the skill is installable
- Optionally update WORKING-CONTEXT.md

## Notes
@@ -18,6 +18,7 @@ Standard feature implementation workflow

- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`
- `**/api/**`

## Suggested Sequence

@@ -2,7 +2,7 @@

"version": "1.3",
"schemaVersion": "1.0",
"generatedBy": "ecc-tools",
"generatedAt": "2026-04-01T22:44:14.561Z",
"generatedAt": "2026-04-02T03:20:48.238Z",
"repo": "https://github.com/affaan-m/everything-claude-code",
"profiles": {
"requested": "full",

@@ -10,5 +10,5 @@

"javascript"
],
"suggestedBy": "ecc-tools-repo-analysis",
"createdAt": "2026-04-01T22:51:05.478Z"
"createdAt": "2026-04-02T03:21:37.492Z"
}

@@ -26,7 +26,7 @@ Generated by ECC Tools from repository history. Review before treating it as a h

- feature-development: Standard feature implementation workflow
- refactoring: Code refactoring and cleanup workflow
- add-new-skill: Adds a new skill to the repository, enabling new agent capabilities or workflows.
- add-new-skill: Adds a new skill to the system, including documentation and registration in relevant indexes.

## Review Reminder
@@ -1,403 +1,165 @@

---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---

```markdown
# everything-claude-code Development Patterns

# Everything Claude Code Conventions

> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-01
> Auto-generated skill from repository analysis

## Overview

This skill teaches Claude the development patterns and conventions used in everything-claude-code.

## Tech Stack

- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate

## When to Use This Skill

Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format

## Commit Conventions

Follow these commit message conventions based on 500 analyzed commits.

### Commit Style: Conventional Commits

### Prefixes Used

- `fix`
- `feat`
- `docs`
- `chore`

### Message Guidelines

- Average message length: ~57 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")

*Commit message example*

```text
feat: add everything-claude-code ECC bundle (.claude/commands/add-new-skill.md)
```

*Commit message example*

```text
refactor: collapse legacy command bodies into skills
```

*Commit message example*

```text
fix: dedupe managed hooks by semantic identity
```

*Commit message example*

```text
docs: close bundle drift and sync plugin guidance
```

*Commit message example*

```text
chore: ignore local orchestration artifacts
```

*Commit message example*

```text
feat: add everything-claude-code ECC bundle (.claude/commands/refactoring.md)
```

*Commit message example*

```text
feat: add everything-claude-code ECC bundle (.claude/commands/feature-development.md)
```

*Commit message example*

```text
feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)
```
## Architecture

### Project Structure: Single Package

This project uses **hybrid** module organization.

### Configuration Files

- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`

### Guidelines

- This project uses a hybrid organization
- Follow existing patterns when adding new code

## Code Style

### Language: JavaScript

### Naming Conventions

| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |

### Import Style: Relative Imports

### Export Style: Mixed Style

*Preferred import style*

```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```

## Testing

### Test Framework

No specific test framework detected — use the repository's existing test patterns.

### File Pattern: `*.test.js`

### Test Types

- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services

### Coverage

This project has coverage reporting configured. Aim for 80%+ coverage.

## Error Handling

### Error Handling Style: Try-Catch Blocks

*Standard error handling pattern*

```typescript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
## Common Workflows

These workflows were detected from analyzing commit patterns.

### Feature Development

Standard feature implementation workflow

**Frequency**: ~19 times per month

**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation

**Files typically involved**:
- `.opencode/*`
- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`

**Example commit sequence**:
```
feat: install claude-hud plugin (jarrodwatts/claude-hud) (#1041)
feat: add GAN-style generator-evaluator harness (#1029)
feat(agents,skills): add opensource-pipeline — 3-agent workflow for safe public releases (#1036)
```

### Refactoring

Code refactoring and cleanup workflow

**Frequency**: ~2 times per month

**Steps**:
1. Ensure tests pass before refactor
2. Refactor code structure
3. Verify tests still pass

**Files typically involved**:
- `src/**/*`

**Example commit sequence**:
```
refactor: collapse legacy command bodies into skills
feat: add connected operator workflow skills
feat: expand lead intelligence outreach channels
```
This skill teaches the core development patterns, coding conventions, and collaborative workflows used in the `everything-claude-code` JavaScript repository. The project is modular, skill-oriented, and emphasizes clear documentation, conventional commits, and extensibility via skills, agents, commands, and install targets. This guide will help you contribute effectively by following established patterns and using the right commands for common tasks.

## Coding Conventions

**File Naming**

- Use `camelCase` for JavaScript files and directories.
- Example: `mySkill.js`, `installTarget.js`

**Import Style**

- Use relative imports.
- Example:

```js
const helper = require('./helper');
import { doSomething } from '../utils/doSomething';
```

**Export Style**

- Mixed: both CommonJS (`module.exports`) and ES6 (`export`) exports are used.
- Example (CommonJS):

```js
module.exports = function myFunction() { ... };
```

- Example (ES6):

```js
export function myFunction() { ... }
```

**Commit Messages**

- Use [Conventional Commits](https://www.conventionalcommits.org/):
  - Prefixes: `fix`, `feat`, `docs`, `chore`
  - Example: `feat: add new agent pipeline for document processing`

## Workflows

### Add New Skill

**Trigger:** When introducing a new skill (capability/module) to the platform
**Command:** `/add-skill`

Adds a new skill to the repository, enabling new agent capabilities or workflows.

1. Create a new `SKILL.md` under `skills/<skill-name>/` or `.agents/skills/<skill-name>/`.
2. Optionally add related assets or references in the skill directory.
3. Update documentation:
   - `AGENTS.md`
   - `README.md`
   - `README.zh-CN.md`
   - `docs/zh-CN/AGENTS.md`
4. If the skill is installable, update:
   - `manifests/install-components.json` or
   - `manifests/install-modules.json`
5. Optionally update `WORKING-CONTEXT.md`.

**Frequency**: ~4 times per month

**Steps**:
1. Create a new SKILL.md file under skills/ or .agents/skills/ or .claude/skills/
2. Optionally update documentation (AGENTS.md, README.md, WORKING-CONTEXT.md)
3. Optionally add supporting files (e.g., manifests/install-modules.json)

**Files typically involved**:
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`

**Example commit sequence**:
**Example directory structure:**

```
Create a new SKILL.md file under skills/ or .agents/skills/ or .claude/skills/
Optionally update documentation (AGENTS.md, README.md, WORKING-CONTEXT.md)
Optionally add supporting files (e.g., manifests/install-modules.json)
skills/
  myNewSkill/
    SKILL.md
    index.js
    assets/
```

### Add Or Update Agent

Adds or updates agent definitions and registers them in configuration files.

**Frequency**: ~2 times per month

**Steps**:
1. Add or update agent definition file (e.g., agents/*.md or .opencode/prompts/agents/*.txt)
2. Update agent registry/configuration (e.g., .opencode/opencode.json, AGENTS.md)

**Files typically involved**:
- `agents/*.md`
- `.opencode/prompts/agents/*.txt`
- `.opencode/opencode.json`
- `AGENTS.md`

**Example commit sequence**:
```
Add or update agent definition file (e.g., agents/*.md or .opencode/prompts/agents/*.txt)
Update agent registry/configuration (e.g., .opencode/opencode.json, AGENTS.md)
```

### Add Or Update Command

Adds or updates command workflow files for agentic operations.

**Frequency**: ~2 times per month

**Steps**:
1. Create or update command markdown file under commands/
2. Optionally update documentation or index files

**Files typically involved**:
- `commands/*.md`

**Example commit sequence**:
```
Create or update command markdown file under commands/
Optionally update documentation or index files
```

### Feature Or Skill Bundle

Adds a bundle of related features, skills, and documentation for a new workflow or capability.

**Frequency**: ~2 times per month

**Steps**:
1. Add multiple agent and/or skill files
2. Add supporting commands and scripts
3. Add or update documentation and examples

**Files typically involved**:
- `agents/*.md`
- `skills/*/SKILL.md`
- `commands/*.md`
- `scripts/*.sh`
- `examples/*`

**Example commit sequence**:
```
Add multiple agent and/or skill files
Add supporting commands and scripts
Add or update documentation and examples
```

### Documentation Update

Updates documentation to reflect new features, workflows, or guidance.

**Frequency**: ~3 times per month

**Steps**:
1. Edit documentation files (README.md, AGENTS.md, WORKING-CONTEXT.md, docs/...)
2. Optionally update or add new guidance files

**Files typically involved**:
- `README.md`
- `AGENTS.md`
- `WORKING-CONTEXT.md`
- `docs/**/*.md`

**Example commit sequence**:
```
Edit documentation files (README.md, AGENTS.md, WORKING-CONTEXT.md, docs/...)
Optionally update or add new guidance files
```

### Dependency Or Schema Update

Updates dependencies or schema files, often in response to new features or external updates.

**Frequency**: ~2 times per month

**Steps**:
1. Edit package.json, yarn.lock, or other lockfiles
2. Edit schema files under schemas/
3. Optionally update related scripts or manifests

**Files typically involved**:
- `package.json`
- `yarn.lock`
- `schemas/*.json`

**Example commit sequence**:
```
Edit package.json, yarn.lock, or other lockfiles
Edit schema files under schemas/
Optionally update related scripts or manifests
```

### CI/CD Workflow Update

Updates CI/CD workflow files, usually for dependency bumps or workflow improvements.

**Frequency**: ~2 times per month

**Steps**:
1. Edit workflow files under .github/workflows/

**Files typically involved**:
- `.github/workflows/*.yml`

**Example commit sequence**:
```
Edit workflow files under .github/workflows/
```

## Best Practices

Based on analysis of the codebase, follow these practices:

### Do

- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports

### Don't

- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion

---

*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*

### Add New Agent or Pipeline

**Trigger:** When introducing a new agent or orchestrated workflow (pipeline)
**Command:** `/add-agent-pipeline`

1. Create agent definition files under `agents/`.
2. Create or update a `SKILL.md` for the orchestrator under `skills/<pipeline-name>/`.
3. Update `AGENTS.md` and `README.md` to document the new agent(s)/pipeline.
4. Optionally add supporting commands, scripts, or documentation.

---

### Add or Extend Command

**Trigger:** When adding or enhancing CLI commands
**Command:** `/add-command`

1. Create or update command markdown files under `commands/`.
2. Incorporate review feedback and fixes as needed.
3. Document new commands in `AGENTS.md` or other relevant docs.

---

### Add Install Target or Adapter

**Trigger:** When supporting a new install target (plugin, IDE, platform)
**Command:** `/add-install-target`

1. Create a new directory for the install target (e.g., `.codebuddy/`, `.gemini/`).
2. Add install/uninstall scripts and documentation in the new directory.
3. Update `manifests/install-modules.json` and relevant schemas.
4. Update or add scripts in `scripts/lib/install-targets/<target>.js`.
5. Update or add tests for the new install target.

**Example:**
```
.codebuddy/
  install.sh
  uninstall.sh
  README.md
scripts/lib/install-targets/codebuddy.js
tests/lib/install-targets.test.js
```

---

### Update or Harden Hooks

**Trigger:** When improving, refactoring, or fixing hooks (e.g., CI, formatting, session management)
**Command:** `/update-hook`

1. Update `hooks/hooks.json` to change hook configuration.
2. Update or add scripts in `scripts/hooks/*.js` or `scripts/hooks/*.sh`.
3. Update or add tests for the affected hooks.
4. Optionally update related documentation.

---

### Documentation Sync and Guidance Update

**Trigger:** When updating documentation to reflect new features, skills, or workflows
**Command:** `/sync-docs`

1. Update `README.md`, `README.zh-CN.md`, and/or `AGENTS.md`.
2. Update `docs/zh-CN/AGENTS.md` and `docs/zh-CN/README.md`.
3. Update or add `WORKING-CONTEXT.md`.
4. Optionally update `the-shortform-guide.md` or other guidance files.

---

### Dependency Bump via Dependabot

**Trigger:** When a dependency update is triggered by Dependabot or similar automation
**Command:** `/bump-dependency`

1. Update `package.json` and/or `yarn.lock` for npm dependencies.
2. Update `.github/workflows/*.yml` for GitHub Actions dependencies.
3. Commit with a standardized message and co-author.

---

## Testing Patterns

- Test files use the pattern `*.test.js`.
- Testing framework is not explicitly specified; check test files for usage.
- Place tests alongside or within a `tests/` directory, matching the structure of the code under test.
- Example test file:

```
tests/lib/install-targets.test.js
```

## Commands

| Command | Purpose |
|---------------------|------------------------------------------------------------------|
| /add-skill | Add a new skill, including docs and registration |
| /add-agent-pipeline | Add a new agent or orchestrated pipeline |
| /add-command | Add or extend a CLI command |
| /add-install-target | Add support for a new install target or adapter |
| /update-hook | Refactor or fix hooks and related scripts |
| /sync-docs | Synchronize and update documentation across contexts/languages |
| /bump-dependency | Automated dependency update via Dependabot or similar automation |
```
@@ -11,5 +11,5 @@

".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
],
"updatedAt": "2026-04-01T22:44:14.561Z"
"updatedAt": "2026-04-02T03:20:48.238Z"
}

@@ -1,6 +1,6 @@

# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** providing 36 specialized agents, 147 skills, 68 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 36 specialized agents, 150 skills, 68 commands, and automated hook workflows for software development.

**Version:** 1.9.0

@@ -146,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat

```
agents/ — 36 specialized subagents
skills/ — 147 workflow skills and domain knowledge
skills/ — 150 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
```

@@ -225,7 +225,7 @@ For manual install instructions see the README in the `rules/` folder. When copy

/plugin list everything-claude-code@everything-claude-code
```

**That's it!** You now have access to 36 agents, 147 skills, and 68 legacy command shims.
**That's it!** You now have access to 36 agents, 150 skills, and 68 legacy command shims.

### Multi-model commands require additional setup

@@ -1120,7 +1120,7 @@ The configuration is automatically detected from `.opencode/opencode.json`.

|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 147 skills | PASS: 37 skills | **Claude Code leads** |
| Skills | PASS: 150 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |

@@ -1229,7 +1229,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e

|---------|------------|------------|-----------|----------|
| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 147 | Shared | 10 (native format) | 37 |
| **Skills** | 150 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/

/plugin list everything-claude-code@everything-claude-code
```

**完成!** 你现在可以使用 36 个代理、147 个技能和 68 个命令。
**完成!** 你现在可以使用 36 个代理、150 个技能和 68 个命令。

### multi-* 命令需要额外配置
@@ -36,6 +36,7 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa

- continue one-by-one audit of overlapping or low-signal skill content
- move repo guidance and contribution flow to skills-first, leaving commands only as explicit compatibility shims
- add operator skills that wrap connected surfaces instead of exposing only raw APIs or disconnected primitives
- land the canonical voice system, network-optimization lane, and reusable Manim explainer lane
- Security:
  - keep dependency posture clean
  - preserve self-contained hook and MCP behavior

@@ -59,6 +60,10 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa

- Direct-port candidates landed after audit:
  - `#1078` hook-id dedupe for managed Claude hook reinstalls
  - `#844` ui-demo skill
  - `#1110` install-time Claude hook root resolution
  - `#1106` portable Codex Context7 key extraction
  - `#1107` Codex baseline merge and sample agent-role sync
  - `#1119` stale CI/lint cleanup that still contained safe low-risk fixes
- Port or rebuild inside ECC after full audit:
  - `#894` Jira integration
  - `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces

@@ -97,3 +102,11 @@ Keep this file detailed for only the current sprint, blockers, and next actions.

- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now hold the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification.
- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale.
- 2026-04-01: Added the first connected-workflow operator lane as ECC-native skills instead of leaving the surface as raw plugins or APIs: `workspace-surface-audit`, `customer-billing-ops`, `project-flow-ops`, and `google-workspace-ops`. These are tracked under the new `operator-workflows` install module.
- 2026-04-01: Direct-ported the real fix from the unresolved hook-path PR lane into the active installer. Claude installs now replace `${CLAUDE_PLUGIN_ROOT}` with the concrete install root in both `settings.json` and the copied `hooks/hooks.json`, which keeps PreToolUse/PostToolUse hooks working outside plugin-managed env injection.
- 2026-04-01: Replaced the GNU-only `grep -P` parser in `scripts/sync-ecc-to-codex.sh` with a portable Node parser for Context7 key extraction. Added source-level regression coverage so BSD/macOS syncs do not drift back to non-portable parsing.
- 2026-04-01: Targeted regression suite after the direct ports is green: `tests/scripts/install-apply.test.js`, `tests/scripts/sync-ecc-to-codex.test.js`, and `tests/scripts/codex-hooks.test.js`.
- 2026-04-01: Ported the useful core of `#1107` directly into `main` as an add-only Codex baseline merge. `scripts/sync-ecc-to-codex.sh` now fills missing non-MCP defaults from `.codex/config.toml`, syncs sample agent role files into `~/.codex/agents`, and preserves user config instead of replacing it. Added regression coverage for sparse configs and implicit parent tables.
- 2026-04-01: Ported the safe low-risk cleanup from `#1119` directly into `main` instead of keeping an obsolete CI PR open. This included `.mjs` eslint handling, stricter null checks, Windows home-dir coverage in bash-log tests, and longer Trae shell-test timeouts.
- 2026-04-01: Added `brand-voice` as the canonical source-derived writing-style system and wired the content lane to treat it as the shared voice source of truth instead of duplicating partial style heuristics across skills.
- 2026-04-01: Added `connections-optimizer` as the review-first social-graph reorganization workflow for X and LinkedIn, with explicit pruning modes, browser fallback expectations, and Apple Mail drafting guidance.
- 2026-04-01: Added `manim-video` as the reusable technical explainer lane and seeded it with a starter network-graph scene so launch and systems animations do not depend on one-off scratch scripts.
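The add-only baseline merge described in the `#1107` entry above (fill missing defaults, never overwrite user config) can be sketched as a small recursive merge. This is an illustrative model of the behavior, not the actual `merge-codex-config.js` implementation; the function name is hypothetical.

```python
def merge_baseline(user_cfg, baseline):
    """Add-only merge: copy baseline keys the user has not set.

    Nested tables recurse so sparse user configs still gain missing
    sub-keys, while every value the user already set is preserved.
    """
    merged = dict(user_cfg)
    for key, value in baseline.items():
        if key not in merged:
            merged[key] = value  # fill a missing default
        elif isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_baseline(merged[key], value)  # recurse into tables
        # otherwise: user value wins, baseline is ignored
    return merged
```

The key property is that running the merge repeatedly is safe: once a key exists on the user side, later syncs never touch it.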
@@ -1,6 +1,6 @@
|
||||
# Everything Claude Code (ECC) — 智能体指令
|
||||
|
||||
这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、147 项技能、68 条命令以及自动化钩子工作流,用于软件开发。
|
||||
这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、150 项技能、68 条命令以及自动化钩子工作流,用于软件开发。
|
||||
|
||||
**版本:** 1.9.0
|
||||
|
||||
@@ -147,7 +147,7 @@
|
||||
|
||||
```
|
||||
agents/ — 36 个专业子代理
|
||||
skills/ — 147 个工作流技能和领域知识
|
||||
skills/ — 150 个工作流技能和领域知识
|
||||
commands/ — 68 个斜杠命令
|
||||
hooks/ — 基于触发的自动化
|
||||
rules/ — 始终遵循的指导方针(通用 + 每种语言)
|
||||
|
||||
@@ -209,7 +209,7 @@ npx ecc-install typescript
|
||||
/plugin list everything-claude-code@everything-claude-code
|
||||
```
|
||||
|
||||
**搞定!** 你现在可以使用 36 个智能体、147 项技能和 68 个命令了。
|
||||
**搞定!** 你现在可以使用 36 个智能体、150 项技能和 68 个命令了。
|
||||
|
||||
***
|
||||
|
||||
@@ -1096,7 +1096,7 @@ opencode
|
||||
|---------|-------------|----------|--------|
|
||||
| 智能体 | PASS: 36 个 | PASS: 12 个 | **Claude Code 领先** |
|
||||
| 命令 | PASS: 68 个 | PASS: 31 个 | **Claude Code 领先** |
|
||||
| 技能 | PASS: 147 项 | PASS: 37 项 | **Claude Code 领先** |
|
||||
| 技能 | PASS: 150 项 | PASS: 37 项 | **Claude Code 领先** |
|
||||
| 钩子 | PASS: 8 种事件类型 | PASS: 11 种事件 | **OpenCode 更多!** |
|
||||
| 规则 | PASS: 29 条 | PASS: 13 条指令 | **Claude Code 领先** |
|
||||
| MCP 服务器 | PASS: 14 个 | PASS: 完整 | **完全对等** |
|
||||
@@ -1208,7 +1208,7 @@ ECC 是**第一个最大化利用每个主要 AI 编码工具的插件**。以
|
||||
|---------|------------|------------|-----------|----------|
|
||||
| **智能体** | 36 | 共享 (AGENTS.md) | 共享 (AGENTS.md) | 12 |
|
||||
| **命令** | 68 | 共享 | 基于指令 | 31 |
|
||||
| **技能** | 147 | 共享 | 10 (原生格式) | 37 |
|
||||
| **技能** | 150 | 共享 | 10 (原生格式) | 37 |
|
||||
| **钩子事件** | 8 种类型 | 15 种类型 | 暂无 | 11 种类型 |
|
||||
| **钩子脚本** | 20+ 个脚本 | 16 个脚本 (DRY 适配器) | N/A | 插件钩子 |
|
||||
| **规则** | 34 (通用 + 语言) | 34 (YAML 前页) | 基于指令 | 13 条指令 |
|
||||
|
||||
@@ -24,5 +24,11 @@ module.exports = [
|
||||
'no-undef': 'error',
|
||||
'eqeqeq': 'warn'
|
||||
}
|
||||
},
|
||||
{
|
||||
files: ['**/*.mjs'],
|
||||
languageOptions: {
|
||||
sourceType: 'module'
|
||||
}
|
||||
}
|
||||
];
|
||||
|
||||
@@ -140,7 +140,7 @@
|
||||
{
|
||||
"id": "capability:content",
|
||||
"family": "capability",
|
||||
"description": "Business, writing, market, and investor communication skills.",
|
||||
"description": "Business, writing, market, investor communication, and reusable voice-system skills.",
|
||||
"modules": [
|
||||
"business-content"
|
||||
]
|
||||
@@ -148,7 +148,7 @@
|
||||
{
|
||||
"id": "capability:operators",
|
||||
"family": "capability",
|
||||
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
|
||||
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
|
||||
"modules": [
|
||||
"operator-workflows"
|
||||
]
|
||||
@@ -164,7 +164,7 @@
   {
     "id": "capability:media",
     "family": "capability",
-    "description": "Media generation and AI-assisted editing skills.",
+    "description": "Media generation, technical explainers, and AI-assisted editing skills.",
     "modules": [
       "media-generation"
     ]
@@ -282,6 +282,7 @@
     "description": "Business, writing, market, and investor communication skills.",
     "paths": [
       "skills/article-writing",
+      "skills/brand-voice",
       "skills/content-engine",
       "skills/investor-materials",
       "skills/investor-outreach",
@@ -306,8 +307,9 @@
   {
     "id": "operator-workflows",
     "kind": "skills",
-    "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
+    "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
     "paths": [
+      "skills/connections-optimizer",
       "skills/customer-billing-ops",
       "skills/google-workspace-ops",
       "skills/project-flow-ops",
@@ -354,9 +356,10 @@
   {
     "id": "media-generation",
     "kind": "skills",
-    "description": "Media generation and AI-assisted editing skills.",
+    "description": "Media generation, technical explainers, and AI-assisted editing skills.",
     "paths": [
       "skills/fal-ai-media",
+      "skills/manim-video",
       "skills/remotion-video-creation",
       "skills/ui-demo",
       "skills/video-editing",
@@ -80,6 +80,7 @@
     "scripts/orchestrate-worktrees.js",
     "scripts/setup-package-manager.js",
     "scripts/skill-create-output.js",
+    "scripts/codex/merge-codex-config.js",
     "scripts/codex/merge-mcp-config.js",
     "scripts/repair.js",
     "scripts/harness-audit.js",
317 scripts/codex/merge-codex-config.js Normal file
@@ -0,0 +1,317 @@
|
||||
#!/usr/bin/env node
'use strict';

/**
 * Merge the non-MCP Codex baseline from `.codex/config.toml` into a target
 * `config.toml` without overwriting existing user choices.
 *
 * Strategy: add-only.
 * - Missing root keys are inserted before the first TOML table.
 * - Missing table keys are appended to existing tables.
 * - Missing tables are appended to the end of the file.
 */

const fs = require('fs');
const path = require('path');

let TOML;
try {
  TOML = require('@iarna/toml');
} catch {
  console.error('[ecc-codex] Missing dependency: @iarna/toml');
  console.error('[ecc-codex] Run: npm install (from the ECC repo root)');
  process.exit(1);
}
const ROOT_KEYS = ['approval_policy', 'sandbox_mode', 'web_search', 'notify', 'persistent_instructions'];
const TABLE_PATHS = [
  'features',
  'profiles.strict',
  'profiles.yolo',
  'agents',
  'agents.explorer',
  'agents.reviewer',
  'agents.docs_researcher',
];
const TOML_HEADER_RE = /^[ \t]*(?:\[[^[\]\n][^\]\n]*\]|\[\[[^[\]\n][^\]\n]*\]\])[ \t]*(?:#.*)?$/m;

function log(message) {
  console.log(`[ecc-codex] ${message}`);
}

function warn(message) {
  console.warn(`[ecc-codex] WARNING: ${message}`);
}

function getNested(obj, pathParts) {
  let current = obj;
  for (const part of pathParts) {
    if (!current || typeof current !== 'object' || !(part in current)) {
      return undefined;
    }
    current = current[part];
  }
  return current;
}

function setNested(obj, pathParts, value) {
  let current = obj;
  for (let i = 0; i < pathParts.length - 1; i += 1) {
    const part = pathParts[i];
    if (!current[part] || typeof current[part] !== 'object' || Array.isArray(current[part])) {
      current[part] = {};
    }
    current = current[part];
  }
  current[pathParts[pathParts.length - 1]] = value;
}

function findFirstTableIndex(raw) {
  const match = TOML_HEADER_RE.exec(raw);
  return match ? match.index : -1;
}
function findTableRange(raw, tablePath) {
  const escaped = tablePath.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const headerPattern = new RegExp(`^[ \\t]*\\[${escaped}\\][ \\t]*(?:#.*)?$`, 'm');
  const match = headerPattern.exec(raw);
  if (!match) {
    return null;
  }

  const headerEnd = raw.indexOf('\n', match.index);
  const bodyStart = headerEnd === -1 ? raw.length : headerEnd + 1;
  const nextHeaderRel = raw.slice(bodyStart).search(TOML_HEADER_RE);
  const bodyEnd = nextHeaderRel === -1 ? raw.length : bodyStart + nextHeaderRel;
  return { bodyStart, bodyEnd };
}

function ensureTrailingNewline(text) {
  return text.endsWith('\n') ? text : `${text}\n`;
}

function insertBeforeFirstTable(raw, block) {
  const normalizedBlock = ensureTrailingNewline(block.trimEnd());
  const firstTableIndex = findFirstTableIndex(raw);
  if (firstTableIndex === -1) {
    const prefix = raw.trimEnd();
    return prefix ? `${prefix}\n${normalizedBlock}` : normalizedBlock;
  }

  const before = raw.slice(0, firstTableIndex).trimEnd();
  const after = raw.slice(firstTableIndex).replace(/^\n+/, '');
  return `${before}\n\n${normalizedBlock}\n${after}`;
}

function appendBlock(raw, block) {
  const prefix = raw.trimEnd();
  const normalizedBlock = block.trimEnd();
  return prefix ? `${prefix}\n\n${normalizedBlock}\n` : `${normalizedBlock}\n`;
}

function stringifyValue(value) {
  return TOML.stringify({ value }).trim().replace(/^value = /, '');
}
function updateInlineTableKeys(raw, tablePath, missingKeys) {
  const pathParts = tablePath.split('.');
  if (pathParts.length < 2) {
    return null;
  }

  const parentPath = pathParts.slice(0, -1).join('.');
  const parentRange = findTableRange(raw, parentPath);
  if (!parentRange) {
    return null;
  }

  const tableKey = pathParts[pathParts.length - 1];
  const escapedKey = tableKey.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const body = raw.slice(parentRange.bodyStart, parentRange.bodyEnd);
  const lines = body.split('\n');
  for (let index = 0; index < lines.length; index += 1) {
    const inlinePattern = new RegExp(`^(\\s*${escapedKey}\\s*=\\s*\\{)(.*?)(\\}\\s*(?:#.*)?)$`);
    const match = inlinePattern.exec(lines[index]);
    if (!match) {
      continue;
    }

    const additions = Object.entries(missingKeys)
      .map(([key, value]) => `${key} = ${stringifyValue(value)}`)
      .join(', ');
    const existingEntries = match[2].trim();
    const nextEntries = existingEntries ? `${existingEntries}, ${additions}` : additions;
    lines[index] = `${match[1]}${nextEntries}${match[3]}`;
    return `${raw.slice(0, parentRange.bodyStart)}${lines.join('\n')}${raw.slice(parentRange.bodyEnd)}`;
  }
  return null;
}

function appendImplicitTable(raw, tablePath, missingKeys) {
  const candidate = appendBlock(raw, stringifyTable(tablePath, missingKeys));
  try {
    TOML.parse(candidate);
    return candidate;
  } catch {
    return null;
  }
}
function appendToTable(raw, tablePath, block, missingKeys = null) {
  const range = findTableRange(raw, tablePath);
  if (!range) {
    if (missingKeys) {
      const inlineUpdated = updateInlineTableKeys(raw, tablePath, missingKeys);
      if (inlineUpdated) {
        return inlineUpdated;
      }

      const appendedTable = appendImplicitTable(raw, tablePath, missingKeys);
      if (appendedTable) {
        return appendedTable;
      }
    }
    warn(`Skipping missing keys for [${tablePath}] because it has no standalone header and could not be safely updated`);
    return raw;
  }

  const before = raw.slice(0, range.bodyEnd).trimEnd();
  const after = raw.slice(range.bodyEnd).replace(/^\n*/, '\n');
  return `${before}\n${block.trimEnd()}\n${after}`;
}

function stringifyRootKeys(keys) {
  return TOML.stringify(keys).trim();
}

function stringifyTable(tablePath, value) {
  const scalarOnly = {};
  for (const [key, entryValue] of Object.entries(value)) {
    if (entryValue && typeof entryValue === 'object' && !Array.isArray(entryValue)) {
      continue;
    }
    scalarOnly[key] = entryValue;
  }

  const snippet = {};
  setNested(snippet, tablePath.split('.'), scalarOnly);
  return TOML.stringify(snippet).trim();
}

function stringifyTableKeys(tableValue) {
  const lines = [];
  for (const [key, value] of Object.entries(tableValue)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      continue;
    }
    lines.push(TOML.stringify({ [key]: value }).trim());
  }
  return lines.join('\n');
}
function main() {
  const args = process.argv.slice(2);
  const configPath = args.find(arg => !arg.startsWith('-'));
  const dryRun = args.includes('--dry-run');

  if (!configPath) {
    console.error('Usage: merge-codex-config.js <config.toml> [--dry-run]');
    process.exit(1);
  }

  const referencePath = path.join(__dirname, '..', '..', '.codex', 'config.toml');
  if (!fs.existsSync(referencePath)) {
    console.error(`[ecc-codex] Reference config not found: ${referencePath}`);
    process.exit(1);
  }

  if (!fs.existsSync(configPath)) {
    console.error(`[ecc-codex] Config file not found: ${configPath}`);
    process.exit(1);
  }

  const raw = fs.readFileSync(configPath, 'utf8');
  const referenceRaw = fs.readFileSync(referencePath, 'utf8');

  let targetConfig;
  let referenceConfig;
  try {
    targetConfig = TOML.parse(raw);
    referenceConfig = TOML.parse(referenceRaw);
  } catch (error) {
    console.error(`[ecc-codex] Failed to parse TOML: ${error.message}`);
    process.exit(1);
  }

  const missingRootKeys = {};
  for (const key of ROOT_KEYS) {
    if (referenceConfig[key] !== undefined && targetConfig[key] === undefined) {
      missingRootKeys[key] = referenceConfig[key];
    }
  }

  const missingTables = [];
  const missingTableKeys = [];
  for (const tablePath of TABLE_PATHS) {
    const pathParts = tablePath.split('.');
    const referenceValue = getNested(referenceConfig, pathParts);
    if (referenceValue === undefined) {
      continue;
    }

    const targetValue = getNested(targetConfig, pathParts);
    if (targetValue === undefined) {
      missingTables.push(tablePath);
      continue;
    }

    const missingKeys = {};
    for (const [key, value] of Object.entries(referenceValue)) {
      if (value && typeof value === 'object' && !Array.isArray(value)) {
        continue;
      }
      if (targetValue[key] === undefined) {
        missingKeys[key] = value;
      }
    }

    if (Object.keys(missingKeys).length > 0) {
      missingTableKeys.push({ tablePath, missingKeys });
    }
  }

  if (
    Object.keys(missingRootKeys).length === 0 &&
    missingTables.length === 0 &&
    missingTableKeys.length === 0
  ) {
    log('All baseline Codex settings already present. Nothing to do.');
    return;
  }

  let nextRaw = raw;
  if (Object.keys(missingRootKeys).length > 0) {
    log(`  [add-root] ${Object.keys(missingRootKeys).join(', ')}`);
    nextRaw = insertBeforeFirstTable(nextRaw, stringifyRootKeys(missingRootKeys));
  }

  for (const { tablePath, missingKeys } of missingTableKeys) {
    log(`  [add-keys] [${tablePath}] -> ${Object.keys(missingKeys).join(', ')}`);
    nextRaw = appendToTable(nextRaw, tablePath, stringifyTableKeys(missingKeys), missingKeys);
  }

  for (const tablePath of missingTables) {
    log(`  [add-table] [${tablePath}]`);
    nextRaw = appendBlock(nextRaw, stringifyTable(tablePath, getNested(referenceConfig, tablePath.split('.'))));
  }

  if (dryRun) {
    log('Dry run — would write the merged Codex baseline.');
    return;
  }

  fs.writeFileSync(configPath, nextRaw, 'utf8');
  log('Done. Baseline Codex settings merged.');
}

main();
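The add-only contract the script enforces (existing user keys always win, only missing keys are filled from the reference) can be sketched in isolation. This is a minimal illustration of the strategy, not the script's actual API, and the config keys shown are only sample values:

```javascript
// Add-only merge sketch: copy a key from `reference` into `target`
// only when the target does not define it at all.
function addOnlyMerge(target, reference) {
  const merged = { ...target };
  for (const [key, value] of Object.entries(reference)) {
    if (merged[key] === undefined) {
      merged[key] = value; // fill the gap
    }
    // keys the user already set are never overwritten
  }
  return merged;
}

const user = { approval_policy: 'never', web_search: false };
const baseline = { approval_policy: 'on-request', web_search: true, sandbox_mode: 'workspace-write' };
console.log(addOnlyMerge(user, baseline));
// user choices survive; only the missing sandbox_mode is added
```

The real script applies the same rule at three levels (root keys, table keys, whole tables) while editing the TOML text in place instead of re-serializing the whole file.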
@@ -67,7 +67,7 @@ function findFileIssues(filePath) {

   try {
     const content = getStagedFileContent(filePath);
-    if (content == null) {
+    if (content === null || content === undefined) {
       return issues;
     }
     const lines = content.split('\n');
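The two checks in this hunk are behaviorally identical: `x == null` is loosely true for exactly `null` and `undefined` and nothing else, so the rewrite is a readability choice rather than a behavior change. A quick sketch:

```javascript
// `x == null` matches exactly null and undefined, nothing else.
const looseNull = x => x == null;
const explicitNull = x => x === null || x === undefined;

const samples = [null, undefined, 0, '', false, NaN, 'text'];
for (const s of samples) {
  console.assert(looseNull(s) === explicitNull(s));
}
```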
@@ -20,16 +20,43 @@ function readJsonObject(filePath, label) {
   return parsed;
 }
 
-function buildLegacyHookSignature(entry) {
+function replacePluginRootPlaceholders(value, pluginRoot) {
+  if (!pluginRoot) {
+    return value;
+  }
+
+  if (typeof value === 'string') {
+    return value.split('${CLAUDE_PLUGIN_ROOT}').join(pluginRoot);
+  }
+
+  if (Array.isArray(value)) {
+    return value.map(item => replacePluginRootPlaceholders(item, pluginRoot));
+  }
+
+  if (value && typeof value === 'object') {
+    return Object.fromEntries(
+      Object.entries(value).map(([key, nestedValue]) => [
+        key,
+        replacePluginRootPlaceholders(nestedValue, pluginRoot),
+      ])
+    );
+  }
+
+  return value;
+}
+
+function buildLegacyHookSignature(entry, pluginRoot) {
   if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
     return null;
   }
 
-  if (typeof entry.matcher !== 'string' || !Array.isArray(entry.hooks)) {
+  const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);
+
+  if (typeof normalizedEntry.matcher !== 'string' || !Array.isArray(normalizedEntry.hooks)) {
     return null;
   }
 
-  const hookSignature = entry.hooks.map(hook => JSON.stringify({
+  const hookSignature = normalizedEntry.hooks.map(hook => JSON.stringify({
     type: hook && typeof hook === 'object' ? hook.type : undefined,
     command: hook && typeof hook === 'object' ? hook.command : undefined,
     timeout: hook && typeof hook === 'object' ? hook.timeout : undefined,
@@ -37,33 +64,35 @@ function buildLegacyHookSignature(entry) {
   }));
 
   return JSON.stringify({
-    matcher: entry.matcher,
+    matcher: normalizedEntry.matcher,
     hooks: hookSignature,
   });
 }
 
-function getHookEntryAliases(entry) {
+function getHookEntryAliases(entry, pluginRoot) {
   const aliases = [];
 
   if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
     return aliases;
   }
 
-  if (typeof entry.id === 'string' && entry.id.trim().length > 0) {
-    aliases.push(`id:${entry.id.trim()}`);
+  const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);
+
+  if (typeof normalizedEntry.id === 'string' && normalizedEntry.id.trim().length > 0) {
+    aliases.push(`id:${normalizedEntry.id.trim()}`);
   }
 
-  const legacySignature = buildLegacyHookSignature(entry);
+  const legacySignature = buildLegacyHookSignature(normalizedEntry, pluginRoot);
   if (legacySignature) {
     aliases.push(`legacy:${legacySignature}`);
   }
 
-  aliases.push(`json:${JSON.stringify(entry)}`);
+  aliases.push(`json:${JSON.stringify(normalizedEntry)}`);
 
   return aliases;
 }
 
-function mergeHookEntries(existingEntries, incomingEntries) {
+function mergeHookEntries(existingEntries, incomingEntries, pluginRoot) {
   const mergedEntries = [];
   const seenEntries = new Set();
 
@@ -76,7 +105,7 @@ function mergeHookEntries(existingEntries, incomingEntries) {
       continue;
     }
 
-    const aliases = getHookEntryAliases(entry);
+    const aliases = getHookEntryAliases(entry, pluginRoot);
     if (aliases.some(alias => seenEntries.has(alias))) {
       continue;
     }
@@ -84,7 +113,7 @@ function mergeHookEntries(existingEntries, incomingEntries) {
     for (const alias of aliases) {
       seenEntries.add(alias);
     }
-    mergedEntries.push(entry);
+    mergedEntries.push(replacePluginRootPlaceholders(entry, pluginRoot));
   }
 
   return mergedEntries;
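The alias-based dedup inside `mergeHookEntries` (skip an entry when any of its aliases was already seen, then record all of its aliases) can be isolated into a small sketch; the entries and alias scheme below are made up for illustration:

```javascript
// Simplified alias-based dedup: an item is kept only if none of its
// aliases has been seen before; all of its aliases are then recorded.
function dedupeByAliases(items, aliasesOf) {
  const seen = new Set();
  const kept = [];
  for (const item of items) {
    const aliases = aliasesOf(item);
    if (aliases.some(a => seen.has(a))) continue;
    aliases.forEach(a => seen.add(a));
    kept.push(item);
  }
  return kept;
}

const entries = [
  { id: 'a', cmd: 'lint' },
  { id: 'b', cmd: 'lint' }, // shares the 'cmd:lint' alias with 'a', so it is dropped
  { id: 'c', cmd: 'test' },
];
const result = dedupeByAliases(entries, e => [`id:${e.id}`, `cmd:${e.cmd}`]);
```

Matching on several aliases at once is what lets the installer recognize a hook as already present whether it was registered by id, by legacy matcher signature, or verbatim.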
@@ -100,6 +129,7 @@ function buildMergedSettings(plan) {
     return null;
   }
 
+  const pluginRoot = plan.targetRoot;
   const hooksDestinationPath = path.join(plan.targetRoot, 'hooks', 'hooks.json');
   const hooksSourcePath = findHooksSourcePath(plan, hooksDestinationPath) || hooksDestinationPath;
   if (!fs.existsSync(hooksSourcePath)) {
@@ -107,7 +137,7 @@ function buildMergedSettings(plan) {
   }
 
   const hooksConfig = readJsonObject(hooksSourcePath, 'hooks config');
-  const incomingHooks = hooksConfig.hooks;
+  const incomingHooks = replacePluginRootPlaceholders(hooksConfig.hooks, pluginRoot);
   if (!incomingHooks || typeof incomingHooks !== 'object' || Array.isArray(incomingHooks)) {
     throw new Error(`Invalid hooks config at ${hooksSourcePath}: expected "hooks" to be a JSON object`);
   }
@@ -126,7 +156,7 @@ function buildMergedSettings(plan) {
   for (const [eventName, incomingEntries] of Object.entries(incomingHooks)) {
     const currentEntries = Array.isArray(existingHooks[eventName]) ? existingHooks[eventName] : [];
     const nextEntries = Array.isArray(incomingEntries) ? incomingEntries : [];
-    mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries);
+    mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries, pluginRoot);
   }
 
   const mergedSettings = {
@@ -137,6 +167,11 @@ function buildMergedSettings(plan) {
   return {
     settingsPath,
     mergedSettings,
+    hooksDestinationPath,
+    resolvedHooksConfig: {
+      ...hooksConfig,
+      hooks: incomingHooks,
+    },
   };
 }
 
@@ -149,6 +184,12 @@ function applyInstallPlan(plan) {
   }
 
   if (mergedSettingsPlan) {
+    fs.mkdirSync(path.dirname(mergedSettingsPlan.hooksDestinationPath), { recursive: true });
+    fs.writeFileSync(
+      mergedSettingsPlan.hooksDestinationPath,
+      JSON.stringify(mergedSettingsPlan.resolvedHooksConfig, null, 2) + '\n',
+      'utf8'
+    );
     fs.mkdirSync(path.dirname(mergedSettingsPlan.settingsPath), { recursive: true });
     fs.writeFileSync(
       mergedSettingsPlan.settingsPath,
@@ -37,8 +37,8 @@ function createSkillObservation(input) {
     ? input.skill.path.trim()
     : null;
   const success = Boolean(input.success);
-  const error = input.error == null ? null : String(input.error);
-  const feedback = input.feedback == null ? null : String(input.feedback);
+  const error = input.error === null || input.error === undefined ? null : String(input.error);
+  const feedback = input.feedback === null || input.feedback === undefined ? null : String(input.feedback);
   const variant = typeof input.variant === 'string' && input.variant.trim().length > 0
     ? input.variant.trim()
     : 'baseline';
@@ -27,8 +27,11 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
 AGENTS_FILE="$CODEX_HOME/AGENTS.md"
 AGENTS_ROOT_SRC="$REPO_ROOT/AGENTS.md"
 AGENTS_CODEX_SUPP_SRC="$REPO_ROOT/.codex/AGENTS.md"
+CODEX_AGENTS_SRC="$REPO_ROOT/.codex/agents"
+CODEX_AGENTS_DEST="$CODEX_HOME/agents"
 PROMPTS_SRC="$REPO_ROOT/commands"
 PROMPTS_DEST="$CODEX_HOME/prompts"
+BASELINE_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-codex-config.js"
 HOOKS_INSTALLER="$REPO_ROOT/scripts/codex/install-global-git-hooks.sh"
 SANITY_CHECKER="$REPO_ROOT/scripts/codex/check-codex-global-state.sh"
 CURSOR_RULES_DIR="$REPO_ROOT/.cursor/rules"
@@ -106,7 +109,23 @@ extract_toml_value() {
 
 extract_context7_key() {
   local file="$1"
-  grep -oP -- '--key",[[:space:]]*"\K[^"]+' "$file" | head -n 1 || true
+  node - "$file" <<'EOF'
+const fs = require('fs');
+
+const filePath = process.argv[2];
+let source = '';
+
+try {
+  source = fs.readFileSync(filePath, 'utf8');
+} catch {
+  process.exit(0);
+}
+
+const match = source.match(/--key",\s*"([^"]+)"/);
+if (match && match[1]) {
+  process.stdout.write(`${match[1]}\n`);
+}
+EOF
 }
 
 generate_prompt_file() {
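The heredoc replaces a `grep -oP` call, whose `-P` flag is a GNU extension, with a portable Node script. The extraction regex itself can be exercised directly; the config snippet and key value below are hypothetical:

```javascript
// Same extraction regex as the heredoc above, run against a
// hypothetical MCP config fragment containing a "--key" argument.
const sample = '"args": ["--transport", "http", "--key", "ctx7-example-123"]';
const match = sample.match(/--key",\s*"([^"]+)"/);
console.log(match && match[1]); // the captured key value
```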
@@ -130,7 +149,9 @@ MCP_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-mcp-config.js"
 
 require_path "$REPO_ROOT/AGENTS.md" "ECC AGENTS.md"
 require_path "$AGENTS_CODEX_SUPP_SRC" "ECC Codex AGENTS supplement"
+require_path "$CODEX_AGENTS_SRC" "ECC Codex agent roles"
 require_path "$PROMPTS_SRC" "ECC commands directory"
+require_path "$BASELINE_MERGE_SCRIPT" "ECC Codex baseline merge script"
 require_path "$HOOKS_INSTALLER" "ECC global git hooks installer"
 require_path "$SANITY_CHECKER" "ECC global sanity checker"
 require_path "$CURSOR_RULES_DIR" "ECC Cursor rules directory"
@@ -231,6 +252,26 @@ else
   fi
 fi
 
+log "Merging ECC Codex baseline into $CONFIG_FILE (add-only, preserving user config)"
+if [[ "$MODE" == "dry-run" ]]; then
+  node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE" --dry-run
+else
+  node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE"
+fi
+
+log "Syncing sample Codex agent role files"
+run_or_echo mkdir -p "$CODEX_AGENTS_DEST"
+for agent_file in "$CODEX_AGENTS_SRC"/*.toml; do
+  [[ -f "$agent_file" ]] || continue
+  agent_name="$(basename "$agent_file")"
+  dest="$CODEX_AGENTS_DEST/$agent_name"
+  if [[ -e "$dest" ]]; then
+    log "Keeping existing Codex agent role file: $dest"
+  else
+    run_or_echo cp "$agent_file" "$dest"
+  fi
+done
+
 # Skills are NOT synced here — Codex CLI reads directly from
 # ~/.agents/skills/ (installed by ECC installer / npx skills).
 # Copying into ~/.codex/skills/ was unnecessary.
97 skills/brand-voice/SKILL.md Normal file
@@ -0,0 +1,97 @@
|
||||
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---

# Brand Voice

Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.

## When to Activate

- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry

## Source Priority

Use the strongest real source set available, in this order:

1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy

Do not use generic platform exemplars as source material.

## Collection Workflow

1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.

## What to Extract

- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does

## Output Contract

Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).

Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.

## Affaan / ECC Defaults

If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:

- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over

## Hard Bans

Delete and rewrite any of these:

- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals

## Persistence Rules

- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.

## Downstream Use

Use this skill before or inside:

- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email

If another skill already has a partial voice capture section, this skill is the canonical source of truth.
55 skills/brand-voice/references/voice-profile-schema.md Normal file
@@ -0,0 +1,55 @@
|
||||
# Voice Profile Schema

Use this exact structure when building a reusable voice profile:

```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:

Source Set
- source 1
- source 2
- source 3

Rhythm
- short note on sentence length, pacing, and fragmentation

Compression
- how dense or explanatory the writing is

Capitalization
- conventional, mixed, or situational

Parentheticals
- how they are used and how they are not used

Question Use
- rare, frequent, rhetorical, direct, or mostly absent

Claim Style
- how claims are framed, supported, and sharpened

Preferred Moves
- concrete moves the author does use

Banned Moves
- specific patterns the author does not use

CTA Rules
- how, when, or whether to close with asks

Channel Notes
- X:
- LinkedIn:
- Email:
```

Guidelines:

- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
@@ -8,8 +8,7 @@
  * exit 0: success exit 1: no projects
  */
 
-import { readProjects, loadContext, today, CONTEXTS_DIR } from './shared.mjs';
-import { renderListTable } from './shared.mjs';
+import { readProjects, loadContext, today, renderListTable } from './shared.mjs';
 
 const cwd = process.env.PWD || process.cwd();
 const projects = readProjects();
 
@@ -11,7 +11,7 @@
  * exit 0: success exit 1: error
  */
 
-import { readFileSync, writeFileSync, existsSync, renameSync } from 'fs';
+import { readFileSync, existsSync, renameSync } from 'fs';
 import { resolve } from 'path';
 import { readProjects, writeProjects, saveContext, today, shortId, CONTEXTS_DIR } from './shared.mjs';
 
@@ -112,7 +112,11 @@ for (const [projectPath, info] of Object.entries(projects)) {
   const contextMd = existsSync(contextMdPath) ? readFileSync(contextMdPath, 'utf8') : '';
   let meta = {};
   if (existsSync(metaPath)) {
-    try { meta = JSON.parse(readFileSync(metaPath, 'utf8')); } catch {}
+    try {
+      meta = JSON.parse(readFileSync(metaPath, 'utf8'));
+    } catch (e) {
+      console.warn(`  ! ${contextDir}: invalid meta.json, continuing with defaults (${e.message})`);
+    }
   }
 
   // Extract fields from CONTEXT.md
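The widened `catch` above keeps the old fall-back-to-defaults behavior but now reports why the metadata was discarded instead of swallowing the error silently. The same pattern in isolation, with an illustrative helper name:

```javascript
// Tolerant JSON read: fall back to an empty object on bad input,
// but warn so the corruption is visible.
function readMetaOrDefault(text) {
  let meta = {};
  try {
    meta = JSON.parse(text);
  } catch (e) {
    console.warn(`invalid meta.json, continuing with defaults (${e.message})`);
  }
  return meta;
}
```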
@@ -20,8 +20,8 @@ import { readFileSync, mkdirSync, writeFileSync } from 'fs';
|
||||
import { resolve } from 'path';
|
||||
import {
|
||||
readProjects, writeProjects, loadContext, saveContext,
|
||||
today, shortId, gitSummary, nativeMemoryDir, encodeProjectPath,
|
||||
CONTEXTS_DIR, CURRENT_SESSION,
|
||||
today, shortId, gitSummary, nativeMemoryDir,
|
||||
CURRENT_SESSION,
|
||||
} from './shared.mjs';
|
||||
|
||||
const isInit = process.argv.includes('--init');
|
||||
|
||||
@@ -5,8 +5,8 @@
|
||||
* No external dependencies. Node.js stdlib only.
|
||||
*/
|
||||
|
||||
import { readFileSync, writeFileSync, existsSync, mkdirSync, readdirSync } from 'fs';
|
||||
import { resolve, basename } from 'path';
|
||||
import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
|
||||
import { resolve } from 'path';
|
||||
import { homedir } from 'os';
|
||||
import { spawnSync } from 'child_process';
|
||||
import { randomBytes } from 'crypto';
|
||||
@@ -270,7 +270,7 @@ export function renderContextMd(ctx) {
}

/** Render the bordered briefing box used by /ck:resume */
export function renderBriefingBox(ctx, meta = {}) {
export function renderBriefingBox(ctx, _meta = {}) {
const latest = ctx.sessions?.[ctx.sessions.length - 1] || {};
const W = 57;
const pad = (str, w) => {
@@ -344,7 +344,7 @@ export function renderInfoBlock(ctx) {
}

/** Render ASCII list table used by /ck:list */
export function renderListTable(entries, cwd, todayStr) {
export function renderListTable(entries, cwd, _todayStr) {
// entries: [{name, contextDir, path, context, lastUpdated}]
// Sorted alphabetically by contextDir before calling
const rows = entries.map((e, i) => {
187
skills/connections-optimizer/SKILL.md
Normal file
@@ -0,0 +1,187 @@
---
name: connections-optimizer
description: Reorganize the user's X and LinkedIn network with review-first pruning, add/follow recommendations, and channel-specific warm outreach drafted in the user's real voice. Use when the user wants to clean up following lists, grow toward current priorities, or rebalance a social graph around higher-signal relationships.
origin: ECC
---

# Connections Optimizer

Reorganize the user's network instead of treating outbound as a one-way prospecting list.

This skill handles:

- X following cleanup and expansion
- LinkedIn follow and connection analysis
- review-first prune queues
- add and follow recommendations
- warm-path identification
- Apple Mail, X DM, and LinkedIn draft generation in the user's real voice

## When to Activate

- the user wants to prune their X following
- the user wants to rebalance who they follow or stay connected to
- the user says "clean up my network", "who should I unfollow", "who should I follow", "who should I reconnect with"
- outreach quality depends on network structure, not just cold list generation

## Required Inputs

Collect or infer:

- current priorities and active work
- target roles, industries, geos, or ecosystems
- platform selection: X, LinkedIn, or both
- do-not-touch list
- mode: `light-pass`, `default`, or `aggressive`

If the user does not specify a mode, use `default`.

## Tool Requirements

### Preferred

- `x-api` for X graph inspection and recent activity
- `lead-intelligence` for target discovery and warm-path ranking
- Exa / deep research for person and company enrichment
- `brand-voice` before drafting outbound

### Fallbacks

- browser control for LinkedIn analysis and drafting
- browser control for X if API coverage is constrained
- Apple Mail or Mail.app drafting via desktop automation when email is the right channel

## Safety Defaults

- default is review-first, never blind auto-pruning
- X: prune only accounts the user follows, never followers
- LinkedIn: treat 1st-degree connection removal as manual-review-first
- do not auto-send DMs, invites, or emails
- emit a ranked action plan and drafts before any apply step

## Platform Rules

### X

- mutuals are stickier than one-way follows
- non-follow-backs can be pruned more aggressively
- heavily inactive or disappeared accounts should surface quickly
- engagement, signal quality, and bridge value matter more than raw follower count

### LinkedIn

- API-first if the user actually has LinkedIn API access
- browser workflow must work when API access is missing
- distinguish outbound follows from accepted 1st-degree connections
- outbound follows can be pruned more freely
- accepted 1st-degree connections should default to review, not auto-remove

## Modes

### `light-pass`

- prune only high-confidence low-value one-way follows
- surface the rest for review
- generate a small add/follow list

### `default`

- balanced prune queue
- balanced keep list
- ranked add/follow queue
- draft warm intros or direct outreach where useful

### `aggressive`

- larger prune queue
- lower tolerance for stale non-follow-backs
- still review-gated before apply
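
The three modes mostly differ in how aggressive the prune cut-off is. A minimal sketch of that split, where the threshold numbers and field names are illustrative assumptions rather than values this skill defines:

```python
# Hypothetical per-mode presets. The numeric values are illustrative
# assumptions, not thresholds specified by this skill.
MODE_PRESETS = {
    "light-pass": {"prune_threshold": 0.85, "max_prune_fraction": 0.05},
    "default": {"prune_threshold": 0.65, "max_prune_fraction": 0.15},
    "aggressive": {"prune_threshold": 0.50, "max_prune_fraction": 0.30},
}


def resolve_mode(mode=None):
    """Fall back to `default` when the user does not specify a mode."""
    return MODE_PRESETS.get(mode or "default", MODE_PRESETS["default"])
```

Every mode still routes through the review gate; the preset only controls how large the prune queue is allowed to grow.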

## Scoring Model

Use these positive signals:

- reciprocity
- recent activity
- alignment to current priorities
- network bridge value
- role relevance
- real engagement history
- recent presence and responsiveness

Use these negative signals:

- disappeared or abandoned account
- stale one-way follow
- off-priority topic cluster
- low-value noise
- repeated non-response
- no follow-back when many better replacements exist

|

## Workflow

1. Capture priorities, do-not-touch constraints, and selected platforms.
2. Pull the current following / connection inventory.
3. Score prune candidates with explicit reasons.
4. Score keep candidates with explicit reasons.
5. Use `lead-intelligence` plus research surfaces to rank expansion candidates.
6. Match the right channel:
   - X DM for warm, fast social touch points
   - LinkedIn message for professional graph adjacency
   - Apple Mail draft for higher-context intros or outreach
7. Run `brand-voice` before drafting messages.
8. Return a review pack before any apply step.

## Review Pack Format

```text
CONNECTIONS OPTIMIZER REPORT
============================

Mode:
Platforms:
Priority Set:

Prune Queue
- handle / profile
  reason:
  confidence:
  action:

Review Queue
- handle / profile
  reason:
  risk:

Keep / Protect
- handle / profile
  bridge value:

Add / Follow Targets
- person
  why now:
  warm path:
  preferred channel:

Drafts
- X DM:
- LinkedIn:
- Apple Mail:
```

## Outbound Rules

- Default email path is Apple Mail / Mail.app draft creation.
- Do not send automatically.
- Choose the channel based on warmth, relevance, and context depth.
- Do not force a DM when an email or no outreach is the right move.
- Drafts should sound like the user, not like automated sales copy.

## Related Skills

- `brand-voice` for the reusable voice profile
- `lead-intelligence` for weighted target and warm-path discovery
- `x-api` for X graph access, drafting, and optional apply flows
- `content-engine` when the user also wants public launch content around network moves
@@ -36,27 +36,24 @@ Before drafting, identify the source set:

- prior posts from the same author

If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.

## Voice Capture Workflow

Collect 5 to 20 examples when available. Good sources:
- articles or essays
- X posts or threads
- docs or release notes
- newsletters
- previous launch posts
Run `brand-voice` first when:

If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.
- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive

Extract and write down:
- sentence length and rhythm
- how compressed or explanatory the writing is
- whether capitalization is conventional, mixed, or situational
- how parentheses are used
- whether the writer uses fragments, lists, or abrupt pivots
- how often the writer asks questions
- how sharp, formal, opinionated, or dry the voice is
- what the writer never does
At minimum, produce a compact `VOICE PROFILE` covering:
- rhythm
- compression
- capitalization
- parenthetical use
- question use
- preferred moves
- banned moves

Do not start drafting until the voice profile is clear enough to enforce.
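
The profile can live as structured data so downstream skills reuse it instead of re-deriving style. A minimal sketch, where every value is a placeholder showing the shape rather than a real author's measured style:

```python
# Minimal VOICE PROFILE sketch. All values are placeholders illustrating
# the shape of the profile, not a real author's style.
VOICE_PROFILE = {
    "rhythm": "short declaratives, occasional long qualifying sentence",
    "compression": "high; cuts connective tissue",
    "capitalization": "conventional",
    "parenthetical_use": "qualification only, never jokes",
    "question_use": "rare",
    "preferred_moves": ["concrete specifics", "mechanisms", "receipts"],
    "banned_moves": ["fake curiosity hooks", "engagement-bait questions"],
}

REQUIRED_FIELDS = {
    "rhythm", "compression", "capitalization",
    "parenthetical_use", "question_use", "preferred_moves", "banned_moves",
}


def profile_is_enforceable(profile):
    """True when every required field is present and non-empty."""
    return all(profile.get(field) for field in REQUIRED_FIELDS)
```

A profile that fails this check is not clear enough to enforce, so drafting waits.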

@@ -148,3 +145,9 @@ Before delivering:

- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved

## Related Skills

- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output

@@ -37,6 +37,8 @@ Use `content-engine` first if the source still needs voice shaping.

### Step 2: Capture the Voice Fingerprint

Run `brand-voice` first if the source voice is not already captured in the current session.

Before adapting, note:
- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
@@ -110,5 +112,6 @@ Before delivering:

## Related Skills

- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows

@@ -21,7 +21,7 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va

### Required
- **Exa MCP** — Deep web search for people, companies, and signals (`web_search_exa`)
- **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, `X_ACCESS_TOKEN`)
- **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, plus write-context credentials such as `X_CONSUMER_KEY`, `X_CONSUMER_SECRET`, `X_ACCESS_TOKEN`, `X_ACCESS_TOKEN_SECRET`)

### Optional (enhance results)
- **LinkedIn** — Direct API if available, otherwise browser control for search, profile inspection, and drafting
@@ -43,36 +43,10 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va

Do not draft outbound from generic sales copy.

Before writing a message, build a voice profile from real source material. Prefer:

- recent X posts and threads
- published articles, memos, or launch notes
- prior outreach emails that actually worked
- docs, changelogs, or product writing if those are the strongest signals
Run `brand-voice` first whenever the user's voice matters. Reuse its `VOICE PROFILE` instead of re-deriving style ad hoc inside this skill.

If live X access is available, pull recent original posts before drafting. If not, use supplied examples or the best repo/site material available.

Extract:

- sentence length and rhythm
- how compressed or explanatory the writing is
- how parentheses are used
- whether capitalization is conventional or situational
- how often questions are used
- phrases or transitions the author never uses

For Affaan / ECC style specifically:

- direct, compressed, concrete
- strong preference for specifics, mechanisms, and receipts
- parentheticals are for qualification or over-clarification, not jokes
- lowercase is optional, not mandatory
- no fake curiosity hooks
- no "not X, just Y"
- no "no fluff"
- no LinkedIn thought-leader cadence
- no bait question at the end

## Stage 1: Signal Scoring

Search for high-signal people in target verticals. Assign a weight to each based on:
@@ -333,8 +307,8 @@ Users should set these environment variables:

export X_BEARER_TOKEN="..."
export X_ACCESS_TOKEN="..."
export X_ACCESS_TOKEN_SECRET="..."
export X_API_KEY="..."
export X_API_SECRET="..."
export X_CONSUMER_KEY="..."
export X_CONSUMER_SECRET="..."
export EXA_API_KEY="..."

# Optional
@@ -364,3 +338,8 @@ Agent workflow:

Output: Ranked list with warm paths, voice profile summary, and channel-specific outreach drafts or drafts-in-app
```

## Related Skills

- `brand-voice` for canonical voice capture
- `connections-optimizer` for review-first network pruning and expansion before outreach

89
skills/manim-video/SKILL.md
Normal file
@@ -0,0 +1,89 @@
---
name: manim-video
description: Build reusable Manim explainers for technical concepts, graphs, system diagrams, and product walkthroughs, then hand off to the wider ECC video stack if needed. Use when the user wants a clean animated explainer rather than a generic talking-head script.
origin: ECC
---

# Manim Video

Use Manim for technical explainers where motion, structure, and clarity matter more than photorealism.

## When to Activate

- the user wants a technical explainer animation
- the concept is a graph, workflow, architecture, metric progression, or system diagram
- the user wants a short product or launch explainer for X or a landing page
- the visual should feel precise instead of generically cinematic

## Tool Requirements

- `manim` CLI for scene rendering
- `ffmpeg` for post-processing if needed
- `video-editing` for final assembly or polish
- `remotion-video-creation` when the final package needs composited UI, captions, or additional motion layers

## Default Output

- short 16:9 MP4
- one thumbnail or poster frame
- storyboard plus scene plan

## Workflow

1. Define the core visual thesis in one sentence.
2. Break the concept into 3 to 6 scenes.
3. Decide what each scene proves.
4. Write the scene outline before writing Manim code.
5. Render the smallest working version first.
6. Tighten typography, spacing, color, and pacing after the render works.
7. Hand off to the wider video stack only if it adds value.

## Scene Planning Rules

- each scene should prove one thing
- avoid overstuffed diagrams
- prefer progressive reveal over full-screen clutter
- use motion to explain state change, not just to keep the screen busy
- title cards should be short and loaded with meaning

## Network Graph Default

For social-graph and network-optimization explainers:

- show the current graph before showing the optimized graph
- distinguish low-signal follow clutter from high-signal bridges
- highlight warm-path nodes and target clusters
- if useful, add a final scene showing the self-improvement lineage that informed the skill

## Render Conventions

- default to 16:9 landscape unless the user asks for vertical
- start with a low-quality smoke test render
- only push to higher quality after composition and timing are stable
- export one clean thumbnail frame that reads at social size

## Reusable Starter

Use [assets/network_graph_scene.py](assets/network_graph_scene.py) as a starting point for network-graph explainers.

Example smoke test:

```bash
manim -ql assets/network_graph_scene.py NetworkGraphExplainer
```

## Output Format

Return:

- core visual thesis
- storyboard
- scene outline
- render plan
- any follow-on polish recommendations

## Related Skills

- `video-editing` for final polish
- `remotion-video-creation` for motion-heavy post-processing or compositing
- `content-engine` when the animation is part of a broader launch
52
skills/manim-video/assets/network_graph_scene.py
Normal file
@@ -0,0 +1,52 @@
from manim import DOWN, LEFT, RIGHT, UP, Circle, Create, FadeIn, FadeOut, Scene, Text, VGroup, CurvedArrow


class NetworkGraphExplainer(Scene):
    def construct(self):
        title = Text("Connections Optimizer", font_size=40).to_edge(UP)
        subtitle = Text("Prune low-signal follows. Strengthen warm paths.", font_size=20).next_to(title, DOWN)

        you = Circle(radius=0.45, color="#4F8EF7").shift(LEFT * 4 + DOWN * 0.5)
        you_label = Text("You", font_size=22).move_to(you.get_center())

        stale_a = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.6 + UP * 1.2)
        stale_b = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.2 + DOWN * 1.4)
        bridge = Circle(radius=0.38, color="#21A179").shift(RIGHT * 0.2 + UP * 0.2)
        target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.2 + UP * 0.7)
        new_target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.0 + DOWN * 1.4)

        stale_a_label = Text("stale", font_size=18).move_to(stale_a.get_center())
        stale_b_label = Text("noise", font_size=18).move_to(stale_b.get_center())
        bridge_label = Text("bridge", font_size=18).move_to(bridge.get_center())
        target_label = Text("target", font_size=18).move_to(target.get_center())
        new_target_label = Text("add", font_size=18).move_to(new_target.get_center())

        edge_stale_a = CurvedArrow(you.get_right(), stale_a.get_left(), angle=0.2, color="#7A7A7A")
        edge_stale_b = CurvedArrow(you.get_right(), stale_b.get_left(), angle=-0.2, color="#7A7A7A")
        edge_bridge = CurvedArrow(you.get_right(), bridge.get_left(), angle=0.0, color="#21A179")
        edge_target = CurvedArrow(bridge.get_right(), target.get_left(), angle=0.1, color="#21A179")
        edge_new_target = CurvedArrow(bridge.get_right(), new_target.get_left(), angle=-0.12, color="#21A179")

        self.play(FadeIn(title), FadeIn(subtitle))
        self.play(
            Create(you),
            FadeIn(you_label),
            Create(stale_a),
            Create(stale_b),
            Create(bridge),
            Create(target),
            FadeIn(stale_a_label),
            FadeIn(stale_b_label),
            FadeIn(bridge_label),
            FadeIn(target_label),
        )
        self.play(Create(edge_stale_a), Create(edge_stale_b), Create(edge_bridge), Create(edge_target))

        optimize = Text("Optimize the graph", font_size=24).to_edge(DOWN)
        self.play(FadeIn(optimize))
        self.play(FadeOut(stale_a), FadeOut(stale_b), FadeOut(stale_a_label), FadeOut(stale_b_label), FadeOut(edge_stale_a), FadeOut(edge_stale_b))
        self.play(Create(new_target), FadeIn(new_target_label), Create(edge_new_target))

        final_group = VGroup(you, you_label, bridge, bridge_label, target, target_label, new_target, new_target_label)
        self.play(final_group.animate.shift(UP * 0.1))
        self.wait(1)
@@ -46,25 +46,27 @@ tweets = resp.json()

### OAuth 1.0a (User Context)

Required for: posting tweets, managing account, DMs.
Required for: posting tweets, managing account, DMs, and any write flow.

```bash
# Environment setup — source before use
export X_API_KEY="your-api-key"
export X_API_SECRET="your-api-secret"
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_SECRET="your-access-secret"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```

Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.

```python
import os
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    os.environ["X_API_KEY"],
    client_secret=os.environ["X_API_SECRET"],
    os.environ["X_CONSUMER_KEY"],
    client_secret=os.environ["X_CONSUMER_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_SECRET"],
    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```

@@ -125,6 +127,21 @@ resp = requests.get(
)
```

### Pull Recent Original Posts for Voice Modeling

```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    }
)
voice_samples = resp.json()
```

### Get User by Username

```python
@@ -196,13 +213,18 @@ else:

## Integration with Content Engine

Use `content-engine` skill to generate platform-native content, then post via X API:
1. Generate content with content-engine (X platform format)
2. Validate length (280 chars for single tweet)
3. Post via X API using patterns above
4. Track engagement via public_metrics
Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
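
The single-tweet length check in step 4 can be sketched in a few lines. Note the real X counter weights URLs and some characters differently, so a plain-character check like this is only an approximation:

```python
TWEET_LIMIT = 280  # single-tweet character budget


def validate_tweet(text):
    """Approximate length check; the real X counter treats URLs specially."""
    return 0 < len(text) <= TWEET_LIMIT
```

Anything over the limit is split into a thread or trimmed before the approval step.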

## Related Skills

- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

@@ -24,7 +24,7 @@ async function runTests() {
let store
try {
store = await import(pathToFileURL(storePath).href)
} catch (err) {
} catch (_err) {
console.log('\n[warn] Skipping: build .opencode first (cd .opencode && npm run build)\n')
process.exit(0)
}

@@ -7,6 +7,7 @@ const fs = require('fs');
const os = require('os');
const path = require('path');
const { spawnSync } = require('child_process');
const TOML = require('@iarna/toml');

const repoRoot = path.join(__dirname, '..', '..');
const installScript = path.join(repoRoot, 'scripts', 'codex', 'install-global-git-hooks.sh');
@@ -93,29 +94,16 @@ if (os.platform() === 'win32') {
else failed++;

if (
test('sync preserves baseline config and accepts the legacy context7 MCP section', () => {
test('sync installs the missing Codex baseline and accepts the legacy context7 MCP section', () => {
const homeDir = createTempDir('codex-sync-home-');
const codexDir = path.join(homeDir, '.codex');
const configPath = path.join(codexDir, 'config.toml');
const agentsPath = path.join(codexDir, 'AGENTS.md');
const config = [
'approval_policy = "on-request"',
'sandbox_mode = "workspace-write"',
'web_search = "live"',
'persistent_instructions = ""',
'',
'[features]',
'multi_agent = true',
'',
'[profiles.strict]',
'approval_policy = "on-request"',
'sandbox_mode = "read-only"',
'web_search = "cached"',
'',
'[profiles.yolo]',
'approval_policy = "never"',
'sandbox_mode = "workspace-write"',
'web_search = "live"',
'[agents]',
'explorer = { description = "Read-only codebase explorer for gathering evidence before changes are proposed." }',
'',
'[mcp_servers.context7]',
'command = "npx"',
@@ -147,13 +135,63 @@ if (
assert.match(syncedAgents, /^# Codex Supplement \(From ECC \.codex\/AGENTS\.md\)/m);

const syncedConfig = fs.readFileSync(configPath, 'utf8');
assert.match(syncedConfig, /^multi_agent\s*=\s*true$/m);
assert.match(syncedConfig, /^\[profiles\.strict\]$/m);
assert.match(syncedConfig, /^\[profiles\.yolo\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.github\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.memory\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.sequential-thinking\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.context7\]$/m);
const parsedConfig = TOML.parse(syncedConfig);
assert.strictEqual(parsedConfig.approval_policy, 'on-request');
assert.strictEqual(parsedConfig.sandbox_mode, 'workspace-write');
assert.strictEqual(parsedConfig.web_search, 'live');
assert.ok(!Object.prototype.hasOwnProperty.call(parsedConfig, 'multi_agent'));
assert.ok(parsedConfig.features);
assert.strictEqual(parsedConfig.features.multi_agent, true);
assert.ok(parsedConfig.profiles);
assert.strictEqual(parsedConfig.profiles.strict.approval_policy, 'on-request');
assert.strictEqual(parsedConfig.profiles.yolo.approval_policy, 'never');
assert.ok(parsedConfig.agents);
assert.strictEqual(parsedConfig.agents.max_threads, 6);
assert.strictEqual(parsedConfig.agents.max_depth, 1);
assert.strictEqual(parsedConfig.agents.explorer.config_file, 'agents/explorer.toml');
assert.strictEqual(parsedConfig.agents.reviewer.config_file, 'agents/reviewer.toml');
assert.strictEqual(parsedConfig.agents.docs_researcher.config_file, 'agents/docs-researcher.toml');
assert.ok(parsedConfig.mcp_servers.exa);
assert.ok(parsedConfig.mcp_servers.github);
assert.ok(parsedConfig.mcp_servers.memory);
assert.ok(parsedConfig.mcp_servers['sequential-thinking']);
assert.ok(parsedConfig.mcp_servers.context7);

for (const roleFile of ['explorer.toml', 'reviewer.toml', 'docs-researcher.toml']) {
assert.ok(fs.existsSync(path.join(codexDir, 'agents', roleFile)));
}
} finally {
cleanup(homeDir);
}
})
)
passed++;
else failed++;

if (
test('sync adds parent-table keys when the target only declares an implicit parent table', () => {
const homeDir = createTempDir('codex-sync-implicit-parent-home-');
const codexDir = path.join(homeDir, '.codex');
const configPath = path.join(codexDir, 'config.toml');
const config = [
'persistent_instructions = ""',
'',
'[agents.explorer]',
'description = "Read-only codebase explorer for gathering evidence before changes are proposed."',
'',
].join('\n');

try {
fs.mkdirSync(codexDir, { recursive: true });
fs.writeFileSync(configPath, config);

const syncResult = runBash(syncScript, [], makeHermeticCodexEnv(homeDir, codexDir));
assert.strictEqual(syncResult.status, 0, `${syncResult.stdout}\n${syncResult.stderr}`);

const parsedConfig = TOML.parse(fs.readFileSync(configPath, 'utf8'));
assert.strictEqual(parsedConfig.agents.max_threads, 6);
assert.strictEqual(parsedConfig.agents.max_depth, 1);
assert.strictEqual(parsedConfig.agents.explorer.config_file, 'agents/explorer.toml');
} finally {
cleanup(homeDir);
}

@@ -353,6 +353,45 @@ function runTests() {
}
})) passed++; else failed++;

if (test('resolves CLAUDE_PLUGIN_ROOT placeholders in installed claude hooks', () => {
const homeDir = createTempDir('install-apply-home-');
const projectDir = createTempDir('install-apply-project-');

try {
const result = run(['--profile', 'core'], { cwd: projectDir, homeDir });
assert.strictEqual(result.code, 0, result.stderr);

const claudeRoot = path.join(homeDir, '.claude');
const settings = readJson(path.join(claudeRoot, 'settings.json'));
const installedHooks = readJson(path.join(claudeRoot, 'hooks', 'hooks.json'));

const autoTmuxEntry = settings.hooks.PreToolUse.find(entry => entry.id === 'pre:bash:auto-tmux-dev');
assert.ok(autoTmuxEntry, 'settings.json should include the auto tmux hook');
assert.ok(
autoTmuxEntry.hooks[0].command.includes(path.join(claudeRoot, 'scripts', 'hooks', 'auto-tmux-dev.js')),
'settings.json should use the installed Claude root for hook commands'
);
assert.ok(
!autoTmuxEntry.hooks[0].command.includes('${CLAUDE_PLUGIN_ROOT}'),
'settings.json should not retain CLAUDE_PLUGIN_ROOT placeholders after install'
);

const installedAutoTmuxEntry = installedHooks.hooks.PreToolUse.find(entry => entry.id === 'pre:bash:auto-tmux-dev');
assert.ok(installedAutoTmuxEntry, 'hooks/hooks.json should include the auto tmux hook');
assert.ok(
installedAutoTmuxEntry.hooks[0].command.includes(path.join(claudeRoot, 'scripts', 'hooks', 'auto-tmux-dev.js')),
'hooks/hooks.json should use the installed Claude root for hook commands'
);
assert.ok(
!installedAutoTmuxEntry.hooks[0].command.includes('${CLAUDE_PLUGIN_ROOT}'),
'hooks/hooks.json should not retain CLAUDE_PLUGIN_ROOT placeholders after install'
);
} finally {
cleanup(homeDir);
cleanup(projectDir);
}
})) passed++; else failed++;

if (test('preserves existing settings fields and hook entries when merging hooks', () => {
const homeDir = createTempDir('install-apply-home-');
const projectDir = createTempDir('install-apply-project-');

@@ -26,6 +26,7 @@ function runHook(mode, payload, homeDir) {
env: {
...process.env,
HOME: homeDir,
USERPROFILE: homeDir,
},
});
}

@@ -74,6 +74,15 @@ function runTests() {
assert.ok(!source.includes('run_or_echo cp -R "$skill_dir" "$dest"'), 'skill sync cp should be removed');
})) passed++; else failed++;

if (test('sync script avoids GNU-only grep -P parsing', () => {
assert.ok(!source.includes('grep -oP'), 'sync-ecc-to-codex.sh should remain portable across BSD and GNU environments');
})) passed++; else failed++;

if (test('extract_context7_key uses a portable parser', () => {
assert.ok(source.includes('extract_context7_key() {'), 'Expected extract_context7_key helper');
assert.ok(source.includes('node - "$file"'), 'extract_context7_key should use Node-based parsing');
})) passed++; else failed++;

console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}

@@ -29,7 +29,7 @@ function runInstall(options = {}) {
},
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 20000,
timeout: 60000,
});
}

@@ -43,7 +43,7 @@ function runUninstall(options = {}) {
encoding: 'utf8',
input: options.input || 'y\n',
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 20000,
timeout: 60000,
});
}