Compare commits

..

20 Commits

Author SHA1 Message Date
ecc-tools[bot]
32ec196022 feat: add everything-claude-code-conventions ECC bundle (.claude/commands/add-new-skill.md) 2026-04-02 13:44:18 +00:00
ecc-tools[bot]
e974e598cb feat: add everything-claude-code-conventions ECC bundle (.claude/commands/refactoring.md) 2026-04-02 13:44:17 +00:00
ecc-tools[bot]
a9eaa717c0 feat: add everything-claude-code-conventions ECC bundle (.claude/commands/feature-development.md) 2026-04-02 13:44:16 +00:00
ecc-tools[bot]
e426ff3cb1 feat: add everything-claude-code-conventions ECC bundle (.claude/enterprise/controls.md) 2026-04-02 13:44:15 +00:00
ecc-tools[bot]
423f80fe66 feat: add everything-claude-code-conventions ECC bundle (.claude/team/everything-claude-code-team-config.json) 2026-04-02 13:44:14 +00:00
ecc-tools[bot]
a329a8069f feat: add everything-claude-code-conventions ECC bundle (.claude/research/everything-claude-code-research-playbook.md) 2026-04-02 13:44:13 +00:00
ecc-tools[bot]
b1103a45ae feat: add everything-claude-code-conventions ECC bundle (.claude/rules/everything-claude-code-guardrails.md) 2026-04-02 13:44:13 +00:00
ecc-tools[bot]
b174e8753f feat: add everything-claude-code-conventions ECC bundle (.codex/agents/docs-researcher.toml) 2026-04-02 13:44:11 +00:00
ecc-tools[bot]
71c882e446 feat: add everything-claude-code-conventions ECC bundle (.codex/agents/reviewer.toml) 2026-04-02 13:44:10 +00:00
ecc-tools[bot]
d1bdaadd69 feat: add everything-claude-code-conventions ECC bundle (.codex/agents/explorer.toml) 2026-04-02 13:44:09 +00:00
ecc-tools[bot]
d76b775462 feat: add everything-claude-code-conventions ECC bundle (.claude/identity.json) 2026-04-02 13:44:08 +00:00
ecc-tools[bot]
6b9860be1b feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/agents/openai.yaml) 2026-04-02 13:44:07 +00:00
ecc-tools[bot]
bdf76f66f7 feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/SKILL.md) 2026-04-02 13:44:06 +00:00
ecc-tools[bot]
cc45228eda feat: add everything-claude-code-conventions ECC bundle (.claude/skills/everything-claude-code/SKILL.md) 2026-04-02 13:44:05 +00:00
ecc-tools[bot]
b7c5651f6f feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json) 2026-04-02 13:44:04 +00:00
Affaan Mustafa
bf3fd69d2c refactor: extract social graph ranking core 2026-04-02 02:51:24 -07:00
Affaan Mustafa
31525854b5 feat(skills): add brand voice and network ops lanes 2026-04-01 19:41:03 -07:00
Affaan Mustafa
8f63697113 fix: port safe ci cleanup from backlog 2026-04-01 16:09:54 -07:00
Affaan Mustafa
9a6080f2e1 feat: sync the codex baseline and agent roles 2026-04-01 16:08:03 -07:00
Affaan Mustafa
dba5ae779b fix: harden install and codex sync portability 2026-04-01 16:04:56 -07:00
48 changed files with 2389 additions and 522 deletions

View File

@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---
# Brand Voice
Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.
## When to Activate
- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry
## Source Priority
Use the strongest real source set available, in this order:
1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy
Do not use generic platform exemplars as source material.
## Collection Workflow
1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
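The sampling rule in steps 1 and 2 can be sketched as follows. This is a minimal JavaScript illustration, not part of the skill; the function name and the `{ text, createdAt }` sample shape are assumptions.

```javascript
// Pick 5–20 voice samples, preferring recent material (steps 1–2 above).
// Each sample is assumed to look like { text, createdAt }.
function selectVoiceSamples(samples, { min = 5, max = 20 } = {}) {
  const sorted = [...samples].sort(
    (a, b) => new Date(b.createdAt) - new Date(a.createdAt)
  );
  if (sorted.length < min) {
    // Too few samples: use everything and flag low confidence.
    return { samples: sorted, confidence: 'low' };
  }
  return { samples: sorted.slice(0, max), confidence: 'normal' };
}
```

If fewer than five samples exist, the sketch keeps everything and flags low confidence rather than refusing to proceed.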
## What to Extract
- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
## Output Contract
Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).
Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults
If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:
- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over
## Hard Bans
Delete and rewrite any of these:
- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals
## Persistence Rules
- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.
## Downstream Use
Use this skill before or inside:
- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email
If another skill already has a partial voice capture section, this skill is the canonical source of truth.

View File

@@ -0,0 +1,55 @@
# Voice Profile Schema
Use this exact structure when building a reusable voice profile:
```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:
Source Set
- source 1
- source 2
- source 3
Rhythm
- short note on sentence length, pacing, and fragmentation
Compression
- how dense or explanatory the writing is
Capitalization
- conventional, mixed, or situational
Parentheticals
- how they are used and how they are not used
Question Use
- rare, frequent, rhetorical, direct, or mostly absent
Claim Style
- how claims are framed, supported, and sharpened
Preferred Moves
- concrete moves the author does use
Banned Moves
- specific patterns the author does not use
CTA Rules
- how, when, or whether to close with asks
Channel Notes
- X:
- LinkedIn:
- Email:
```
Guidelines:
- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
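A quick structural check against this schema can be sketched as a section scan. The section list is taken from the schema above; the function itself is a hypothetical helper, not shipped tooling.

```javascript
// Section headings required by the VOICE PROFILE schema above.
const REQUIRED_SECTIONS = [
  'Source Set', 'Rhythm', 'Compression', 'Capitalization',
  'Parentheticals', 'Question Use', 'Claim Style',
  'Preferred Moves', 'Banned Moves', 'CTA Rules', 'Channel Notes',
];

// Returns the schema sections missing from a generated profile block.
function missingSections(profileText) {
  return REQUIRED_SECTIONS.filter(
    (section) => !profileText.includes(section)
  );
}
```

A profile that comes back with an empty missing-section list at least has the right shape; whether each bullet is source-backed still needs human review.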

View File

@@ -36,27 +36,24 @@ Before drafting, identify the source set:
- prior posts from the same author
If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
## Voice Capture Workflow
Collect 5 to 20 examples when available. Good sources:
- articles or essays
- X posts or threads
- docs or release notes
- newsletters
- previous launch posts
Run `brand-voice` first when:
- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive
If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.
Extract and write down:
- sentence length and rhythm
- how compressed or explanatory the writing is
- whether capitalization is conventional, mixed, or situational
- how parentheses are used
- whether the writer uses fragments, lists, or abrupt pivots
- how often the writer asks questions
- how sharp, formal, opinionated, or dry the voice is
- what the writer never does
At minimum, produce a compact `VOICE PROFILE` covering:
- rhythm
- compression
- capitalization
- parenthetical use
- question use
- preferred moves
- banned moves
Do not start drafting until the voice profile is clear enough to enforce.
@@ -148,3 +145,9 @@ Before delivering:
- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved
## Related Skills
- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output

View File

@@ -37,6 +37,8 @@ Use `content-engine` first if the source still needs voice shaping.
### Step 2: Capture the Voice Fingerprint
Run `brand-voice` first if the source voice is not already captured in the current session.
Before adapting, note:
- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
@@ -110,5 +112,6 @@ Before delivering:
## Related Skills
- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows

View File

@@ -1,175 +1,452 @@
```markdown
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-02
## Overview
This skill teaches the core development patterns, coding conventions, and common workflows used in the `everything-claude-code` repository. The codebase is primarily JavaScript, with no major framework, and is organized around modular skills, agent definitions, command workflows, and extensible install targets. The repository emphasizes clear documentation, conventional commits, and maintainable architecture for agent and workflow development.
## Coding Conventions
## Tech Stack
- **File Naming:**
Use `camelCase` for JavaScript files and directories.
*Example:*
```
scripts/lib/installTargets.js
agentPrompts.md
```
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
- **Import Style:**
Use **relative imports** for modules.
*Example:*
```js
const installTarget = require('../lib/installTargets');
```
## When to Use This Skill
- **Export Style:**
Both CommonJS (`module.exports`) and ES module (`export`) styles are present.
*Example (CommonJS):*
```js
module.exports = function mySkill() { ... };
```
*Example (ES Module):*
```js
export function mySkill() { ... }
```
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
- **Commit Messages:**
Use [Conventional Commits](https://www.conventionalcommits.org/).
Prefixes: `fix`, `feat`, `docs`, `chore`
*Example:*
```
feat: add Gemini install target support
fix: correct agent prompt registration logic
```
## Commit Conventions
- **Documentation:**
Each skill or agent should have a `SKILL.md` or `.md` documentation file explaining its purpose and usage.
Follow these commit message conventions based on 500 analyzed commits.
## Workflows
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `feat`
- `docs`
- `chore`
### Message Guidelines
- Average message length: ~56 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
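The guidelines above can be sketched as a first-line check. The prefix list mirrors the prefixes documented here plus `refactor`, which appears in the example commits; the regex and the 72-character cap are common conventions, not rules this document states.

```javascript
const PREFIXES = ['fix', 'feat', 'docs', 'chore', 'refactor'];

// Validate the first line of a commit message: known prefix,
// optional (scope), and a concise subject.
function checkCommitMessage(message) {
  const firstLine = message.split('\n')[0];
  const match = firstLine.match(/^([a-z]+)(\([^)]+\))?: (.+)$/);
  if (!match || !PREFIXES.includes(match[1])) {
    return { ok: false, reason: 'missing conventional prefix' };
  }
  if (firstLine.length > 72) {
    return { ok: false, reason: 'first line too long' };
  }
  return { ok: true };
}
```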
*Commit message example*
```text
feat(skills): add argus-dispatch — multi-model task dispatcher
```
*Commit message example*
```text
refactor: extract social graph ranking core
```
*Commit message example*
```text
fix: port safe ci cleanup from backlog
```
*Commit message example*
```text
docs: close bundle drift and sync plugin guidance
```
*Commit message example*
```text
chore: ignore local orchestration artifacts
```
*Commit message example*
```text
feat(skills): add brand voice and network ops lanes
```
*Commit message example*
```text
feat: sync the codex baseline and agent roles
```
*Commit message example*
```text
fix: harden install and codex sync portability
```
## Architecture
### Project Structure: Single Package
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style
### Language: JavaScript
### Naming Conventions
| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |
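The table compresses into a one-glance example (all names here are illustrative, not real repository identifiers):

```javascript
// file: scripts/lib/installTargets.js  (files: camelCase)
const MAX_RETRIES = 3;               // constants: SCREAMING_SNAKE_CASE
class InstallTarget {}               // classes: PascalCase
function resolveInstallTarget() {}   // functions: camelCase
```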
### Import Style: Relative Imports
### Export Style: Mixed Style
*Preferred import style*
```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```typescript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
## Common Workflows
These workflows were detected from analyzing commit patterns.
### Feature Development
Standard feature implementation workflow
**Frequency**: ~14 times per month
**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation
**Files typically involved**:
- `.opencode/*`
- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`
- `**/api/**`
**Example commit sequence**:
```
feat(team-builder): use `claude agents` command for agent discovery (#1021)
fix: extract inline SessionStart bootstrap to separate file (#1035)
feat: add hexagonal architecture SKILL. (#1034)
```
### Refactoring
Code refactoring and cleanup workflow
**Frequency**: ~3 times per month
**Steps**:
1. Ensure tests pass before refactor
2. Refactor code structure
3. Verify tests still pass
**Files typically involved**:
- `src/**/*`
**Example commit sequence**:
```
refactor: collapse legacy command bodies into skills
feat: add connected operator workflow skills
feat: expand lead intelligence outreach channels
```
### Add New Skill
**Trigger:** When introducing a new skill for agents or workflows
**Command:** `/add-skill`
1. Create a new `SKILL.md` file under `skills/`, `.agents/skills/`, or `.claude/skills/`.
2. Document the skill's purpose, usage, and configuration.
3. Optionally update manifests or documentation if the skill is significant.
Adds a new AI agent skill to the codebase, including documentation and registration.
**Frequency**: ~3 times per month
**Steps**:
1. Create a new SKILL.md file in skills/<skill-name>/ or .agents/skills/<skill-name>/
2. Optionally add supporting scripts or references under the skill directory
3. Update AGENTS.md and/or README.md to document the new skill
4. Update docs/zh-CN/AGENTS.md and docs/zh-CN/README.md for Chinese documentation
5. Update manifests/install-modules.json or install-components.json to register the skill
**Files typically involved**:
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`
- `docs/zh-CN/README.md`
- `manifests/install-modules.json`
- `manifests/install-components.json`
### Add New Agent Or Pipeline
Adds a new agent or multi-agent workflow pipeline, including agent definitions and orchestration skills.
**Frequency**: ~2 times per month
**Steps**:
1. Create one or more agent definition files in agents/
2. Create or update a SKILL.md in skills/<pipeline-name>/ to orchestrate or document the workflow
3. Optionally add supporting commands, scripts, or examples
4. Update AGENTS.md and/or README.md to document the new pipeline
**Files typically involved**:
- `agents/*.md`
- `skills/*/SKILL.md`
- `commands/*.md`
- `scripts/*.sh`
- `examples/*/README.md`
- `AGENTS.md`
- `README.md`
### Add Or Update Command Workflow
Adds or extends a CLI command for agent workflows, often with review feedback iterations.
**Frequency**: ~2 times per month
**Steps**:
1. Create or update one or more command markdown files in commands/
2. Iterate with fixes based on review feedback (improving YAML frontmatter, usage, output, etc.)
3. Optionally update AGENTS.md or documentation to reference the new command
**Files typically involved**:
- `commands/*.md`
- `AGENTS.md`
- `README.md`
### Add Install Target Or Adapter
Adds support for a new install target (e.g., Gemini, CodeBuddy), including scripts, schemas, and manifest updates.
**Frequency**: ~2 times per month
**Steps**:
1. Add a new directory for the install target (e.g., .gemini/, .codebuddy/)
2. Add install/uninstall scripts and README(s)
3. Update schemas/ecc-install-config.schema.json and/or install-modules.schema.json
4. Update manifests/install-modules.json
5. Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>.js
6. Add or update tests for the new install target
**Files typically involved**:
- `.<target>/*`
- `schemas/ecc-install-config.schema.json`
- `schemas/install-modules.schema.json`
- `manifests/install-modules.json`
- `scripts/lib/install-manifests.js`
- `scripts/lib/install-targets/*.js`
- `tests/lib/install-targets.test.js`
### Update Hooks Or Automation
Refactors or fixes hooks and automation scripts for CI, formatting, or agent workflow integration.
**Frequency**: ~3 times per month
**Steps**:
1. Edit hooks/hooks.json to update hook configuration
2. Edit or add scripts/hooks/*.js or scripts/hooks/*.sh for hook logic
3. Update or add tests for the hooks in tests/hooks/*.test.js
4. Optionally update related scripts or documentation
**Files typically involved**:
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `scripts/hooks/*.sh`
- `tests/hooks/*.test.js`
### Documentation Sync And Localization
Updates documentation and ensures Chinese and English docs are in sync, including AGENTS.md and README files.
**Frequency**: ~3 times per month
**Steps**:
1. Edit AGENTS.md, README.md, README.zh-CN.md
2. Edit docs/zh-CN/AGENTS.md, docs/zh-CN/README.md
3. Optionally update skills/*/SKILL.md and .agents/skills/*/SKILL.md for doc improvements
4. Edit WORKING-CONTEXT.md or the-shortform-guide.md as needed
**Files typically involved**:
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`
- `docs/zh-CN/README.md`
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `WORKING-CONTEXT.md`
- `the-shortform-guide.md`
### Dependency Or Ci Update
Updates dependencies or CI workflow files, often via automated bots like dependabot.
**Frequency**: ~4 times per month
**Steps**:
1. Edit .github/workflows/*.yml to update actions or workflow steps
2. Edit package.json, yarn.lock, or package-lock.json for dependency updates
3. Optionally update related scripts or lockfiles
**Files typically involved**:
- `.github/workflows/*.yml`
- `package.json`
- `yarn.lock`
- `package-lock.json`
## Best Practices
Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---
### Add New Agent or Agent Prompt
**Trigger:** When introducing a new agent persona or capability
**Command:** `/add-agent`
1. Create a new agent definition (e.g., `agents/myAgent.md` or `.opencode/prompts/agents/myAgent.txt`).
2. Register the agent in the relevant configuration file (e.g., `.opencode/opencode.json`).
3. Update `AGENTS.md` or related documentation.
*Example:*
```
agents/supportAgent.md
```
---
### Add or Update Command Workflow
**Trigger:** When introducing or improving a workflow command
**Command:** `/add-command`
1. Create or update a command file under `commands/` (e.g., `commands/review.md`).
2. Optionally update related documentation or scripts.
3. Address review feedback and refine command logic.
---
### Add Install Target or Adapter
**Trigger:** When supporting a new platform or integration
**Command:** `/add-install-target`
1. Add new install scripts and documentation under a dot-directory (e.g., `.gemini/`, `.codebuddy/`).
2. Update `manifests/install-modules.json` and relevant schema files.
3. Implement or update `scripts/lib/install-targets/*.js` for the new target.
4. Add or update tests for the install target.
*Example:*
```
.gemini/install.sh
scripts/lib/install-targets/gemini.js
tests/lib/install-targets.test.js
```
---
### Refactor or Collapse Commands to Skills
**Trigger:** When modernizing command logic under the skills architecture
**Command:** `/refactor-to-skill`
1. Update or remove `commands/*.md` files.
2. Add or update `skills/*/SKILL.md` files.
3. Update documentation (`README.md`, `AGENTS.md`, `WORKING-CONTEXT.md`, etc.).
4. Update manifests if required.
---
### Documentation and Guidance Update
**Trigger:** When clarifying, updating, or adding documentation
**Command:** `/update-docs`
1. Edit or create documentation files (`README.md`, `WORKING-CONTEXT.md`, `docs/*.md`, etc.).
2. Sync or update guidance in multiple language versions if needed.
3. Optionally update related configuration or manifest files.
---
### CI/CD Workflow Update
**Trigger:** When improving or fixing CI/CD processes
**Command:** `/update-ci`
1. Edit `.github/workflows/*.yml` files.
2. Update lockfiles (`package-lock.json`, `yarn.lock`) or validation scripts.
3. Update or add test files for CI/CD logic.
---
## Testing Patterns
- **Test Files:**
Test files follow the pattern `*.test.js`.
- **Framework:**
No specific testing framework detected; structure your tests in standard Node.js or your preferred test runner.
*Example:*
```
tests/lib/install-targets.test.js
```
- **Test Example:**
```js
// install-targets.test.js — Jest-style example; adapt to the runner in use
const installTarget = require('../../scripts/lib/install-targets/gemini');

test('should install Gemini target', () => {
  expect(installTarget()).toBe(true);
});
```
## Commands
| Command | Purpose |
|--------------------|----------------------------------------------------------------|
| /add-skill | Add a new skill module and document it |
| /add-agent | Add a new agent definition or prompt |
| /add-command | Add or update a command workflow |
| /add-install-target| Add a new install target or adapter |
| /refactor-to-skill | Refactor commands into skill-based modules |
| /update-docs | Update documentation or guidance files |
| /update-ci | Update CI/CD workflows, lockfiles, or validation scripts |
```
*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*

View File

@@ -19,7 +19,7 @@ Programmatic interaction with X (Twitter) for posting, reading, searching, and a
## Authentication
### OAuth 2.0 Bearer Token (App-Only)
Best for: read-heavy operations, search, public data.
@@ -46,25 +46,27 @@ tweets = resp.json()
### OAuth 1.0a (User Context)
Required for: posting tweets, managing account, DMs, and any write flow.
```bash
# Environment setup — source before use
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```
Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.
```python
import os
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    os.environ["X_CONSUMER_KEY"],
    client_secret=os.environ["X_CONSUMER_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```
@@ -92,7 +94,6 @@ def post_thread(oauth, tweets: list[str]) -> list[str]:
        if reply_to:
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id
@@ -126,6 +127,21 @@ resp = requests.get(
)
```
### Pull Recent Original Posts for Voice Modeling
```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    },
)
voice_samples = resp.json()
```
### Get User by Username
```python
@@ -155,17 +171,12 @@ resp = oauth.post(
)
```
## Rate Limits
| Endpoint | Limit | Window |
|----------|-------|--------|
| POST /2/tweets | 200 | 15 min |
| GET /2/tweets/search/recent | 450 | 15 min |
| GET /2/users/:id/tweets | 1500 | 15 min |
| GET /2/users/by/username | 300 | 15 min |
| POST media/upload | 415 | 15 min |
Always check `x-rate-limit-remaining` and `x-rate-limit-reset` headers.
X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:
- Check the current X developer docs before hardcoding assumptions
- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
- Back off automatically instead of relying on static tables in code
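That backoff rule can be sketched as follows. The header names are the real X API rate-limit headers; the helper itself is a hypothetical JavaScript sketch, shown for illustration alongside the Python examples.

```javascript
// Decide how long to sleep before the next request, based on the
// x-rate-limit-* headers of the previous response.
function backoffMs(headers, now = Date.now()) {
  const remaining = Number(headers['x-rate-limit-remaining']);
  const resetEpoch = Number(headers['x-rate-limit-reset']); // seconds
  if (Number.isNaN(remaining) || remaining > 0) return 0; // budget left
  return Math.max(0, resetEpoch * 1000 - now); // wait until the window resets
}
```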
@@ -202,13 +213,18 @@ else:
## Integration with Content Engine
Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
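The approval gate in steps 5 and 6 reduces to a small decision function. Names are hypothetical; in practice the agent, not library code, enforces this.

```javascript
// Only publish when the user explicitly approved posting.
function nextAction(draft, userApproved) {
  if (!draft || draft.trim().length === 0) {
    throw new Error('nothing to post');
  }
  return userApproved
    ? { action: 'post', text: draft }
    : { action: 'return-for-approval', text: draft };
}
```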
## Related Skills
- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

View File

@@ -10,13 +10,16 @@ Use this workflow when working on **add-new-skill** in `everything-claude-code`.
## Goal
Adds a new AI agent skill to the codebase, including documentation and registration.
## Common Files
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`
## Suggested Sequence
@@ -27,9 +30,11 @@ Adds a new skill to the codebase, typically as a new agent capability or workflo
## Typical Commit Signals
- Create a new SKILL.md file in skills/<skill-name>/ or .agents/skills/<skill-name>/
- Optionally add supporting scripts or references under the skill directory
- Update AGENTS.md and/or README.md to document the new skill
- Update docs/zh-CN/AGENTS.md and docs/zh-CN/README.md for Chinese documentation
- Update manifests/install-modules.json or install-components.json to register the skill
## Notes

View File

@@ -18,6 +18,7 @@ Standard feature implementation workflow
- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`
- `**/api/**`
## Suggested Sequence


@@ -2,7 +2,7 @@
"version": "1.3",
"schemaVersion": "1.0",
"generatedBy": "ecc-tools",
"generatedAt": "2026-04-01T22:51:43.152Z",
"generatedAt": "2026-04-02T13:23:38.785Z",
"repo": "https://github.com/affaan-m/everything-claude-code",
"profiles": {
"requested": "full",


@@ -10,5 +10,5 @@
"javascript"
],
"suggestedBy": "ecc-tools-repo-analysis",
"createdAt": "2026-04-01T22:52:48.730Z"
"createdAt": "2026-04-02T13:44:01.125Z"
}


@@ -26,7 +26,7 @@ Generated by ECC Tools from repository history. Review before treating it as a h
- feature-development: Standard feature implementation workflow
- refactoring: Code refactoring and cleanup workflow
- add-new-skill: Adds a new skill to the codebase, typically as a new agent capability or workflow module.
- add-new-skill: Adds a new AI agent skill to the codebase, including documentation and registration.
## Review Reminder


@@ -1,175 +1,452 @@
```markdown
# everything-claude-code Development Patterns
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---
> Auto-generated skill from repository analysis
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-02
## Overview
This skill teaches the core development patterns, coding conventions, and common workflows used in the `everything-claude-code` repository. The codebase is primarily JavaScript, with no major framework, and is organized around modular skills, agent definitions, command workflows, and extensible install targets. The repository emphasizes clear documentation, conventional commits, and maintainable architecture for agent and workflow development.
This skill teaches Claude the development patterns and conventions used in everything-claude-code.
## Coding Conventions
## Tech Stack
- **File Naming:**
Use `camelCase` for JavaScript files and directories.
*Example:*
```
scripts/lib/installTargets.js
agentPrompts.md
```
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
- **Import Style:**
Use **relative imports** for modules.
*Example:*
```js
const installTarget = require('../lib/installTargets');
```
## When to Use This Skill
- **Export Style:**
Both CommonJS (`module.exports`) and ES module (`export`) styles are present.
*Example (CommonJS):*
```js
module.exports = function mySkill() { ... };
```
*Example (ES Module):*
```js
export function mySkill() { ... }
```
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
- **Commit Messages:**
Use [Conventional Commits](https://www.conventionalcommits.org/).
Prefixes: `fix`, `feat`, `docs`, `chore`
*Example:*
```
feat: add Gemini install target support
fix: correct agent prompt registration logic
```
## Commit Conventions
- **Documentation:**
Each skill or agent should have a `SKILL.md` or `.md` documentation file explaining its purpose and usage.
Follow these commit message conventions based on 500 analyzed commits.
## Workflows
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `feat`
- `docs`
- `chore`
### Message Guidelines
- Average message length: ~56 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
*Commit message example*
```text
feat(skills): add argus-dispatch — multi-model task dispatcher
```
*Commit message example*
```text
refactor: extract social graph ranking core
```
*Commit message example*
```text
fix: port safe ci cleanup from backlog
```
*Commit message example*
```text
docs: close bundle drift and sync plugin guidance
```
*Commit message example*
```text
chore: ignore local orchestration artifacts
```
*Commit message example*
```text
feat(skills): add brand voice and network ops lanes
```
*Commit message example*
```text
feat: sync the codex baseline and agent roles
```
*Commit message example*
```text
fix: harden install and codex sync portability
```
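As a quick sanity check, the prefix convention above can be enforced with a tiny helper. This is a hypothetical sketch, not part of the repository's tooling: the prefix list comes from this document plus `refactor`, which appears in the example commits.

```javascript
// Hypothetical conventional-commit header check for this repo's prefixes
// (fix, feat, docs, chore from the doc, plus refactor seen in examples).
const PREFIXES = ['feat', 'fix', 'docs', 'chore', 'refactor'];
const HEADER_RE = new RegExp(`^(${PREFIXES.join('|')})(\\([\\w-]+\\))?: .+`);

function isConventional(message) {
  // Only the first line of the message is checked.
  return HEADER_RE.test(message.split('\n')[0]);
}

module.exports = { isConventional };
```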
## Architecture
### Project Structure: Single Package
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style
### Language: JavaScript
### Naming Conventions
| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |
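The table above can be illustrated with a short, hypothetical module (all names here are illustrative, not real repo code):

```javascript
// installTargets.js — file name in camelCase
const MAX_RETRIES = 3; // constant in SCREAMING_SNAKE_CASE

// class in PascalCase
class InstallTarget {
  constructor(name) {
    this.name = name;
  }
}

// function in camelCase
function resolveTarget(name) {
  return new InstallTarget(name);
}

module.exports = { InstallTarget, resolveTarget, MAX_RETRIES };
```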
### Import Style: Relative Imports
### Export Style: Mixed Style
*Preferred import style*
```javascript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```javascript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
## Common Workflows
These workflows were detected from analyzing commit patterns.
### Feature Development
Standard feature implementation workflow
**Frequency**: ~14 times per month
**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation
**Files typically involved**:
- `.opencode/*`
- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`
- `**/api/**`
**Example commit sequence**:
```
feat(team-builder): use `claude agents` command for agent discovery (#1021)
fix: extract inline SessionStart bootstrap to separate file (#1035)
feat: add hexagonal architecture SKILL. (#1034)
```
### Refactoring
Code refactoring and cleanup workflow
**Frequency**: ~3 times per month
**Steps**:
1. Ensure tests pass before refactor
2. Refactor code structure
3. Verify tests still pass
**Files typically involved**:
- `src/**/*`
**Example commit sequence**:
```
refactor: collapse legacy command bodies into skills
feat: add connected operator workflow skills
feat: expand lead intelligence outreach channels
```
### Add New Skill
**Trigger:** When introducing a new skill for agents or workflows
**Command:** `/add-skill`
1. Create a new `SKILL.md` file under `skills/`, `.agents/skills/`, or `.claude/skills/`.
2. Document the skill's purpose, usage, and configuration.
3. Optionally update manifests or documentation if the skill is significant.
Adds a new AI agent skill to the codebase, including documentation and registration.
*Example:*
**Frequency**: ~3 times per month
**Steps**:
1. Create a new SKILL.md file in skills/<skill-name>/ or .agents/skills/<skill-name>/
2. Optionally add supporting scripts or references under the skill directory
3. Update AGENTS.md and/or README.md to document the new skill
4. Update docs/zh-CN/AGENTS.md and docs/zh-CN/README.md for Chinese documentation
5. Update manifests/install-modules.json or install-components.json to register the skill
**Files typically involved**:
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`
- `docs/zh-CN/README.md`
- `manifests/install-modules.json`
- `manifests/install-components.json`
**Example commit sequence**:
```
skills/myNewSkill/SKILL.md
Create a new SKILL.md file in skills/<skill-name>/ or .agents/skills/<skill-name>/
Optionally add supporting scripts or references under the skill directory
Update AGENTS.md and/or README.md to document the new skill
Update docs/zh-CN/AGENTS.md and docs/zh-CN/README.md for Chinese documentation
Update manifests/install-modules.json or install-components.json to register the skill
```
### Add New Agent Or Pipeline
Adds a new agent or multi-agent workflow pipeline, including agent definitions and orchestration skills.
**Frequency**: ~2 times per month
**Steps**:
1. Create one or more agent definition files in agents/
2. Create or update a SKILL.md in skills/<pipeline-name>/ to orchestrate or document the workflow
3. Optionally add supporting commands, scripts, or examples
4. Update AGENTS.md and/or README.md to document the new pipeline
**Files typically involved**:
- `agents/*.md`
- `skills/*/SKILL.md`
- `commands/*.md`
- `scripts/*.sh`
- `examples/*/README.md`
- `AGENTS.md`
- `README.md`
**Example commit sequence**:
```
Create one or more agent definition files in agents/
Create or update a SKILL.md in skills/<pipeline-name>/ to orchestrate or document the workflow
Optionally add supporting commands, scripts, or examples
Update AGENTS.md and/or README.md to document the new pipeline
```
### Add Or Update Command Workflow
Adds or extends a CLI command for agent workflows, often with review feedback iterations.
**Frequency**: ~2 times per month
**Steps**:
1. Create or update one or more command markdown files in commands/
2. Iterate with fixes based on review feedback (improving YAML frontmatter, usage, output, etc.)
3. Optionally update AGENTS.md or documentation to reference the new command
**Files typically involved**:
- `commands/*.md`
- `AGENTS.md`
- `README.md`
**Example commit sequence**:
```
Create or update one or more command markdown files in commands/
Iterate with fixes based on review feedback (improving YAML frontmatter, usage, output, etc.)
Optionally update AGENTS.md or documentation to reference the new command
```
### Add Install Target Or Adapter
Adds support for a new install target (e.g., Gemini, CodeBuddy), including scripts, schemas, and manifest updates.
**Frequency**: ~2 times per month
**Steps**:
1. Add a new directory for the install target (e.g., .gemini/, .codebuddy/)
2. Add install/uninstall scripts and README(s)
3. Update schemas/ecc-install-config.schema.json and/or install-modules.schema.json
4. Update manifests/install-modules.json
5. Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>.js
6. Add or update tests for the new install target
**Files typically involved**:
- `.<target>/*`
- `schemas/ecc-install-config.schema.json`
- `schemas/install-modules.schema.json`
- `manifests/install-modules.json`
- `scripts/lib/install-manifests.js`
- `scripts/lib/install-targets/*.js`
- `tests/lib/install-targets.test.js`
**Example commit sequence**:
```
Add a new directory for the install target (e.g., .gemini/, .codebuddy/)
Add install/uninstall scripts and README(s)
Update schemas/ecc-install-config.schema.json and/or install-modules.schema.json
Update manifests/install-modules.json
Update scripts/lib/install-manifests.js and scripts/lib/install-targets/<target>.js
Add or update tests for the new install target
```
### Update Hooks Or Automation
Refactors or fixes hooks and automation scripts for CI, formatting, or agent workflow integration.
**Frequency**: ~3 times per month
**Steps**:
1. Edit hooks/hooks.json to update hook configuration
2. Edit or add scripts/hooks/*.js or scripts/hooks/*.sh for hook logic
3. Update or add tests for the hooks in tests/hooks/*.test.js
4. Optionally update related scripts or documentation
**Files typically involved**:
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `scripts/hooks/*.sh`
- `tests/hooks/*.test.js`
**Example commit sequence**:
```
Edit hooks/hooks.json to update hook configuration
Edit or add scripts/hooks/*.js or scripts/hooks/*.sh for hook logic
Update or add tests for the hooks in tests/hooks/*.test.js
Optionally update related scripts or documentation
```
### Documentation Sync And Localization
Updates documentation and ensures Chinese and English docs are in sync, including AGENTS.md and README files.
**Frequency**: ~3 times per month
**Steps**:
1. Edit AGENTS.md, README.md, README.zh-CN.md
2. Edit docs/zh-CN/AGENTS.md, docs/zh-CN/README.md
3. Optionally update skills/*/SKILL.md and .agents/skills/*/SKILL.md for doc improvements
4. Edit WORKING-CONTEXT.md or the-shortform-guide.md as needed
**Files typically involved**:
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`
- `docs/zh-CN/README.md`
- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `WORKING-CONTEXT.md`
- `the-shortform-guide.md`
**Example commit sequence**:
```
Edit AGENTS.md, README.md, README.zh-CN.md
Edit docs/zh-CN/AGENTS.md, docs/zh-CN/README.md
Optionally update skills/*/SKILL.md and .agents/skills/*/SKILL.md for doc improvements
Edit WORKING-CONTEXT.md or the-shortform-guide.md as needed
```
### Dependency Or Ci Update
Updates dependencies or CI workflow files, often via automated bots like dependabot.
**Frequency**: ~4 times per month
**Steps**:
1. Edit .github/workflows/*.yml to update actions or workflow steps
2. Edit package.json, yarn.lock, or package-lock.json for dependency updates
3. Optionally update related scripts or lockfiles
**Files typically involved**:
- `.github/workflows/*.yml`
- `package.json`
- `yarn.lock`
- `package-lock.json`
**Example commit sequence**:
```
Edit .github/workflows/*.yml to update actions or workflow steps
Edit package.json, yarn.lock, or package-lock.json for dependency updates
Optionally update related scripts or lockfiles
```
## Best Practices
Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---
### Add New Agent or Agent Prompt
**Trigger:** When introducing a new agent persona or capability
**Command:** `/add-agent`
1. Create a new agent definition (e.g., `agents/myAgent.md` or `.opencode/prompts/agents/myAgent.txt`).
2. Register the agent in the relevant configuration file (e.g., `.opencode/opencode.json`).
3. Update `AGENTS.md` or related documentation.
*Example:*
```
agents/supportAgent.md
```
---
### Add or Update Command Workflow
**Trigger:** When introducing or improving a workflow command
**Command:** `/add-command`
1. Create or update a command file under `commands/` (e.g., `commands/review.md`).
2. Optionally update related documentation or scripts.
3. Address review feedback and refine command logic.
---
### Add Install Target or Adapter
**Trigger:** When supporting a new platform or integration
**Command:** `/add-install-target`
1. Add new install scripts and documentation under a dot-directory (e.g., `.gemini/`, `.codebuddy/`).
2. Update `manifests/install-modules.json` and relevant schema files.
3. Implement or update `scripts/lib/install-targets/*.js` for the new target.
4. Add or update tests for the install target.
*Example:*
```
.gemini/install.sh
scripts/lib/install-targets/gemini.js
tests/lib/install-targets.test.js
```
---
### Refactor or Collapse Commands to Skills
**Trigger:** When modernizing command logic under the skills architecture
**Command:** `/refactor-to-skill`
1. Update or remove `commands/*.md` files.
2. Add or update `skills/*/SKILL.md` files.
3. Update documentation (`README.md`, `AGENTS.md`, `WORKING-CONTEXT.md`, etc.).
4. Update manifests if required.
---
### Documentation and Guidance Update
**Trigger:** When clarifying, updating, or adding documentation
**Command:** `/update-docs`
1. Edit or create documentation files (`README.md`, `WORKING-CONTEXT.md`, `docs/*.md`, etc.).
2. Sync or update guidance in multiple language versions if needed.
3. Optionally update related configuration or manifest files.
---
### CI/CD Workflow Update
**Trigger:** When improving or fixing CI/CD processes
**Command:** `/update-ci`
1. Edit `.github/workflows/*.yml` files.
2. Update lockfiles (`package-lock.json`, `yarn.lock`) or validation scripts.
3. Update or add test files for CI/CD logic.
---
## Testing Patterns
- **Test Files:**
Test files follow the pattern `*.test.js`.
- **Framework:**
No specific testing framework detected; structure tests with Node's built-in `node:test` runner or your preferred test framework.
*Example:*
```
tests/lib/install-targets.test.js
```
- **Test Example:**
```js
// install-targets.test.js — uses Node's built-in runner since no framework is configured
const { test } = require('node:test');
const assert = require('node:assert');
const installTarget = require('../../scripts/lib/install-targets/gemini');

test('should install Gemini target', () => {
  assert.strictEqual(installTarget(), true);
});
```
## Commands
| Command | Purpose |
|--------------------|----------------------------------------------------------------|
| /add-skill | Add a new skill module and document it |
| /add-agent | Add a new agent definition or prompt |
| /add-command | Add or update a command workflow |
| /add-install-target| Add a new install target or adapter |
| /refactor-to-skill | Refactor commands into skill-based modules |
| /update-docs | Update documentation or guidance files |
| /update-ci | Update CI/CD workflows, lockfiles, or validation scripts |
```
*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*


@@ -11,5 +11,5 @@
".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
],
"updatedAt": "2026-04-01T22:51:43.152Z"
"updatedAt": "2026-04-02T13:23:38.785Z"
}


@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 36 specialized agents, 147 skills, 68 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 36 specialized agents, 151 skills, 68 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -146,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
```
agents/ — 36 specialized subagents
skills/ — 147 workflow skills and domain knowledge
skills/ — 151 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)


@@ -225,7 +225,7 @@ For manual install instructions see the README in the `rules/` folder. When copy
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 36 agents, 147 skills, and 68 legacy command shims.
**That's it!** You now have access to 36 agents, 151 skills, and 68 legacy command shims.
### Multi-model commands require additional setup
@@ -1120,7 +1120,7 @@ The configuration is automatically detected from `.opencode/opencode.json`.
|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 147 skills | PASS: 37 skills | **Claude Code leads** |
| Skills | PASS: 151 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1229,7 +1229,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
|---------|------------|------------|-----------|----------|
| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 147 | Shared | 10 (native format) | 37 |
| **Skills** | 151 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |


@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```
**完成!** 你现在可以使用 36 个代理、147 个技能和 68 个命令。
**完成!** 你现在可以使用 36 个代理、151 个技能和 68 个命令。
### multi-* 命令需要额外配置


@@ -36,6 +36,7 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- continue one-by-one audit of overlapping or low-signal skill content
- move repo guidance and contribution flow to skills-first, leaving commands only as explicit compatibility shims
- add operator skills that wrap connected surfaces instead of exposing only raw APIs or disconnected primitives
- land the canonical voice system, network-optimization lane, and reusable Manim explainer lane
- Security:
- keep dependency posture clean
- preserve self-contained hook and MCP behavior
@@ -59,6 +60,10 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- Direct-port candidates landed after audit:
- `#1078` hook-id dedupe for managed Claude hook reinstalls
- `#844` ui-demo skill
- `#1110` install-time Claude hook root resolution
- `#1106` portable Codex Context7 key extraction
- `#1107` Codex baseline merge and sample agent-role sync
- `#1119` stale CI/lint cleanup that still contained safe low-risk fixes
- Port or rebuild inside ECC after full audit:
- `#894` Jira integration
- `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces
@@ -97,3 +102,12 @@ Keep this file detailed for only the current sprint, blockers, and next actions.
- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now hold the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification.
- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale.
- 2026-04-01: Added the first connected-workflow operator lane as ECC-native skills instead of leaving the surface as raw plugins or APIs: `workspace-surface-audit`, `customer-billing-ops`, `project-flow-ops`, and `google-workspace-ops`. These are tracked under the new `operator-workflows` install module.
- 2026-04-01: Direct-ported the real fix from the unresolved hook-path PR lane into the active installer. Claude installs now replace `${CLAUDE_PLUGIN_ROOT}` with the concrete install root in both `settings.json` and the copied `hooks/hooks.json`, which keeps PreToolUse/PostToolUse hooks working outside plugin-managed env injection.
- 2026-04-01: Replaced the GNU-only `grep -P` parser in `scripts/sync-ecc-to-codex.sh` with a portable Node parser for Context7 key extraction. Added source-level regression coverage so BSD/macOS syncs do not drift back to non-portable parsing.
- 2026-04-01: Targeted regression suite after the direct ports is green: `tests/scripts/install-apply.test.js`, `tests/scripts/sync-ecc-to-codex.test.js`, and `tests/scripts/codex-hooks.test.js`.
- 2026-04-01: Ported the useful core of `#1107` directly into `main` as an add-only Codex baseline merge. `scripts/sync-ecc-to-codex.sh` now fills missing non-MCP defaults from `.codex/config.toml`, syncs sample agent role files into `~/.codex/agents`, and preserves user config instead of replacing it. Added regression coverage for sparse configs and implicit parent tables.
- 2026-04-01: Ported the safe low-risk cleanup from `#1119` directly into `main` instead of keeping an obsolete CI PR open. This included `.mjs` eslint handling, stricter null checks, Windows home-dir coverage in bash-log tests, and longer Trae shell-test timeouts.
- 2026-04-01: Added `brand-voice` as the canonical source-derived writing-style system and wired the content lane to treat it as the shared voice source of truth instead of duplicating partial style heuristics across skills.
- 2026-04-01: Added `connections-optimizer` as the review-first social-graph reorganization workflow for X and LinkedIn, with explicit pruning modes, browser fallback expectations, and Apple Mail drafting guidance.
- 2026-04-01: Added `manim-video` as the reusable technical explainer lane and seeded it with a starter network-graph scene so launch and systems animations do not depend on one-off scratch scripts.
- 2026-04-02: Re-extracted `social-graph-ranker` as a standalone primitive because the weighted bridge-decay model is reusable outside the full lead workflow. `lead-intelligence` now points to it for canonical graph ranking instead of carrying the full algorithm explanation inline, while `connections-optimizer` stays the broader operator layer for pruning, adds, and outbound review packs.


@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — 智能体指令
这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、147 项技能、68 条命令以及自动化钩子工作流,用于软件开发。
这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、151 项技能、68 条命令以及自动化钩子工作流,用于软件开发。
**版本:** 1.9.0
@@ -147,7 +147,7 @@
```
agents/ — 36 个专业子代理
skills/ — 147 个工作流技能和领域知识
skills/ — 151 个工作流技能和领域知识
commands/ — 68 个斜杠命令
hooks/ — 基于触发的自动化
rules/ — 始终遵循的指导方针(通用 + 每种语言)


@@ -209,7 +209,7 @@ npx ecc-install typescript
/plugin list everything-claude-code@everything-claude-code
```
**搞定!** 你现在可以使用 36 个智能体、147 项技能和 68 个命令了。
**搞定!** 你现在可以使用 36 个智能体、151 项技能和 68 个命令了。
***
@@ -1096,7 +1096,7 @@ opencode
|---------|-------------|----------|--------|
| 智能体 | PASS: 36 个 | PASS: 12 个 | **Claude Code 领先** |
| 命令 | PASS: 68 个 | PASS: 31 个 | **Claude Code 领先** |
| 技能 | PASS: 147 项 | PASS: 37 项 | **Claude Code 领先** |
| 技能 | PASS: 151 项 | PASS: 37 项 | **Claude Code 领先** |
| 钩子 | PASS: 8 种事件类型 | PASS: 11 种事件 | **OpenCode 更多!** |
| 规则 | PASS: 29 条 | PASS: 13 条指令 | **Claude Code 领先** |
| MCP 服务器 | PASS: 14 个 | PASS: 完整 | **完全对等** |
@@ -1208,7 +1208,7 @@ ECC 是**第一个最大化利用每个主要 AI 编码工具的插件**。以
|---------|------------|------------|-----------|----------|
| **智能体** | 36 | 共享 (AGENTS.md) | 共享 (AGENTS.md) | 12 |
| **命令** | 68 | 共享 | 基于指令 | 31 |
| **技能** | 147 | 共享 | 10 (原生格式) | 37 |
| **技能** | 151 | 共享 | 10 (原生格式) | 37 |
| **钩子事件** | 8 种类型 | 15 种类型 | 暂无 | 11 种类型 |
| **钩子脚本** | 20+ 个脚本 | 16 个脚本 (DRY 适配器) | N/A | 插件钩子 |
| **规则** | 34 (通用 + 语言) | 34 (YAML frontmatter) | 基于指令 | 13 条指令 |


@@ -24,5 +24,11 @@ module.exports = [
'no-undef': 'error',
'eqeqeq': 'warn'
}
},
{
files: ['**/*.mjs'],
languageOptions: {
sourceType: 'module'
}
}
];


@@ -140,7 +140,7 @@
{
"id": "capability:content",
"family": "capability",
"description": "Business, writing, market, and investor communication skills.",
"description": "Business, writing, market, investor communication, and reusable voice-system skills.",
"modules": [
"business-content"
]
@@ -148,7 +148,7 @@
{
"id": "capability:operators",
"family": "capability",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
"modules": [
"operator-workflows"
]
@@ -164,7 +164,7 @@
{
"id": "capability:media",
"family": "capability",
"description": "Media generation and AI-assisted editing skills.",
"description": "Media generation, technical explainers, and AI-assisted editing skills.",
"modules": [
"media-generation"
]


@@ -282,10 +282,12 @@
"description": "Business, writing, market, and investor communication skills.",
"paths": [
"skills/article-writing",
"skills/brand-voice",
"skills/content-engine",
"skills/investor-materials",
"skills/investor-outreach",
"skills/lead-intelligence",
"skills/social-graph-ranker",
"skills/market-research"
],
"targets": [
@@ -306,8 +308,9 @@
{
"id": "operator-workflows",
"kind": "skills",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
"description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
"paths": [
"skills/connections-optimizer",
"skills/customer-billing-ops",
"skills/google-workspace-ops",
"skills/project-flow-ops",
@@ -354,9 +357,10 @@
{
"id": "media-generation",
"kind": "skills",
"description": "Media generation and AI-assisted editing skills.",
"description": "Media generation, technical explainers, and AI-assisted editing skills.",
"paths": [
"skills/fal-ai-media",
"skills/manim-video",
"skills/remotion-video-creation",
"skills/ui-demo",
"skills/video-editing",


@@ -80,6 +80,7 @@
"scripts/orchestrate-worktrees.js",
"scripts/setup-package-manager.js",
"scripts/skill-create-output.js",
"scripts/codex/merge-codex-config.js",
"scripts/codex/merge-mcp-config.js",
"scripts/repair.js",
"scripts/harness-audit.js",


@@ -0,0 +1,317 @@
#!/usr/bin/env node
'use strict';
/**
* Merge the non-MCP Codex baseline from `.codex/config.toml` into a target
* `config.toml` without overwriting existing user choices.
*
* Strategy: add-only.
* - Missing root keys are inserted before the first TOML table.
* - Missing table keys are appended to existing tables.
* - Missing tables are appended to the end of the file.
*/
const fs = require('fs');
const path = require('path');
let TOML;
try {
TOML = require('@iarna/toml');
} catch {
console.error('[ecc-codex] Missing dependency: @iarna/toml');
console.error('[ecc-codex] Run: npm install (from the ECC repo root)');
process.exit(1);
}
const ROOT_KEYS = ['approval_policy', 'sandbox_mode', 'web_search', 'notify', 'persistent_instructions'];
const TABLE_PATHS = [
'features',
'profiles.strict',
'profiles.yolo',
'agents',
'agents.explorer',
'agents.reviewer',
'agents.docs_researcher',
];
const TOML_HEADER_RE = /^[ \t]*(?:\[[^[\]\n][^\]\n]*\]|\[\[[^[\]\n][^\]\n]*\]\])[ \t]*(?:#.*)?$/m;
function log(message) {
console.log(`[ecc-codex] ${message}`);
}
function warn(message) {
console.warn(`[ecc-codex] WARNING: ${message}`);
}
function getNested(obj, pathParts) {
let current = obj;
for (const part of pathParts) {
if (!current || typeof current !== 'object' || !(part in current)) {
return undefined;
}
current = current[part];
}
return current;
}
function setNested(obj, pathParts, value) {
let current = obj;
for (let i = 0; i < pathParts.length - 1; i += 1) {
const part = pathParts[i];
if (!current[part] || typeof current[part] !== 'object' || Array.isArray(current[part])) {
current[part] = {};
}
current = current[part];
}
current[pathParts[pathParts.length - 1]] = value;
}
function findFirstTableIndex(raw) {
const match = TOML_HEADER_RE.exec(raw);
return match ? match.index : -1;
}
function findTableRange(raw, tablePath) {
const escaped = tablePath.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const headerPattern = new RegExp(`^[ \\t]*\\[${escaped}\\][ \\t]*(?:#.*)?$`, 'm');
const match = headerPattern.exec(raw);
if (!match) {
return null;
}
const headerEnd = raw.indexOf('\n', match.index);
const bodyStart = headerEnd === -1 ? raw.length : headerEnd + 1;
const nextHeaderRel = raw.slice(bodyStart).search(TOML_HEADER_RE);
const bodyEnd = nextHeaderRel === -1 ? raw.length : bodyStart + nextHeaderRel;
return { bodyStart, bodyEnd };
}
function ensureTrailingNewline(text) {
return text.endsWith('\n') ? text : `${text}\n`;
}
function insertBeforeFirstTable(raw, block) {
const normalizedBlock = ensureTrailingNewline(block.trimEnd());
const firstTableIndex = findFirstTableIndex(raw);
if (firstTableIndex === -1) {
const prefix = raw.trimEnd();
return prefix ? `${prefix}\n${normalizedBlock}` : normalizedBlock;
}
const before = raw.slice(0, firstTableIndex).trimEnd();
const after = raw.slice(firstTableIndex).replace(/^\n+/, '');
return `${before}\n\n${normalizedBlock}\n${after}`;
}
function appendBlock(raw, block) {
const prefix = raw.trimEnd();
const normalizedBlock = block.trimEnd();
return prefix ? `${prefix}\n\n${normalizedBlock}\n` : `${normalizedBlock}\n`;
}
// Serialize a scalar through TOML.stringify under a dummy key, then strip the
// "value = " prefix to recover just the TOML literal.
function stringifyValue(value) {
return TOML.stringify({ value }).trim().replace(/^value = /, '');
}
function updateInlineTableKeys(raw, tablePath, missingKeys) {
const pathParts = tablePath.split('.');
if (pathParts.length < 2) {
return null;
}
const parentPath = pathParts.slice(0, -1).join('.');
const parentRange = findTableRange(raw, parentPath);
if (!parentRange) {
return null;
}
const tableKey = pathParts[pathParts.length - 1];
const escapedKey = tableKey.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const body = raw.slice(parentRange.bodyStart, parentRange.bodyEnd);
const lines = body.split('\n');
for (let index = 0; index < lines.length; index += 1) {
const inlinePattern = new RegExp(`^(\\s*${escapedKey}\\s*=\\s*\\{)(.*?)(\\}\\s*(?:#.*)?)$`);
const match = inlinePattern.exec(lines[index]);
if (!match) {
continue;
}
const additions = Object.entries(missingKeys)
.map(([key, value]) => `${key} = ${stringifyValue(value)}`)
.join(', ');
const existingEntries = match[2].trim();
const nextEntries = existingEntries ? `${existingEntries}, ${additions}` : additions;
lines[index] = `${match[1]}${nextEntries}${match[3]}`;
return `${raw.slice(0, parentRange.bodyStart)}${lines.join('\n')}${raw.slice(parentRange.bodyEnd)}`;
}
return null;
}
function appendImplicitTable(raw, tablePath, missingKeys) {
const candidate = appendBlock(raw, stringifyTable(tablePath, missingKeys));
try {
TOML.parse(candidate);
return candidate;
} catch {
return null;
}
}
function appendToTable(raw, tablePath, block, missingKeys = null) {
const range = findTableRange(raw, tablePath);
if (!range) {
if (missingKeys) {
const inlineUpdated = updateInlineTableKeys(raw, tablePath, missingKeys);
if (inlineUpdated) {
return inlineUpdated;
}
const appendedTable = appendImplicitTable(raw, tablePath, missingKeys);
if (appendedTable) {
return appendedTable;
}
}
warn(`Skipping missing keys for [${tablePath}] because it has no standalone header and could not be safely updated`);
return raw;
}
const before = raw.slice(0, range.bodyEnd).trimEnd();
const after = raw.slice(range.bodyEnd).replace(/^\n*/, '\n');
return `${before}\n${block.trimEnd()}\n${after}`;
}
function stringifyRootKeys(keys) {
return TOML.stringify(keys).trim();
}
function stringifyTable(tablePath, value) {
const scalarOnly = {};
for (const [key, entryValue] of Object.entries(value)) {
if (entryValue && typeof entryValue === 'object' && !Array.isArray(entryValue)) {
continue;
}
scalarOnly[key] = entryValue;
}
const snippet = {};
setNested(snippet, tablePath.split('.'), scalarOnly);
return TOML.stringify(snippet).trim();
}
function stringifyTableKeys(tableValue) {
const lines = [];
for (const [key, value] of Object.entries(tableValue)) {
if (value && typeof value === 'object' && !Array.isArray(value)) {
continue;
}
lines.push(TOML.stringify({ [key]: value }).trim());
}
return lines.join('\n');
}
function main() {
const args = process.argv.slice(2);
const configPath = args.find(arg => !arg.startsWith('-'));
const dryRun = args.includes('--dry-run');
if (!configPath) {
console.error('Usage: merge-codex-config.js <config.toml> [--dry-run]');
process.exit(1);
}
const referencePath = path.join(__dirname, '..', '..', '.codex', 'config.toml');
if (!fs.existsSync(referencePath)) {
console.error(`[ecc-codex] Reference config not found: ${referencePath}`);
process.exit(1);
}
if (!fs.existsSync(configPath)) {
console.error(`[ecc-codex] Config file not found: ${configPath}`);
process.exit(1);
}
const raw = fs.readFileSync(configPath, 'utf8');
const referenceRaw = fs.readFileSync(referencePath, 'utf8');
let targetConfig;
let referenceConfig;
try {
targetConfig = TOML.parse(raw);
referenceConfig = TOML.parse(referenceRaw);
} catch (error) {
console.error(`[ecc-codex] Failed to parse TOML: ${error.message}`);
process.exit(1);
}
const missingRootKeys = {};
for (const key of ROOT_KEYS) {
if (referenceConfig[key] !== undefined && targetConfig[key] === undefined) {
missingRootKeys[key] = referenceConfig[key];
}
}
const missingTables = [];
const missingTableKeys = [];
for (const tablePath of TABLE_PATHS) {
const pathParts = tablePath.split('.');
const referenceValue = getNested(referenceConfig, pathParts);
if (referenceValue === undefined) {
continue;
}
const targetValue = getNested(targetConfig, pathParts);
if (targetValue === undefined) {
missingTables.push(tablePath);
continue;
}
const missingKeys = {};
for (const [key, value] of Object.entries(referenceValue)) {
if (value && typeof value === 'object' && !Array.isArray(value)) {
continue;
}
if (targetValue[key] === undefined) {
missingKeys[key] = value;
}
}
if (Object.keys(missingKeys).length > 0) {
missingTableKeys.push({ tablePath, missingKeys });
}
}
if (
Object.keys(missingRootKeys).length === 0 &&
missingTables.length === 0 &&
missingTableKeys.length === 0
) {
log('All baseline Codex settings already present. Nothing to do.');
return;
}
let nextRaw = raw;
if (Object.keys(missingRootKeys).length > 0) {
log(` [add-root] ${Object.keys(missingRootKeys).join(', ')}`);
nextRaw = insertBeforeFirstTable(nextRaw, stringifyRootKeys(missingRootKeys));
}
for (const { tablePath, missingKeys } of missingTableKeys) {
log(` [add-keys] [${tablePath}] -> ${Object.keys(missingKeys).join(', ')}`);
nextRaw = appendToTable(nextRaw, tablePath, stringifyTableKeys(missingKeys), missingKeys);
}
for (const tablePath of missingTables) {
log(` [add-table] [${tablePath}]`);
nextRaw = appendBlock(nextRaw, stringifyTable(tablePath, getNested(referenceConfig, tablePath.split('.'))));
}
if (dryRun) {
log('Dry run — would write the merged Codex baseline.');
return;
}
fs.writeFileSync(configPath, nextRaw, 'utf8');
log('Done. Baseline Codex settings merged.');
}
main();

View File

@@ -67,7 +67,7 @@ function findFileIssues(filePath) {
try {
const content = getStagedFileContent(filePath);
- if (content == null) {
+ if (content === null || content === undefined) {
return issues;
}
const lines = content.split('\n');

View File

@@ -20,16 +20,43 @@ function readJsonObject(filePath, label) {
return parsed;
}
- function buildLegacyHookSignature(entry) {
+ function replacePluginRootPlaceholders(value, pluginRoot) {
+ if (!pluginRoot) {
+ return value;
+ }
+ if (typeof value === 'string') {
+ return value.split('${CLAUDE_PLUGIN_ROOT}').join(pluginRoot);
+ }
+ if (Array.isArray(value)) {
+ return value.map(item => replacePluginRootPlaceholders(item, pluginRoot));
+ }
+ if (value && typeof value === 'object') {
+ return Object.fromEntries(
+ Object.entries(value).map(([key, nestedValue]) => [
+ key,
+ replacePluginRootPlaceholders(nestedValue, pluginRoot),
+ ])
+ );
+ }
+ return value;
+ }
+ function buildLegacyHookSignature(entry, pluginRoot) {
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
return null;
}
- if (typeof entry.matcher !== 'string' || !Array.isArray(entry.hooks)) {
+ const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);
+ if (typeof normalizedEntry.matcher !== 'string' || !Array.isArray(normalizedEntry.hooks)) {
return null;
}
- const hookSignature = entry.hooks.map(hook => JSON.stringify({
+ const hookSignature = normalizedEntry.hooks.map(hook => JSON.stringify({
type: hook && typeof hook === 'object' ? hook.type : undefined,
command: hook && typeof hook === 'object' ? hook.command : undefined,
timeout: hook && typeof hook === 'object' ? hook.timeout : undefined,
@@ -37,33 +64,35 @@ function buildLegacyHookSignature(entry) {
}));
return JSON.stringify({
- matcher: entry.matcher,
+ matcher: normalizedEntry.matcher,
hooks: hookSignature,
});
}
- function getHookEntryAliases(entry) {
+ function getHookEntryAliases(entry, pluginRoot) {
const aliases = [];
if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
return aliases;
}
- if (typeof entry.id === 'string' && entry.id.trim().length > 0) {
- aliases.push(`id:${entry.id.trim()}`);
+ const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);
+ if (typeof normalizedEntry.id === 'string' && normalizedEntry.id.trim().length > 0) {
+ aliases.push(`id:${normalizedEntry.id.trim()}`);
}
- const legacySignature = buildLegacyHookSignature(entry);
+ const legacySignature = buildLegacyHookSignature(normalizedEntry, pluginRoot);
if (legacySignature) {
aliases.push(`legacy:${legacySignature}`);
}
- aliases.push(`json:${JSON.stringify(entry)}`);
+ aliases.push(`json:${JSON.stringify(normalizedEntry)}`);
return aliases;
}
- function mergeHookEntries(existingEntries, incomingEntries) {
+ function mergeHookEntries(existingEntries, incomingEntries, pluginRoot) {
const mergedEntries = [];
const seenEntries = new Set();
@@ -76,7 +105,7 @@ function mergeHookEntries(existingEntries, incomingEntries) {
continue;
}
- const aliases = getHookEntryAliases(entry);
+ const aliases = getHookEntryAliases(entry, pluginRoot);
if (aliases.some(alias => seenEntries.has(alias))) {
continue;
}
@@ -84,7 +113,7 @@ function mergeHookEntries(existingEntries, incomingEntries) {
for (const alias of aliases) {
seenEntries.add(alias);
}
- mergedEntries.push(entry);
+ mergedEntries.push(replacePluginRootPlaceholders(entry, pluginRoot));
}
return mergedEntries;
@@ -100,6 +129,7 @@ function buildMergedSettings(plan) {
return null;
}
+ const pluginRoot = plan.targetRoot;
const hooksDestinationPath = path.join(plan.targetRoot, 'hooks', 'hooks.json');
const hooksSourcePath = findHooksSourcePath(plan, hooksDestinationPath) || hooksDestinationPath;
if (!fs.existsSync(hooksSourcePath)) {
@@ -107,7 +137,7 @@ function buildMergedSettings(plan) {
}
const hooksConfig = readJsonObject(hooksSourcePath, 'hooks config');
- const incomingHooks = hooksConfig.hooks;
+ const incomingHooks = replacePluginRootPlaceholders(hooksConfig.hooks, pluginRoot);
if (!incomingHooks || typeof incomingHooks !== 'object' || Array.isArray(incomingHooks)) {
throw new Error(`Invalid hooks config at ${hooksSourcePath}: expected "hooks" to be a JSON object`);
}
@@ -126,7 +156,7 @@ function buildMergedSettings(plan) {
for (const [eventName, incomingEntries] of Object.entries(incomingHooks)) {
const currentEntries = Array.isArray(existingHooks[eventName]) ? existingHooks[eventName] : [];
const nextEntries = Array.isArray(incomingEntries) ? incomingEntries : [];
- mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries);
+ mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries, pluginRoot);
}
const mergedSettings = {
@@ -137,6 +167,11 @@ function buildMergedSettings(plan) {
return {
settingsPath,
mergedSettings,
+ hooksDestinationPath,
+ resolvedHooksConfig: {
+ ...hooksConfig,
+ hooks: incomingHooks,
+ },
};
}
@@ -149,6 +184,12 @@ function applyInstallPlan(plan) {
}
if (mergedSettingsPlan) {
+ fs.mkdirSync(path.dirname(mergedSettingsPlan.hooksDestinationPath), { recursive: true });
+ fs.writeFileSync(
+ mergedSettingsPlan.hooksDestinationPath,
+ JSON.stringify(mergedSettingsPlan.resolvedHooksConfig, null, 2) + '\n',
+ 'utf8'
+ );
fs.mkdirSync(path.dirname(mergedSettingsPlan.settingsPath), { recursive: true });
fs.writeFileSync(
mergedSettingsPlan.settingsPath,

View File

@@ -37,8 +37,8 @@ function createSkillObservation(input) {
? input.skill.path.trim()
: null;
const success = Boolean(input.success);
- const error = input.error == null ? null : String(input.error);
- const feedback = input.feedback == null ? null : String(input.feedback);
+ const error = input.error === null || input.error === undefined ? null : String(input.error);
+ const feedback = input.feedback === null || input.feedback === undefined ? null : String(input.feedback);
const variant = typeof input.variant === 'string' && input.variant.trim().length > 0
? input.variant.trim()
: 'baseline';

View File

@@ -27,8 +27,11 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
AGENTS_FILE="$CODEX_HOME/AGENTS.md"
AGENTS_ROOT_SRC="$REPO_ROOT/AGENTS.md"
AGENTS_CODEX_SUPP_SRC="$REPO_ROOT/.codex/AGENTS.md"
+ CODEX_AGENTS_SRC="$REPO_ROOT/.codex/agents"
+ CODEX_AGENTS_DEST="$CODEX_HOME/agents"
PROMPTS_SRC="$REPO_ROOT/commands"
PROMPTS_DEST="$CODEX_HOME/prompts"
+ BASELINE_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-codex-config.js"
HOOKS_INSTALLER="$REPO_ROOT/scripts/codex/install-global-git-hooks.sh"
SANITY_CHECKER="$REPO_ROOT/scripts/codex/check-codex-global-state.sh"
CURSOR_RULES_DIR="$REPO_ROOT/.cursor/rules"
@@ -106,7 +109,23 @@ extract_toml_value() {
extract_context7_key() {
local file="$1"
- grep -oP -- '--key",[[:space:]]*"\K[^"]+' "$file" | head -n 1 || true
+ node - "$file" <<'EOF'
+ const fs = require('fs');
+ const filePath = process.argv[2];
+ let source = '';
+ try {
+ source = fs.readFileSync(filePath, 'utf8');
+ } catch {
+ process.exit(0);
+ }
+ const match = source.match(/--key",\s*"([^"]+)"/);
+ if (match && match[1]) {
+ process.stdout.write(`${match[1]}\n`);
+ }
+ EOF
}
generate_prompt_file() {
@@ -130,7 +149,9 @@ MCP_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-mcp-config.js"
require_path "$REPO_ROOT/AGENTS.md" "ECC AGENTS.md"
require_path "$AGENTS_CODEX_SUPP_SRC" "ECC Codex AGENTS supplement"
+ require_path "$CODEX_AGENTS_SRC" "ECC Codex agent roles"
require_path "$PROMPTS_SRC" "ECC commands directory"
+ require_path "$BASELINE_MERGE_SCRIPT" "ECC Codex baseline merge script"
require_path "$HOOKS_INSTALLER" "ECC global git hooks installer"
require_path "$SANITY_CHECKER" "ECC global sanity checker"
require_path "$CURSOR_RULES_DIR" "ECC Cursor rules directory"
@@ -231,6 +252,26 @@ else
fi
fi
+ log "Merging ECC Codex baseline into $CONFIG_FILE (add-only, preserving user config)"
+ if [[ "$MODE" == "dry-run" ]]; then
+ node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE" --dry-run
+ else
+ node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE"
+ fi
+ log "Syncing sample Codex agent role files"
+ run_or_echo mkdir -p "$CODEX_AGENTS_DEST"
+ for agent_file in "$CODEX_AGENTS_SRC"/*.toml; do
+ [[ -f "$agent_file" ]] || continue
+ agent_name="$(basename "$agent_file")"
+ dest="$CODEX_AGENTS_DEST/$agent_name"
+ if [[ -e "$dest" ]]; then
+ log "Keeping existing Codex agent role file: $dest"
+ else
+ run_or_echo cp "$agent_file" "$dest"
+ fi
+ done
# Skills are NOT synced here — Codex CLI reads directly from
# ~/.agents/skills/ (installed by ECC installer / npx skills).
# Copying into ~/.codex/skills/ was unnecessary.

View File

@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---
# Brand Voice
Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.
## When to Activate
- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry
## Source Priority
Use the strongest real source set available, in this order:
1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy
Do not use generic platform exemplars as source material.
## Collection Workflow
1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
## What to Extract
- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
## Output Contract
Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).
Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults
If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:
- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over
## Hard Bans
Delete and rewrite any of these:
- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals
## Persistence Rules
- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.
## Downstream Use
Use this skill before or inside:
- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email
If another skill already has a partial voice capture section, this skill is the canonical source of truth.

View File

@@ -0,0 +1,55 @@
# Voice Profile Schema
Use this exact structure when building a reusable voice profile:
```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:
Source Set
- source 1
- source 2
- source 3
Rhythm
- short note on sentence length, pacing, and fragmentation
Compression
- how dense or explanatory the writing is
Capitalization
- conventional, mixed, or situational
Parentheticals
- how they are used and how they are not used
Question Use
- rare, frequent, rhetorical, direct, or mostly absent
Claim Style
- how claims are framed, supported, and sharpened
Preferred Moves
- concrete moves the author does use
Banned Moves
- specific patterns the author does not use
CTA Rules
- how, when, or whether to close with asks
Channel Notes
- X:
- LinkedIn:
- Email:
```
Guidelines:
- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.

View File

@@ -8,8 +8,7 @@
* exit 0: success exit 1: no projects
*/
- import { readProjects, loadContext, today, CONTEXTS_DIR } from './shared.mjs';
- import { renderListTable } from './shared.mjs';
+ import { readProjects, loadContext, today, renderListTable } from './shared.mjs';
const cwd = process.env.PWD || process.cwd();
const projects = readProjects();

View File

@@ -11,7 +11,7 @@
* exit 0: success exit 1: error
*/
- import { readFileSync, writeFileSync, existsSync, renameSync } from 'fs';
+ import { readFileSync, existsSync, renameSync } from 'fs';
import { resolve } from 'path';
import { readProjects, writeProjects, saveContext, today, shortId, CONTEXTS_DIR } from './shared.mjs';
@@ -112,7 +112,11 @@ for (const [projectPath, info] of Object.entries(projects)) {
const contextMd = existsSync(contextMdPath) ? readFileSync(contextMdPath, 'utf8') : '';
let meta = {};
if (existsSync(metaPath)) {
- try { meta = JSON.parse(readFileSync(metaPath, 'utf8')); } catch {}
+ try {
+ meta = JSON.parse(readFileSync(metaPath, 'utf8'));
+ } catch (e) {
+ console.warn(`  ! ${contextDir}: invalid meta.json, continuing with defaults (${e.message})`);
+ }
}
// Extract fields from CONTEXT.md

View File

@@ -20,8 +20,8 @@ import { readFileSync, mkdirSync, writeFileSync } from 'fs';
import { resolve } from 'path';
import {
readProjects, writeProjects, loadContext, saveContext,
- today, shortId, gitSummary, nativeMemoryDir, encodeProjectPath,
- CONTEXTS_DIR, CURRENT_SESSION,
+ today, shortId, gitSummary, nativeMemoryDir,
+ CURRENT_SESSION,
} from './shared.mjs';
const isInit = process.argv.includes('--init');

View File

@@ -5,8 +5,8 @@
* No external dependencies. Node.js stdlib only.
*/
- import { readFileSync, writeFileSync, existsSync, mkdirSync, readdirSync } from 'fs';
- import { resolve, basename } from 'path';
+ import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
+ import { resolve } from 'path';
import { homedir } from 'os';
import { spawnSync } from 'child_process';
import { randomBytes } from 'crypto';
@@ -270,7 +270,7 @@ export function renderContextMd(ctx) {
}
/** Render the bordered briefing box used by /ck:resume */
- export function renderBriefingBox(ctx, meta = {}) {
+ export function renderBriefingBox(ctx, _meta = {}) {
const latest = ctx.sessions?.[ctx.sessions.length - 1] || {};
const W = 57;
const pad = (str, w) => {
@@ -344,7 +344,7 @@ export function renderInfoBlock(ctx) {
}
/** Render ASCII list table used by /ck:list */
- export function renderListTable(entries, cwd, todayStr) {
+ export function renderListTable(entries, cwd, _todayStr) {
// entries: [{name, contextDir, path, context, lastUpdated}]
// Sorted alphabetically by contextDir before calling
const rows = entries.map((e, i) => {

View File

@@ -0,0 +1,189 @@
---
name: connections-optimizer
description: Reorganize the user's X and LinkedIn network with review-first pruning, add/follow recommendations, and channel-specific warm outreach drafted in the user's real voice. Use when the user wants to clean up following lists, grow toward current priorities, or rebalance a social graph around higher-signal relationships.
origin: ECC
---
# Connections Optimizer
Reorganize the user's network instead of treating outbound as a one-way prospecting list.
This skill handles:
- X following cleanup and expansion
- LinkedIn follow and connection analysis
- review-first prune queues
- add and follow recommendations
- warm-path identification
- Apple Mail, X DM, and LinkedIn draft generation in the user's real voice
## When to Activate
- the user wants to prune their X following
- the user wants to rebalance who they follow or stay connected to
- the user says "clean up my network", "who should I unfollow", "who should I follow", "who should I reconnect with"
- outreach quality depends on network structure, not just cold list generation
## Required Inputs
Collect or infer:
- current priorities and active work
- target roles, industries, geos, or ecosystems
- platform selection: X, LinkedIn, or both
- do-not-touch list
- mode: `light-pass`, `default`, or `aggressive`
If the user does not specify a mode, use `default`.
## Tool Requirements
### Preferred
- `x-api` for X graph inspection and recent activity
- `lead-intelligence` for target discovery and warm-path ranking
- `social-graph-ranker` when the user wants bridge value scored independently of the broader lead workflow
- Exa / deep research for person and company enrichment
- `brand-voice` before drafting outbound
### Fallbacks
- browser control for LinkedIn analysis and drafting
- browser control for X if API coverage is constrained
- Apple Mail or Mail.app drafting via desktop automation when email is the right channel
## Safety Defaults
- default is review-first, never blind auto-pruning
- X: prune only accounts the user follows, never followers
- LinkedIn: treat 1st-degree connection removal as manual-review-first
- do not auto-send DMs, invites, or emails
- emit a ranked action plan and drafts before any apply step
## Platform Rules
### X
- mutuals are stickier than one-way follows
- non-follow-backs can be pruned more aggressively
- heavily inactive or disappeared accounts should surface quickly
- engagement, signal quality, and bridge value matter more than raw follower count
### LinkedIn
- API-first if the user actually has LinkedIn API access
- browser workflow must work when API access is missing
- distinguish outbound follows from accepted 1st-degree connections
- outbound follows can be pruned more freely
- accepted 1st-degree connections should default to review, not auto-remove
## Modes
### `light-pass`
- prune only high-confidence low-value one-way follows
- surface the rest for review
- generate a small add/follow list
### `default`
- balanced prune queue
- balanced keep list
- ranked add/follow queue
- draft warm intros or direct outreach where useful
### `aggressive`
- larger prune queue
- lower tolerance for stale non-follow-backs
- still review-gated before apply
## Scoring Model
Use these positive signals:
- reciprocity
- recent activity
- alignment to current priorities
- network bridge value
- role relevance
- real engagement history
- recent presence and responsiveness
Use these negative signals:
- disappeared or abandoned account
- stale one-way follow
- off-priority topic cluster
- low-value noise
- repeated non-response
- no follow-back when many better replacements exist
Mutuals and real warm-path bridges should be penalized less aggressively than one-way follows.
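A minimal sketch of how these signals could combine into a single score. The weights and field names here are illustrative assumptions, not part of this skill's contract; the real pass should weigh mutuals and warm-path bridges more gently, as noted above.

```javascript
// Illustrative scoring sketch. Field names and weights are assumptions.
function scoreConnection(c) {
  let score = 0;
  // positive signals
  if (c.mutual) score += 3;                   // reciprocity
  if (c.activeInLast90Days) score += 2;       // recent activity
  if (c.priorityAligned) score += 3;          // alignment to current priorities
  score += 2 * (c.bridgeValue || 0);          // network bridge value, normalized 0..1
  if (c.engagedBefore) score += 1;            // real engagement history
  // negative signals
  if (c.abandoned) score -= 4;                // disappeared or abandoned account
  if (!c.mutual && c.staleFollow) score -= 2; // stale one-way follow (mutuals penalized less)
  if (c.offPriority) score -= 2;              // off-priority topic cluster
  return score;
}

// Rank so keep/protect candidates surface first and prune candidates sink.
const ranked = [
  { handle: 'keeper', mutual: true, activeInLast90Days: true, priorityAligned: true, bridgeValue: 0.8, engagedBefore: true },
  { handle: 'prune-me', mutual: false, staleFollow: true, abandoned: true, offPriority: true },
].sort((a, b) => scoreConnection(b) - scoreConnection(a));
```

The output feeds the review pack: low scorers go to the prune queue with their triggering signals as explicit reasons, never auto-applied.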
## Workflow
1. Capture priorities, do-not-touch constraints, and selected platforms.
2. Pull the current following / connection inventory.
3. Score prune candidates with explicit reasons.
4. Score keep candidates with explicit reasons.
5. Use `lead-intelligence` plus research surfaces to rank expansion candidates.
6. Match the right channel:
- X DM for warm, fast social touch points
- LinkedIn message for professional graph adjacency
- Apple Mail draft for higher-context intros or outreach
7. Run `brand-voice` before drafting messages.
8. Return a review pack before any apply step.
## Review Pack Format
```text
CONNECTIONS OPTIMIZER REPORT
============================
Mode:
Platforms:
Priority Set:
Prune Queue
- handle / profile
reason:
confidence:
action:
Review Queue
- handle / profile
reason:
risk:
Keep / Protect
- handle / profile
bridge value:
Add / Follow Targets
- person
why now:
warm path:
preferred channel:
Drafts
- X DM:
- LinkedIn:
- Apple Mail:
```
## Outbound Rules
- Default email path is Apple Mail / Mail.app draft creation.
- Do not send automatically.
- Choose the channel based on warmth, relevance, and context depth.
- Do not force a DM when an email or no outreach is the right move.
- Drafts should sound like the user, not like automated sales copy.
## Related Skills
- `brand-voice` for the reusable voice profile
- `social-graph-ranker` for the standalone bridge-scoring and warm-path math
- `lead-intelligence` for weighted target and warm-path discovery
- `x-api` for X graph access, drafting, and optional apply flows
- `content-engine` when the user also wants public launch content around network moves

View File

@@ -36,27 +36,24 @@ Before drafting, identify the source set:
- prior posts from the same author
If the user wants a specific voice, build a voice profile from real examples before writing.
+ Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
- ## Voice Capture Workflow
- Collect 5 to 20 examples when available. Good sources:
- - articles or essays
- - X posts or threads
- - docs or release notes
- - newsletters
- - previous launch posts
+ Run `brand-voice` first when:
If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.
+ - there are multiple downstream outputs
+ - the user explicitly cares about writing style
+ - the content is launch, outreach, or reputation-sensitive
- Extract and write down:
- - sentence length and rhythm
- - how compressed or explanatory the writing is
- - whether capitalization is conventional, mixed, or situational
- - how parentheses are used
- - whether the writer uses fragments, lists, or abrupt pivots
- - how often the writer asks questions
- - how sharp, formal, opinionated, or dry the voice is
- - what the writer never does
+ At minimum, produce a compact `VOICE PROFILE` covering:
+ - rhythm
+ - compression
+ - capitalization
+ - parenthetical use
+ - question use
+ - preferred moves
+ - banned moves
Do not start drafting until the voice profile is clear enough to enforce.
@@ -148,3 +145,9 @@ Before delivering:
- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved
+ ## Related Skills
+ - `brand-voice` for source-derived voice profiles
+ - `crosspost` for platform-specific distribution
+ - `x-api` for sourcing recent posts and publishing approved X output

View File

@@ -37,6 +37,8 @@ Use `content-engine` first if the source still needs voice shaping.
### Step 2: Capture the Voice Fingerprint
+ Run `brand-voice` first if the source voice is not already captured in the current session.
Before adapting, note:
- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
@@ -110,5 +112,6 @@ Before delivering:
## Related Skills
+ - `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows

View File

@@ -21,7 +21,7 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va
### Required
- **Exa MCP** — Deep web search for people, companies, and signals (`web_search_exa`)
- - **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, `X_ACCESS_TOKEN`)
+ - **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, plus write-context credentials such as `X_CONSUMER_KEY`, `X_CONSUMER_SECRET`, `X_ACCESS_TOKEN`, `X_ACCESS_TOKEN_SECRET`)
### Optional (enhance results)
- **LinkedIn** — Direct API if available, otherwise browser control for search, profile inspection, and drafting
@@ -43,36 +43,10 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va
Do not draft outbound from generic sales copy.
Before writing a message, build a voice profile from real source material. Prefer:
- recent X posts and threads
- published articles, memos, or launch notes
- prior outreach emails that actually worked
- docs, changelogs, or product writing if those are the strongest signals
+ Run `brand-voice` first whenever the user's voice matters. Reuse its `VOICE PROFILE` instead of re-deriving style ad hoc inside this skill.
If live X access is available, pull recent original posts before drafting. If not, use supplied examples or the best repo/site material available.
- Extract:
- - sentence length and rhythm
- - how compressed or explanatory the writing is
- - how parentheses are used
- - whether capitalization is conventional or situational
- - how often questions are used
- - phrases or transitions the author never uses
- For Affaan / ECC style specifically:
- - direct, compressed, concrete
- - strong preference for specifics, mechanisms, and receipts
- - parentheticals are for qualification or over-clarification, not jokes
- - lowercase is optional, not mandatory
- - no fake curiosity hooks
- - no "not X, just Y"
- - no "no fluff"
- - no LinkedIn thought-leader cadence
- - no bait question at the end
## Stage 1: Signal Scoring
Search for high-signal people in target verticals. Assign a weight to each based on:
@@ -115,11 +89,12 @@ x_search = search_recent_tweets(
For each scored target, analyze the user's social graph to find the warmest path.
- ### Algorithm
+ ### Ranking Model
1. Pull user's X following list and LinkedIn connections
2. For each high-signal target, check for shared connections
-3. Rank mutuals by:
+3. Apply the `social-graph-ranker` model to score bridge value
+4. Rank mutuals by:
| Factor | Weight |
|--------|--------|
@@ -129,47 +104,20 @@ For each scored target, analyze the user's social graph to find the warmest path
| Industry alignment | 15% — same vertical = natural intro |
| Mutual's X handle / LinkedIn | 10% — identifiability for outreach |
### Weighted Bridge Ranking
Canonical rule: treat this as the canonical network-ranking stage for lead intelligence, and do not run a separate graph skill when this stage is enough.
```text
Use social-graph-ranker when the user wants the graph math itself,
the bridge ranking as a standalone report, or explicit decay-model tuning.
```
Given:
- `T` = target leads
- `M` = your mutuals / existing connections
- `d(m, t)` = shortest hop distance from mutual `m` to target `t`
- `w(t)` = target weight from signal scoring
Compute the base bridge score for each mutual:
Inside this skill, use the same weighted bridge model:
```text
B(m) = Σ_{t ∈ T} w(t) · λ^(d(m,t) - 1)
```
Where:
- `λ` is the decay factor, usually `0.5`
- a direct connection contributes full value
- each extra hop halves the contribution
For second-order reach, expand one level into the mutual's own network:
```text
B_ext(m) = B(m) + α · Σ_{m' ∈ N(m) \ M} Σ_{t ∈ T} w(t) · λ^(d(m',t))
```
Where:
- `N(m) \ M` is the set of people the mutual knows that you do not
- `α` is the second-order discount, usually `0.3`
Then rank by response-adjusted bridge value:
```text
R(m) = B_ext(m) · (1 + β · engagement(m))
```
Where:
- `engagement(m)` is a normalized responsiveness score
- `β` is the engagement bonus, usually `0.2`
Interpretation:
- Tier 1: high `R(m)` and direct bridge paths -> warm intro asks
- Tier 2: medium `R(m)` and one-hop bridge paths -> conditional intro asks
@@ -178,6 +126,8 @@ Interpretation:
### Output Format
If the user explicitly wants the ranking engine broken out, the math visualized, or the network scored outside the full lead workflow, run `social-graph-ranker` as a standalone pass first and feed the result back into this pipeline.
```
MUTUAL RANKING REPORT
=====================
@@ -333,8 +283,8 @@ Users should set these environment variables:
export X_BEARER_TOKEN="..."
export X_ACCESS_TOKEN="..."
export X_ACCESS_TOKEN_SECRET="..."
-export X_API_KEY="..."
-export X_API_SECRET="..."
+export X_CONSUMER_KEY="..."
+export X_CONSUMER_SECRET="..."
export EXA_API_KEY="..."
# Optional
@@ -364,3 +314,8 @@ Agent workflow:
Output: Ranked list with warm paths, voice profile summary, and channel-specific outreach drafts or drafts-in-app
```
## Related Skills
- `brand-voice` for canonical voice capture
- `connections-optimizer` for review-first network pruning and expansion before outreach

View File

@@ -0,0 +1,89 @@
---
name: manim-video
description: Build reusable Manim explainers for technical concepts, graphs, system diagrams, and product walkthroughs, then hand off to the wider ECC video stack if needed. Use when the user wants a clean animated explainer rather than a generic talking-head script.
origin: ECC
---
# Manim Video
Use Manim for technical explainers where motion, structure, and clarity matter more than photorealism.
## When to Activate
- the user wants a technical explainer animation
- the concept is a graph, workflow, architecture, metric progression, or system diagram
- the user wants a short product or launch explainer for X or a landing page
- the visual should feel precise instead of generically cinematic
## Tool Requirements
- `manim` CLI for scene rendering
- `ffmpeg` for post-processing if needed
- `video-editing` for final assembly or polish
- `remotion-video-creation` when the final package needs composited UI, captions, or additional motion layers
## Default Output
- short 16:9 MP4
- one thumbnail or poster frame
- storyboard plus scene plan
## Workflow
1. Define the core visual thesis in one sentence.
2. Break the concept into 3 to 6 scenes.
3. Decide what each scene proves.
4. Write the scene outline before writing Manim code.
5. Render the smallest working version first.
6. Tighten typography, spacing, color, and pacing after the render works.
7. Hand off to the wider video stack only if it adds value.
## Scene Planning Rules
- each scene should prove one thing
- avoid overstuffed diagrams
- prefer progressive reveal over full-screen clutter
- use motion to explain state change, not just to keep the screen busy
- title cards should be short and loaded with meaning
## Network Graph Default
For social-graph and network-optimization explainers:
- show the current graph before showing the optimized graph
- distinguish low-signal follow clutter from high-signal bridges
- highlight warm-path nodes and target clusters
- if useful, add a final scene showing the self-improvement lineage that informed the skill
## Render Conventions
- default to 16:9 landscape unless the user asks for vertical
- start with a low-quality smoke test render
- only push to higher quality after composition and timing are stable
- export one clean thumbnail frame that reads at social size
## Reusable Starter
Use [assets/network_graph_scene.py](assets/network_graph_scene.py) as a starting point for network-graph explainers.
Example smoke test:
```bash
manim -ql assets/network_graph_scene.py NetworkGraphExplainer
```
## Output Format
Return:
- core visual thesis
- storyboard
- scene outline
- render plan
- any follow-on polish recommendations
## Related Skills
- `video-editing` for final polish
- `remotion-video-creation` for motion-heavy post-processing or compositing
- `content-engine` when the animation is part of a broader launch

View File

@@ -0,0 +1,52 @@
from manim import DOWN, LEFT, RIGHT, UP, Circle, Create, FadeIn, FadeOut, Scene, Text, VGroup, CurvedArrow
class NetworkGraphExplainer(Scene):
    def construct(self):
        title = Text("Connections Optimizer", font_size=40).to_edge(UP)
        subtitle = Text("Prune low-signal follows. Strengthen warm paths.", font_size=20).next_to(title, DOWN)

        you = Circle(radius=0.45, color="#4F8EF7").shift(LEFT * 4 + DOWN * 0.5)
        you_label = Text("You", font_size=22).move_to(you.get_center())
        stale_a = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.6 + UP * 1.2)
        stale_b = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.2 + DOWN * 1.4)
        bridge = Circle(radius=0.38, color="#21A179").shift(RIGHT * 0.2 + UP * 0.2)
        target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.2 + UP * 0.7)
        new_target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.0 + DOWN * 1.4)

        stale_a_label = Text("stale", font_size=18).move_to(stale_a.get_center())
        stale_b_label = Text("noise", font_size=18).move_to(stale_b.get_center())
        bridge_label = Text("bridge", font_size=18).move_to(bridge.get_center())
        target_label = Text("target", font_size=18).move_to(target.get_center())
        new_target_label = Text("add", font_size=18).move_to(new_target.get_center())

        edge_stale_a = CurvedArrow(you.get_right(), stale_a.get_left(), angle=0.2, color="#7A7A7A")
        edge_stale_b = CurvedArrow(you.get_right(), stale_b.get_left(), angle=-0.2, color="#7A7A7A")
        edge_bridge = CurvedArrow(you.get_right(), bridge.get_left(), angle=0.0, color="#21A179")
        edge_target = CurvedArrow(bridge.get_right(), target.get_left(), angle=0.1, color="#21A179")
        edge_new_target = CurvedArrow(bridge.get_right(), new_target.get_left(), angle=-0.12, color="#21A179")

        self.play(FadeIn(title), FadeIn(subtitle))
        self.play(
            Create(you),
            FadeIn(you_label),
            Create(stale_a),
            Create(stale_b),
            Create(bridge),
            Create(target),
            FadeIn(stale_a_label),
            FadeIn(stale_b_label),
            FadeIn(bridge_label),
            FadeIn(target_label),
        )
        self.play(Create(edge_stale_a), Create(edge_stale_b), Create(edge_bridge), Create(edge_target))

        optimize = Text("Optimize the graph", font_size=24).to_edge(DOWN)
        self.play(FadeIn(optimize))
        self.play(FadeOut(stale_a), FadeOut(stale_b), FadeOut(stale_a_label), FadeOut(stale_b_label), FadeOut(edge_stale_a), FadeOut(edge_stale_b))
        self.play(Create(new_target), FadeIn(new_target_label), Create(edge_new_target))

        final_group = VGroup(you, you_label, bridge, bridge_label, target, target_label, new_target, new_target_label)
        self.play(final_group.animate.shift(UP * 0.1))
        self.wait(1)

View File

@@ -0,0 +1,154 @@
---
name: social-graph-ranker
description: Weighted social-graph ranking for warm intro discovery, bridge scoring, and network gap analysis across X and LinkedIn. Use when the user wants the reusable graph-ranking engine itself, not the broader outreach or network-maintenance workflow layered on top of it.
origin: ECC
---
# Social Graph Ranker
Canonical weighted graph-ranking layer for network-aware outreach.
Use this when the user needs to:
- rank existing mutuals or connections by intro value
- map warm paths to a target list
- measure bridge value across first- and second-order connections
- decide which targets deserve warm intros versus direct cold outreach
- understand the graph math independently from `lead-intelligence` or `connections-optimizer`
## When To Use This Standalone
Choose this skill when the user primarily wants the ranking engine:
- "who in my network is best positioned to introduce me?"
- "rank my mutuals by who can get me to these people"
- "map my graph against this ICP"
- "show me the bridge math"
Do not use this by itself when the user really wants:
- full lead generation and outbound sequencing -> use `lead-intelligence`
- pruning, rebalancing, and growing the network -> use `connections-optimizer`
## Inputs
Collect or infer:
- target people, companies, or ICP definition
- the user's current graph on X, LinkedIn, or both
- weighting priorities such as role, industry, geography, and responsiveness
- traversal depth and decay tolerance
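These inputs can be captured as a single request payload before the model runs; the field names below are illustrative assumptions, not a fixed schema:

```python
# illustrative input payload for a ranking run; names and values are assumptions
ranking_request = {
    "targets": [
        {"name": "t1", "weight": 1.0},   # weighted target set
        {"name": "t2", "weight": 0.6},
    ],
    "platforms": ["x", "linkedin"],      # where to pull the user's graph
    "weighting": {"role": 0.30, "industry": 0.25, "recency": 0.20},
    "max_depth": 2,                      # traversal depth
    "decay": 0.5,                        # decay tolerance (lambda)
}
```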
## Core Model
Given:
- `T` = weighted target set
- `M` = your current mutuals / direct connections
- `d(m, t)` = shortest hop distance from mutual `m` to target `t`
- `w(t)` = target weight from signal scoring
Base bridge score:
```text
B(m) = Σ_{t ∈ T} w(t) · λ^(d(m,t) - 1)
```
Where:
- `λ` is the decay factor, usually `0.5`
- a direct path contributes full value
- each extra hop halves the contribution
Second-order expansion:
```text
B_ext(m) = B(m) + α · Σ_{m' ∈ N(m) \ M} Σ_{t ∈ T} w(t) · λ^(d(m',t))
```
Where:
- `N(m) \ M` is the set of people the mutual knows that you do not
- `α` discounts second-order reach, usually `0.3`
Response-adjusted final ranking:
```text
R(m) = B_ext(m) · (1 + β · engagement(m))
```
Where:
- `engagement(m)` is normalized responsiveness or relationship strength
- `β` is the engagement bonus, usually `0.2`
Interpretation:
- Tier 1: high `R(m)` and direct bridge paths -> warm intro asks
- Tier 2: medium `R(m)` and one-hop bridge paths -> conditional intro asks
- Tier 3: low `R(m)` or no viable bridge -> direct outreach or follow-gap fill
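The three formulas above can be sketched directly with the stated defaults. This is a minimal illustration, not the skill's implementation: the graph data, distances, and names are invented, and targets with no known path are treated as contributing nothing.

```python
LAM, ALPHA, BETA = 0.5, 0.3, 0.2  # decay, second-order discount, engagement bonus

def base_bridge(m, targets, dist, w):
    # B(m) = sum_t w(t) * LAM^(d(m,t) - 1); direct hops contribute full value
    return sum(w[t] * LAM ** (dist[(m, t)] - 1) for t in targets if (m, t) in dist)

def extended_bridge(m, targets, dist, w, neighbors, known):
    # B_ext(m) = B(m) + ALPHA * sum over m' in N(m) \ M of w(t) * LAM^d(m',t)
    second = sum(
        w[t] * LAM ** dist[(m2, t)]
        for m2 in neighbors.get(m, ()) if m2 not in known
        for t in targets if (m2, t) in dist
    )
    return base_bridge(m, targets, dist, w) + ALPHA * second

def response_adjusted(m, targets, dist, w, neighbors, known, engagement):
    # R(m) = B_ext(m) * (1 + BETA * engagement(m))
    return extended_bridge(m, targets, dist, w, neighbors, known) * (
        1 + BETA * engagement.get(m, 0.0)
    )

# toy graph: two weighted targets, one mutual with one second-order contact
targets = ["t1", "t2"]
w = {"t1": 1.0, "t2": 0.6}
dist = {("alice", "t1"): 1, ("alice", "t2"): 2, ("carol", "t2"): 1}
neighbors = {"alice": ["carol"]}
known = {"alice"}          # carol is not already a direct connection
engagement = {"alice": 1.0}
```

With these toy numbers, `base_bridge("alice", ...)` is `1.0 + 0.6 * 0.5 = 1.3`, carol's second-order reach adds `0.3 * 0.3 = 0.09`, and full engagement lifts the final ranking score by 20%.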
## Scoring Signals
Weight targets before graph traversal with whatever matters for the current priority set:
- role or title alignment
- company or industry fit
- current activity and recency
- geographic relevance
- influence or reach
- likelihood of response
Weight mutuals after traversal with:
- number of weighted paths into the target set
- directness of those paths
- responsiveness or prior interaction history
- contextual fit for making the intro
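One way to fold the target-side criteria above into `w(t)` before traversal; the criterion weights here are illustrative defaults, not prescribed values:

```python
# hypothetical criterion weights summing to 1.0; retune per priority set
CRITERIA = {
    "role_fit": 0.30,
    "industry_fit": 0.25,
    "recency": 0.20,
    "geo_fit": 0.10,
    "reach": 0.10,
    "response_likelihood": 0.05,
}

def target_weight(signals):
    # signals maps each criterion to a 0..1 score; missing criteria score 0
    return sum(weight * signals.get(name, 0.0) for name, weight in CRITERIA.items())
```

A target scoring 1.0 on every criterion gets the maximum weight of 1.0; partial matches scale linearly.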
## Workflow
1. Build the weighted target set.
2. Pull the user's graph from X, LinkedIn, or both.
3. Compute direct bridge scores.
4. Expand second-order candidates for the highest-value mutuals.
5. Rank by `R(m)`.
6. Return:
- best warm intro asks
- conditional bridge paths
- graph gaps where no warm path exists
## Output Shape
```text
SOCIAL GRAPH RANKING
====================
Priority Set:
Platforms:
Decay Model:
Top Bridges
- mutual / connection
base_score:
extended_score:
best_targets:
path_summary:
recommended_action:
Conditional Paths
- mutual / connection
reason:
extra_hop_cost:
No Warm Path
- target
recommendation: direct outreach / fill graph gap
```
## Related Skills
- `lead-intelligence` uses this ranking model inside the broader target-discovery and outreach pipeline
- `connections-optimizer` uses the same bridge logic when deciding who to keep, prune, or add
- `brand-voice` should run before drafting any intro request or direct outreach
- `x-api` provides X graph access and optional execution paths

View File

@@ -46,25 +46,27 @@ tweets = resp.json()
### OAuth 1.0a (User Context)
-Required for: posting tweets, managing account, DMs.
+Required for: posting tweets, managing account, DMs, and any write flow.
```bash
# Environment setup — source before use
-export X_API_KEY="your-api-key"
-export X_API_SECRET="your-api-secret"
+export X_CONSUMER_KEY="your-consumer-key"
+export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
-export X_ACCESS_SECRET="your-access-secret"
+export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```
Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.
```python
import os
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(
-    os.environ["X_API_KEY"],
-    client_secret=os.environ["X_API_SECRET"],
+    os.environ["X_CONSUMER_KEY"],
+    client_secret=os.environ["X_CONSUMER_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
-    resource_owner_secret=os.environ["X_ACCESS_SECRET"],
+    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```
@@ -125,6 +127,21 @@ resp = requests.get(
)
```
### Pull Recent Original Posts for Voice Modeling
```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    },
)
voice_samples = resp.json()
```
### Get User by Username
```python
@@ -196,13 +213,18 @@ else:
## Integration with Content Engine
-Use `content-engine` skill to generate platform-native content, then post via X API:
-1. Generate content with content-engine (X platform format)
-2. Validate length (280 chars for single tweet)
-3. Post via X API using patterns above
-4. Track engagement via public_metrics
Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
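The approval gate in steps 4 through 6 can be sketched as a small guard; the function name and return shape are hypothetical, not part of the X API:

```python
MAX_TWEET = 280  # single-tweet limit used for the length check

def prepare_post(draft, approved=False):
    # step 4: validate length before anything else
    if len(draft) > MAX_TWEET:
        raise ValueError(f"draft is {len(draft)} chars; split into a thread")
    # step 5: default to returning the draft for review
    if not approved:
        return {"status": "awaiting_approval", "draft": draft}
    # step 6: only now hand off to the OAuth1 posting pattern shown earlier
    return {"status": "ready_to_post", "draft": draft}
```

A real flow would replace the `ready_to_post` branch with the authenticated POST, and read `public_metrics` afterward for step 7.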
## Related Skills
- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

View File

@@ -24,7 +24,7 @@ async function runTests() {
let store
try {
store = await import(pathToFileURL(storePath).href)
-} catch (err) {
+} catch (_err) {
console.log('\n[warn] Skipping: build .opencode first (cd .opencode && npm run build)\n')
process.exit(0)
}

View File

@@ -7,6 +7,7 @@ const fs = require('fs');
const os = require('os');
const path = require('path');
const { spawnSync } = require('child_process');
const TOML = require('@iarna/toml');
const repoRoot = path.join(__dirname, '..', '..');
const installScript = path.join(repoRoot, 'scripts', 'codex', 'install-global-git-hooks.sh');
@@ -93,29 +94,16 @@ if (os.platform() === 'win32') {
else failed++;
if (
-test('sync preserves baseline config and accepts the legacy context7 MCP section', () => {
+test('sync installs the missing Codex baseline and accepts the legacy context7 MCP section', () => {
const homeDir = createTempDir('codex-sync-home-');
const codexDir = path.join(homeDir, '.codex');
const configPath = path.join(codexDir, 'config.toml');
const agentsPath = path.join(codexDir, 'AGENTS.md');
const config = [
'approval_policy = "on-request"',
'sandbox_mode = "workspace-write"',
'web_search = "live"',
'persistent_instructions = ""',
'',
'[features]',
'multi_agent = true',
'',
'[profiles.strict]',
'approval_policy = "on-request"',
'sandbox_mode = "read-only"',
'web_search = "cached"',
'',
'[profiles.yolo]',
'approval_policy = "never"',
'sandbox_mode = "workspace-write"',
'web_search = "live"',
'[agents]',
'explorer = { description = "Read-only codebase explorer for gathering evidence before changes are proposed." }',
'',
'[mcp_servers.context7]',
'command = "npx"',
@@ -147,13 +135,63 @@ if (
assert.match(syncedAgents, /^# Codex Supplement \(From ECC \.codex\/AGENTS\.md\)/m);
const syncedConfig = fs.readFileSync(configPath, 'utf8');
assert.match(syncedConfig, /^multi_agent\s*=\s*true$/m);
assert.match(syncedConfig, /^\[profiles\.strict\]$/m);
assert.match(syncedConfig, /^\[profiles\.yolo\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.github\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.memory\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.sequential-thinking\]$/m);
assert.match(syncedConfig, /^\[mcp_servers\.context7\]$/m);
const parsedConfig = TOML.parse(syncedConfig);
assert.strictEqual(parsedConfig.approval_policy, 'on-request');
assert.strictEqual(parsedConfig.sandbox_mode, 'workspace-write');
assert.strictEqual(parsedConfig.web_search, 'live');
assert.ok(!Object.prototype.hasOwnProperty.call(parsedConfig, 'multi_agent'));
assert.ok(parsedConfig.features);
assert.strictEqual(parsedConfig.features.multi_agent, true);
assert.ok(parsedConfig.profiles);
assert.strictEqual(parsedConfig.profiles.strict.approval_policy, 'on-request');
assert.strictEqual(parsedConfig.profiles.yolo.approval_policy, 'never');
assert.ok(parsedConfig.agents);
assert.strictEqual(parsedConfig.agents.max_threads, 6);
assert.strictEqual(parsedConfig.agents.max_depth, 1);
assert.strictEqual(parsedConfig.agents.explorer.config_file, 'agents/explorer.toml');
assert.strictEqual(parsedConfig.agents.reviewer.config_file, 'agents/reviewer.toml');
assert.strictEqual(parsedConfig.agents.docs_researcher.config_file, 'agents/docs-researcher.toml');
assert.ok(parsedConfig.mcp_servers.exa);
assert.ok(parsedConfig.mcp_servers.github);
assert.ok(parsedConfig.mcp_servers.memory);
assert.ok(parsedConfig.mcp_servers['sequential-thinking']);
assert.ok(parsedConfig.mcp_servers.context7);
for (const roleFile of ['explorer.toml', 'reviewer.toml', 'docs-researcher.toml']) {
assert.ok(fs.existsSync(path.join(codexDir, 'agents', roleFile)));
}
} finally {
cleanup(homeDir);
}
})
)
passed++;
else failed++;
if (
test('sync adds parent-table keys when the target only declares an implicit parent table', () => {
const homeDir = createTempDir('codex-sync-implicit-parent-home-');
const codexDir = path.join(homeDir, '.codex');
const configPath = path.join(codexDir, 'config.toml');
const config = [
'persistent_instructions = ""',
'',
'[agents.explorer]',
'description = "Read-only codebase explorer for gathering evidence before changes are proposed."',
'',
].join('\n');
try {
fs.mkdirSync(codexDir, { recursive: true });
fs.writeFileSync(configPath, config);
const syncResult = runBash(syncScript, [], makeHermeticCodexEnv(homeDir, codexDir));
assert.strictEqual(syncResult.status, 0, `${syncResult.stdout}\n${syncResult.stderr}`);
const parsedConfig = TOML.parse(fs.readFileSync(configPath, 'utf8'));
assert.strictEqual(parsedConfig.agents.max_threads, 6);
assert.strictEqual(parsedConfig.agents.max_depth, 1);
assert.strictEqual(parsedConfig.agents.explorer.config_file, 'agents/explorer.toml');
} finally {
cleanup(homeDir);
}

View File

@@ -353,6 +353,45 @@ function runTests() {
}
})) passed++; else failed++;
if (test('resolves CLAUDE_PLUGIN_ROOT placeholders in installed claude hooks', () => {
const homeDir = createTempDir('install-apply-home-');
const projectDir = createTempDir('install-apply-project-');
try {
const result = run(['--profile', 'core'], { cwd: projectDir, homeDir });
assert.strictEqual(result.code, 0, result.stderr);
const claudeRoot = path.join(homeDir, '.claude');
const settings = readJson(path.join(claudeRoot, 'settings.json'));
const installedHooks = readJson(path.join(claudeRoot, 'hooks', 'hooks.json'));
const autoTmuxEntry = settings.hooks.PreToolUse.find(entry => entry.id === 'pre:bash:auto-tmux-dev');
assert.ok(autoTmuxEntry, 'settings.json should include the auto tmux hook');
assert.ok(
autoTmuxEntry.hooks[0].command.includes(path.join(claudeRoot, 'scripts', 'hooks', 'auto-tmux-dev.js')),
'settings.json should use the installed Claude root for hook commands'
);
assert.ok(
!autoTmuxEntry.hooks[0].command.includes('${CLAUDE_PLUGIN_ROOT}'),
'settings.json should not retain CLAUDE_PLUGIN_ROOT placeholders after install'
);
const installedAutoTmuxEntry = installedHooks.hooks.PreToolUse.find(entry => entry.id === 'pre:bash:auto-tmux-dev');
assert.ok(installedAutoTmuxEntry, 'hooks/hooks.json should include the auto tmux hook');
assert.ok(
installedAutoTmuxEntry.hooks[0].command.includes(path.join(claudeRoot, 'scripts', 'hooks', 'auto-tmux-dev.js')),
'hooks/hooks.json should use the installed Claude root for hook commands'
);
assert.ok(
!installedAutoTmuxEntry.hooks[0].command.includes('${CLAUDE_PLUGIN_ROOT}'),
'hooks/hooks.json should not retain CLAUDE_PLUGIN_ROOT placeholders after install'
);
} finally {
cleanup(homeDir);
cleanup(projectDir);
}
})) passed++; else failed++;
if (test('preserves existing settings fields and hook entries when merging hooks', () => {
const homeDir = createTempDir('install-apply-home-');
const projectDir = createTempDir('install-apply-project-');

View File

@@ -26,6 +26,7 @@ function runHook(mode, payload, homeDir) {
env: {
...process.env,
HOME: homeDir,
USERPROFILE: homeDir,
},
});
}

View File

@@ -74,6 +74,15 @@ function runTests() {
assert.ok(!source.includes('run_or_echo cp -R "$skill_dir" "$dest"'), 'skill sync cp should be removed');
})) passed++; else failed++;
if (test('sync script avoids GNU-only grep -P parsing', () => {
assert.ok(!source.includes('grep -oP'), 'sync-ecc-to-codex.sh should remain portable across BSD and GNU environments');
})) passed++; else failed++;
if (test('extract_context7_key uses a portable parser', () => {
assert.ok(source.includes('extract_context7_key() {'), 'Expected extract_context7_key helper');
assert.ok(source.includes('node - "$file"'), 'extract_context7_key should use Node-based parsing');
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}

View File

@@ -29,7 +29,7 @@ function runInstall(options = {}) {
},
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 20000,
timeout: 60000,
});
}
@@ -43,7 +43,7 @@ function runUninstall(options = {}) {
encoding: 'utf8',
input: options.input || 'y\n',
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 20000,
timeout: 60000,
});
}