Mirror of https://github.com/affaan-m/everything-claude-code.git, synced 2026-04-02 23:23:31 +08:00.
.agents/skills/brand-voice/SKILL.md (new file, 97 lines)
@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---

# Brand Voice

Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.

## When to Activate

- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry

## Source Priority

Use the strongest real source set available, in this order:

1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy

Do not use generic platform exemplars as source material.

## Collection Workflow

1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
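Steps 1 and 2 of the collection workflow can be mechanized when samples come from an API pull. A minimal sketch, assuming posts arrive as dicts with `text` and `created_at` keys (the sample shape and field names are assumptions, not repo code):

```python
def select_samples(posts: list[dict], limit: int = 20, minimum: int = 5) -> list[dict]:
    """Keep the most recent posts, capped at `limit`, flagging thin source sets."""
    # Prefer recent material over old material (ISO dates sort lexicographically)
    recent_first = sorted(posts, key=lambda p: p["created_at"], reverse=True)
    samples = recent_first[:limit]
    if len(samples) < minimum:
        print(f"only {len(samples)} samples; treat profile confidence as low")
    return samples

posts = [
    {"text": "shipped the install target", "created_at": "2026-04-01"},
    {"text": "notes on hooks", "created_at": "2026-03-20"},
]
samples = select_samples(posts)
```

The cap and floor mirror the "5 to 20 representative samples" guidance above; anything below the floor should lower the profile's stated confidence rather than block the workflow.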

## What to Extract

- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does

## Output Contract

Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).

Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults

If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:

- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over

## Hard Bans

Delete and rewrite any of these:

- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals

## Persistence Rules

- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.

## Downstream Use

Use this skill before or inside:

- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email

If another skill already has a partial voice capture section, this skill is the canonical source of truth.
@@ -0,0 +1,55 @@
# Voice Profile Schema

Use this exact structure when building a reusable voice profile:

```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:

Source Set
- source 1
- source 2
- source 3

Rhythm
- short note on sentence length, pacing, and fragmentation

Compression
- how dense or explanatory the writing is

Capitalization
- conventional, mixed, or situational

Parentheticals
- how they are used and how they are not used

Question Use
- rare, frequent, rhetorical, direct, or mostly absent

Claim Style
- how claims are framed, supported, and sharpened

Preferred Moves
- concrete moves the author does use

Banned Moves
- specific patterns the author does not use

CTA Rules
- how, when, or whether to close with asks

Channel Notes
- X:
- LinkedIn:
- Email:
```

Guidelines:

- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
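Because the schema is plain text, downstream tooling can assemble the block mechanically instead of regenerating it by hand. A minimal sketch, assuming a hypothetical `VoiceProfile` helper (the dataclass and its field names are illustrative, not part of the repo):

```python
from dataclasses import dataclass, field

@dataclass
class VoiceProfile:
    # Field names mirror the schema sections; this helper is hypothetical.
    author: str
    goal: str
    confidence: str
    sources: list = field(default_factory=list)
    rhythm: str = ""
    compression: str = ""

    def render(self) -> str:
        """Render the profile in the exact header-plus-bullets schema layout."""
        lines = [
            "VOICE PROFILE",
            "=" * 13,
            f"Author: {self.author}",
            f"Goal: {self.goal}",
            f"Confidence: {self.confidence}",
            "",
            "Source Set",
        ]
        lines += [f"- {s}" for s in self.sources]
        lines += ["", "Rhythm", f"- {self.rhythm}"]
        lines += ["", "Compression", f"- {self.compression}"]
        return "\n".join(lines)

profile = VoiceProfile(
    "Affaan", "launch posts", "medium",
    sources=["recent X threads"],
    rhythm="short, clipped", compression="dense",
)
print(profile.render())
```

Rendering from a structured object keeps the block "structured and short enough to reuse in session context," and makes it trivial to diff two profiles when the source set changes.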
@@ -36,27 +36,24 @@ Before drafting, identify the source set:

- prior posts from the same author

If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.

## Voice Capture Workflow

Run `brand-voice` first when:

- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive

If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.

Do not start drafting until the voice profile is clear enough to enforce.
@@ -148,3 +145,9 @@ Before delivering:

- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved

## Related Skills

- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output
@@ -37,6 +37,8 @@ Use `content-engine` first if the source still needs voice shaping.

### Step 2: Capture the Voice Fingerprint

Run `brand-voice` first if the source voice is not already captured in the current session.

Before adapting, note:
- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions

@@ -110,5 +112,6 @@ Before delivering:

## Related Skills

- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows
@@ -5,184 +5,161 @@

## Overview

This skill provides a comprehensive guide to the development patterns, coding conventions, and key workflows used in the `everything-claude-code` repository. The project is written in JavaScript (no framework detected) and focuses on modular skill development, extensible command workflows, and robust automation for agents and integrations. This guide will help contributors maintain consistency and efficiency across code, documentation, and automation.

## Coding Conventions

ECC follows consistent JavaScript coding and repository organization conventions:

**File Naming**
- Use `camelCase` for JavaScript files and directories.
- Example: `installTargets.js`, `mySkillHandler.js`

**Import Style**
- Use relative imports for modules within the repository.
- Example:
  ```js
  import myUtil from '../utils/myUtil.js';
  ```

**Export Style**
- Mixed: both default and named exports are used.
- Example (default export):
  ```js
  export default function installTarget() { ... }
  ```
- Example (named export):
  ```js
  export function registerSkill() { ... }
  ```

**Commit Messages**
- Use [Conventional Commits](https://www.conventionalcommits.org/) with these prefixes: `fix`, `feat`, `docs`, `chore`.
- Example: `feat: add support for new install target (CodeBuddy)`

## Workflows

### Add or Update a Skill

**Trigger:** When introducing a new skill or updating an existing skill's capabilities or documentation
**Command:** `/add-skill`

1. Create or update `SKILL.md` in `skills/<skill-name>/` or `.agents/skills/<skill-name>/`.
2. Optionally add or update related reference files (e.g., assets, rules, schemas) in the skill directory.
3. Update `manifests/install-modules.json` and/or `manifests/install-components.json` to register the new skill.
4. Update documentation files: `AGENTS.md`, `README.md`, `README.zh-CN.md`, `docs/zh-CN/AGENTS.md`, and `docs/zh-CN/README.md`.
5. Optionally add or update tests for the skill.

**Example:**
```bash
# Add a new skill
mkdir skills/myNewSkill
touch skills/myNewSkill/SKILL.md
# Edit SKILL.md with documentation
# Update manifests
vim manifests/install-modules.json
# Document the skill
vim README.md AGENTS.md
```

---

### Add or Update a Command Workflow

**Trigger:** When introducing or improving a command-driven workflow (e.g., PRP, code review, refactoring)
**Command:** `/add-command`

1. Create or update a markdown file in `commands/` (e.g., `commands/prp-*.md`).
2. If relevant, update or add supporting files in `.claude/commands/`.
3. Optionally update documentation or references to the command in `README.md` or `AGENTS.md`.

**Example:**
```bash
# Add a new PRP command workflow
touch commands/prp-enhanced.md
# Optionally update supporting files
vim .claude/commands/prp-enhanced.md
```

---

### Refactor Skill or Core Logic

**Trigger:** When improving, reorganizing, or merging existing skill or core logic for maintainability or performance
**Command:** `/refactor-skill`

1. Edit multiple `SKILL.md` files in `skills/` or `.agents/skills/`.
2. Update documentation: `AGENTS.md`, `README.md`, `README.zh-CN.md`, `docs/zh-CN/AGENTS.md`, `docs/zh-CN/README.md`.
3. Update `manifests/install-modules.json` or related manifests as needed.
4. Remove, merge, or rename legacy command or skill files.
5. Optionally update or add tests for the refactored logic.

**Example:**
```bash
# Refactor multiple skills
vim skills/skillA/SKILL.md skills/skillB/SKILL.md
# Update manifests and docs
vim manifests/install-modules.json README.md
```

---

### Add or Update Install Target

**Trigger:** When supporting a new external tool or environment as an install target
**Command:** `/add-install-target`

1. Create or update install scripts in a dedicated directory (e.g., `.codebuddy/`, `.gemini/`).
2. Add or update install target logic in `scripts/lib/install-targets/`.
3. Update `manifests/install-modules.json` and `schemas/ecc-install-config.schema.json`.
4. Update or add tests for the new install target in `tests/lib/install-targets.test.js`.
5. Update registry logic if necessary.

**Example:**
```bash
# Add a Gemini install target
mkdir .gemini
touch .gemini/install.sh
vim scripts/lib/install-targets/gemini.js
# Update schemas and manifests
vim schemas/ecc-install-config.schema.json manifests/install-modules.json
```

---

### Update or Add CI/CD Workflow

**Trigger:** When improving, fixing, or adding CI/CD automation (e.g., dependency updates, release validation)
**Command:** `/update-ci`

1. Edit or add YAML files in `.github/workflows/`.
2. Optionally update related scripts or lockfiles (`yarn.lock`, `package-lock.json`).
3. Optionally update hooks or validation scripts.

**Example:**
```bash
# Add a new CI workflow
touch .github/workflows/dependency-check.yml
# Update lockfiles if needed
yarn install
```

## Testing Patterns

- **Test Files:** Use the `*.test.js` naming pattern.
  - Example: `installTargets.test.js`
- **Framework:** Not explicitly specified; use standard Node.js or your preferred JS testing framework.
- **Placement:** Tests are typically placed alongside implementation or in a `tests/` directory.

**Example Test File:**
```js
// tests/lib/install-targets.test.js
import { installTarget } from '../../scripts/lib/install-targets/gemini.js';

test('should install Gemini target', () => {
  expect(installTarget()).toBe(true);
});
```

## Commands

| Command | Purpose |
|---------------------|--------------------------------------------------------------|
| /add-skill | Add or update a skill, including documentation and manifests |
| /add-command | Add or update a command workflow |
| /refactor-skill | Refactor skill or core logic and update documentation |
| /add-install-target | Add or update an install target integration |
| /update-ci | Add or update CI/CD workflow files |

@@ -19,7 +19,7 @@ Programmatic interaction with X (Twitter) for posting, reading, searching, and a

## Authentication

### OAuth 2.0 Bearer Token (App-Only)

Best for: read-heavy operations, search, public data.

@@ -46,25 +46,27 @@ tweets = resp.json()

### OAuth 1.0a (User Context)

Required for: posting tweets, managing account, DMs, and any write flow.

```bash
# Environment setup: source before use
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```

Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.
```python
import os
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    os.environ["X_CONSUMER_KEY"],
    client_secret=os.environ["X_CONSUMER_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```
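The legacy-alias note above can be handled defensively when wiring credentials. A minimal sketch; the preferred-to-legacy mapping follows the names listed in this section, and the `resolve_credential` helper itself is an assumption, not repo code:

```python
import os

def resolve_credential(preferred: str, legacy: str) -> str:
    """Return the preferred env var, falling back to a legacy alias."""
    value = os.environ.get(preferred) or os.environ.get(legacy)
    if value is None:
        raise KeyError(f"set {preferred} (or legacy alias {legacy})")
    return value

# Demo value; in real use these come from your shell environment.
os.environ.setdefault("X_API_KEY", "legacy-key")

# Preferred name first, legacy alias as fallback
consumer_key = resolve_credential("X_CONSUMER_KEY", "X_API_KEY")
```

Resolving through one helper keeps older setups working while new documentation and flows standardize on the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names.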

@@ -92,7 +94,6 @@ def post_thread(oauth, tweets: list[str]) -> list[str]:
        if reply_to:
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id

@@ -126,6 +127,21 @@ resp = requests.get(

### Pull Recent Original Posts for Voice Modeling

```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    },
)
voice_samples = resp.json()
```

### Get User by Username
@@ -155,17 +171,12 @@ resp = oauth.post(

## Rate Limits

X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:

- Check the current X developer docs before hardcoding assumptions
- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
- Back off automatically instead of relying on static tables in code
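The header-driven backoff described in the bullets above can be sketched as follows; this is a minimal illustration under stated assumptions (the endpoint, cushion, and retry policy are not prescribed by the skill):

```python
import time

import requests

def get_with_backoff(url, headers, params=None):
    """Retry on 429 using the x-rate-limit-reset header instead of a static table."""
    while True:
        resp = requests.get(url, headers=headers, params=params)
        if resp.status_code != 429:
            return resp
        # Sleep until the window resets (plus a small cushion), per the reset header
        reset_at = int(resp.headers.get("x-rate-limit-reset", time.time() + 60))
        time.sleep(max(reset_at - time.time(), 0) + 1)
```

Reading the reset header at runtime keeps the client correct even when X changes per-endpoint limits, which a hardcoded table cannot guarantee.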
@@ -202,13 +213,18 @@ else:

## Integration with Content Engine

Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:

1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
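The "validate length and thread structure" step can be enforced before anything is posted. A minimal sketch; the 280-character budget is the standard single-tweet limit, and the `validate_thread` helper is illustrative, not repo code:

```python
TWEET_LIMIT = 280  # standard single-tweet character budget

def validate_thread(tweets: list[str]) -> list[str]:
    """Raise if any tweet in the draft thread exceeds the single-tweet limit."""
    for i, text in enumerate(tweets, start=1):
        if len(text) > TWEET_LIMIT:
            raise ValueError(f"tweet {i} is {len(text)} chars (limit {TWEET_LIMIT})")
    return tweets

draft = [
    "Shipping the new install target today.",
    "Docs and manifests updated.",
]
validate_thread(draft)
```

Failing loudly here keeps the approval step meaningful: the user reviews a draft that is already guaranteed to post cleanly, rather than one that will be silently truncated or rejected by the API.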

## Related Skills

- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach
@@ -10,16 +10,16 @@ Use this workflow when working on **add-or-update-skill** in `everything-claude-

## Goal

Adds a new skill or updates an existing skill in the system, including documentation and registration in manifests.

## Common Files

- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `manifests/install-modules.json`
- `manifests/install-components.json`
- `AGENTS.md`
- `README.md`
- `README.zh-CN.md`
- `docs/zh-CN/AGENTS.md`

## Suggested Sequence

@@ -30,11 +30,11 @@ Adds a new skill or updates an existing skill for an agentic workflow, including

## Typical Commit Signals

- Create or update SKILL.md in skills/<skill-name>/ or .agents/skills/<skill-name>/
- Optionally add or update related reference files (e.g., assets, rules, schemas) in the skill directory
- Update manifests/install-modules.json and/or manifests/install-components.json to register the new skill
- Update AGENTS.md, README.md, README.zh-CN.md, docs/zh-CN/AGENTS.md, and docs/zh-CN/README.md to document the new skill
- Optionally add or update tests for the skill

## Notes
@@ -18,6 +18,7 @@ Standard feature implementation workflow

- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`
- `**/api/**`

## Suggested Sequence
@@ -2,7 +2,7 @@
  "version": "1.3",
  "schemaVersion": "1.0",
  "generatedBy": "ecc-tools",
  "generatedAt": "2026-04-02T13:44:44.593Z",
  "repo": "https://github.com/affaan-m/everything-claude-code",
  "profiles": {
    "requested": "full",

@@ -10,5 +10,5 @@
    "javascript"
  ],
  "suggestedBy": "ecc-tools-repo-analysis",
  "createdAt": "2026-04-02T13:45:31.240Z"
}
@@ -18,4 +18,4 @@ Use this when the task is documentation-heavy, source-sensitive, or requires bro

- Primary language: JavaScript
- Framework: Not detected
- Workflows detected: 7
@@ -4,7 +4,7 @@ Generated by ECC Tools from repository history. Review before treating it as a h

## Commit Workflow

- Prefer `conventional` commit messaging with prefixes such as fix, feat, docs, chore.
- Keep new changes aligned with the existing pull-request and review flow already present in the repo.

## Architecture

@@ -26,7 +26,7 @@ Generated by ECC Tools from repository history. Review before treating it as a h

- feature-development: Standard feature implementation workflow
- refactoring: Code refactoring and cleanup workflow
- add-or-update-skill: Adds a new skill or updates an existing skill in the system, including documentation and registration in manifests.

## Review Reminder
@@ -5,184 +5,161 @@

## Overview

This skill documents the core development patterns, coding conventions, and agentic workflows used in the `everything-claude-code` (ECC) repository. The project is written in JavaScript (no framework detected) and implements modular, agent-driven automation and installable skills. This guide covers how to contribute new skills, commands, install targets, agent definitions, and more, following the repository's conventions and workflows.
This skill provides a comprehensive guide to the development patterns, coding conventions, and key workflows used in the `everything-claude-code` repository. The project is written in JavaScript (no framework detected) and focuses on modular skill development, extensible command workflows, and robust automation for agents and integrations. This guide will help contributors maintain consistency and efficiency across code, documentation, and automation.

## Coding Conventions

ECC follows consistent JavaScript coding and repository organization conventions:
**File Naming**
- Use `camelCase` for JavaScript files and directories.
- Example: `installTargets.js`, `mySkillHandler.js`

### File Naming
**Import Style**
- Use relative imports for modules within the repository.
- Example:
```js
import myUtil from '../utils/myUtil.js';
```

- Use **camelCase** for JavaScript files and modules.
- Example: `myModule.js`, `installTarget.js`
**Export Style**
- Mixed: Both default and named exports are used.
- Example (default export):
```js
export default function installTarget() { ... }
```
- Example (named export):
```js
export function registerSkill() { ... }
```

### Import Style

- Use **relative imports** for internal modules.
```js
// Good
const utils = require('./utils');
import { doThing } from '../lib/doThing.js';
```

### Export Style

- Both **CommonJS** and **ES module** exports are used, depending on context.
```js
// CommonJS
module.exports = function doSomething() { ... };

// ES Module
export function doSomethingElse() { ... }
```

### Commit Messages

- Prefix with `fix:`, `feat:`, `docs:`, or `chore:`
- Keep messages concise (average ~56 characters)
```
feat: add support for new install target
fix: resolve agent prompt parsing bug
```
**Commit Messages**
- Use [Conventional Commits](https://www.conventionalcommits.org/) with these prefixes: `fix`, `feat`, `docs`, `chore`.
- Example: `feat: add support for new install target (CodeBuddy)`

## Workflows

### Add or Update a Skill

**Trigger:** When introducing or updating a skill for agentic workflows
**Trigger:** When introducing a new skill or updating an existing skill's capabilities or documentation
**Command:** `/add-skill`

1. Create or update a `SKILL.md` file under `skills/{skill-name}/` or `.agents/skills/{skill-name}/`.
2. Optionally update `AGENTS.md`, `README.md`, and localized docs (e.g., `README.zh-CN.md`).
3. Update `manifests/install-modules.json` and/or `install-components.json` if the skill is installable.
4. Optionally add or update related agent markdown files.
5. Optionally update tests or scripts if the skill introduces new hooks or behaviors.
1. Create or update `SKILL.md` in `skills/<skill-name>/` or `.agents/skills/<skill-name>/`.
2. Optionally add or update related reference files (e.g., assets, rules, schemas) in the skill directory.
3. Update `manifests/install-modules.json` and/or `manifests/install-components.json` to register the new skill.
4. Update documentation files: `AGENTS.md`, `README.md`, `README.zh-CN.md`, `docs/zh-CN/AGENTS.md`, and `docs/zh-CN/README.md`.
5. Optionally add or update tests for the skill.

**Example:**
```shell
mkdir -p skills/myNewSkill
```
```bash
# Add a new skill
mkdir skills/myNewSkill
touch skills/myNewSkill/SKILL.md
# Edit SKILL.md with documentation
# Update manifests
vim manifests/install-modules.json
# Document the skill
vim README.md AGENTS.md
```

---

### Add or Update a Command Workflow

**Trigger:** When adding or updating a repeatable workflow command
**Trigger:** When introducing or improving a command-driven workflow (e.g., PRP, code review, refactoring)
**Command:** `/add-command`

1. Create or update a markdown file in `commands/` (e.g., `prp-*.md`, `gan-*.md`, `santa-loop.md`).
2. Document workflow phases, usage, and outputs in the file.
3. Optionally update related skills or agent definitions.
4. Optionally add shell scripts or orchestrators if automation is needed.
5. Optionally update `AGENTS.md` or `README.md` to reference the new command.
1. Create or update a markdown file in `commands/` (e.g., `commands/prp-*.md`).
2. If relevant, update or add supporting files in `.claude/commands/`.
3. Optionally update documentation or references to the command in `README.md` or `AGENTS.md`.

**Example:**
```shell
touch commands/prp-myworkflow.md
# Document the workflow steps in markdown
```
```bash
# Add a new PRP command workflow
touch commands/prp-enhanced.md
# Optionally update supporting files
vim .claude/commands/prp-enhanced.md
```

---

### Add or Update an Install Target
### Refactor Skill or Core Logic
**Trigger:** When improving, reorganizing, or merging existing skill or core logic for maintainability or performance
**Command:** `/refactor-skill`

**Trigger:** When supporting a new IDE/platform or updating install logic
1. Edit multiple `SKILL.md` files in `skills/` or `.agents/skills/`.
2. Update documentation: `AGENTS.md`, `README.md`, `README.zh-CN.md`, `docs/zh-CN/AGENTS.md`, `docs/zh-CN/README.md`.
3. Update `manifests/install-modules.json` or related manifests as needed.
4. Remove, merge, or rename legacy command or skill files.
5. Optionally update or add tests for the refactored logic.

**Example:**
```bash
# Refactor multiple skills
vim skills/skillA/SKILL.md skills/skillB/SKILL.md
# Update manifests and docs
vim manifests/install-modules.json README.md
```

---

### Add or Update Install Target
**Trigger:** When supporting a new external tool or environment as an install target
**Command:** `/add-install-target`

1. Add or update install scripts (`install.sh`, `install.js`, `uninstall.sh`, `uninstall.js`) in a new or existing directory (e.g., `.codebuddy/`).
2. Update `manifests/install-modules.json` and `schemas/ecc-install-config.schema.json`.
3. Update `scripts/lib/install-manifests.js` and `scripts/lib/install-targets/{target}.js`.
4. Add or update tests for install targets.
5. Optionally update `README.md` or `AGENTS.md`.
1. Create or update install scripts in a dedicated directory (e.g., `.codebuddy/`, `.gemini/`).
2. Add or update install target logic in `scripts/lib/install-targets/`.
3. Update `manifests/install-modules.json` and `schemas/ecc-install-config.schema.json`.
4. Update or add tests for the new install target in `tests/lib/install-targets.test.js`.
5. Update registry logic if necessary.

**Example:**
```shell
mkdir -p .codebuddy
touch .codebuddy/install.sh
# Implement install logic
```
```bash
# Add a Gemini install target
mkdir .gemini
touch .gemini/install.sh
vim scripts/lib/install-targets/gemini.js
# Update schemas and manifests
vim schemas/ecc-install-config.schema.json manifests/install-modules.json
```

---

### Update Hooks and Hook Tests
### Update or Add CI/CD Workflow
**Trigger:** When improving, fixing, or adding CI/CD automation (e.g., dependency updates, release validation)
**Command:** `/update-ci`

**Trigger:** When refactoring, fixing, or extending system hooks
**Command:** `/update-hook`

1. Edit `hooks/hooks.json` to change configuration or add/remove hooks.
2. Edit or add `scripts/hooks/*.js` to implement hook logic.
3. Edit or add `tests/hooks/*.test.js` to cover new or changed behaviors.
4. Optionally update related scripts or documentation.
1. Edit or add YAML files in `.github/workflows/`.
2. Optionally update related scripts or lockfiles (`yarn.lock`, `package-lock.json`).
3. Optionally update hooks or validation scripts.

**Example:**
```json
// hooks/hooks.json
{
  "pre-commit": ["format", "typecheck"]
}
```

---

### Dependency Bump via Dependabot

**Trigger:** When updating dependencies for security or features
**Command:** `/bump-dependency`

1. Update dependency version in `package.json`, `yarn.lock`, or workflow YAML files.
2. Commit with a standardized message (often by dependabot).
3. Optionally update related documentation or changelogs.

**Example:**
```json
// package.json
"dependencies": {
  "some-lib": "^2.0.0"
}
```

---

### Add or Update Agent Definition

**Trigger:** When adding or modifying agent definitions/prompts
**Command:** `/add-agent`

1. Add or update agent markdown files (`agents/*.md`).
2. Add or update prompt files (`.opencode/prompts/agents/*.txt`).
3. Update `.opencode/opencode.json` to register new agents.
4. Update `AGENTS.md` to document new agents.
5. Optionally update related skills or orchestrators.

**Example:**
```shell
touch agents/myAgent.md
touch .opencode/prompts/agents/myAgent.txt
```
```bash
# Add a new CI workflow
touch .github/workflows/dependency-check.yml
# Update lockfiles if needed
yarn install
```

## Testing Patterns

- Test files follow the pattern `*.test.js`.
- Testing framework is **unknown** (not detected), but test files are placed alongside code or in `tests/` directories.
- Example test file:
```js
// tests/hooks/formatHook.test.js
const formatHook = require('../../scripts/hooks/formatHook');
test('should format code correctly', () => {
  // test implementation
});
```
- **Test Files:** Use the `*.test.js` naming pattern.
- Example: `installTargets.test.js`
- **Framework:** Not explicitly specified; use standard Node.js or your preferred JS testing framework.
- **Placement:** Tests are typically placed alongside implementation or in a `tests/` directory.

**Example Test File:**
```js
// tests/lib/install-targets.test.js
import { installTarget } from '../../scripts/lib/install-targets/gemini.js';

test('should install Gemini target', () => {
  expect(installTarget()).toBe(true);
});
```

## Commands

| Command | Purpose |
|--------------------|----------------------------------------------------------------|
| /add-skill | Add or update a skill and its documentation |
| /add-command | Add or update a workflow command |
| /add-install-target| Add or update an install target (IDE/platform/environment) |
| /update-hook | Refactor, fix, or extend system hooks and their tests |
| /bump-dependency | Update dependency versions in package or workflow files |
| /add-agent | Add or update agent definitions and prompts |

| Command | Purpose |
|---------------------|--------------------------------------------------------------|
| /add-skill | Add or update a skill, including documentation and manifests |
| /add-command | Add or update a command workflow |
| /refactor-skill | Refactor skill or core logic and update documentation |
| /add-install-target | Add or update an install target integration |
| /update-ci | Add or update CI/CD workflow files |

@@ -11,5 +11,5 @@
    ".claude/commands/refactoring.md",
    ".claude/commands/add-or-update-skill.md"
  ],
  "updatedAt": "2026-04-01T23:05:46.781Z"
  "updatedAt": "2026-04-02T13:44:44.593Z"
}
@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** providing 36 specialized agents, 147 skills, 68 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 36 specialized agents, 151 skills, 68 commands, and automated hook workflows for software development.

**Version:** 1.9.0

@@ -146,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat

```
agents/ — 36 specialized subagents
skills/ — 147 workflow skills and domain knowledge
skills/ — 151 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)

@@ -225,7 +225,7 @@ For manual install instructions see the README in the `rules/` folder. When copy
/plugin list everything-claude-code@everything-claude-code
```

**That's it!** You now have access to 36 agents, 147 skills, and 68 legacy command shims.
**That's it!** You now have access to 36 agents, 151 skills, and 68 legacy command shims.

### Multi-model commands require additional setup

@@ -1120,7 +1120,7 @@ The configuration is automatically detected from `.opencode/opencode.json`.
|---------|-------------|----------|--------|
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 147 skills | PASS: 37 skills | **Claude Code leads** |
| Skills | PASS: 151 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1229,7 +1229,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
|---------|------------|------------|-----------|----------|
| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 147 | Shared | 10 (native format) | 37 |
| **Skills** | 151 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |

@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/
/plugin list everything-claude-code@everything-claude-code
```

**完成!** 你现在可以使用 36 个代理、147 个技能和 68 个命令。
**完成!** 你现在可以使用 36 个代理、151 个技能和 68 个命令。

### multi-* 命令需要额外配置

@@ -36,6 +36,7 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- continue one-by-one audit of overlapping or low-signal skill content
- move repo guidance and contribution flow to skills-first, leaving commands only as explicit compatibility shims
- add operator skills that wrap connected surfaces instead of exposing only raw APIs or disconnected primitives
- land the canonical voice system, network-optimization lane, and reusable Manim explainer lane
- Security:
  - keep dependency posture clean
  - preserve self-contained hook and MCP behavior
@@ -61,6 +62,8 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa
- `#844` ui-demo skill
- `#1110` install-time Claude hook root resolution
- `#1106` portable Codex Context7 key extraction
- `#1107` Codex baseline merge and sample agent-role sync
- `#1119` stale CI/lint cleanup that still contained safe low-risk fixes
- Port or rebuild inside ECC after full audit:
  - `#894` Jira integration
  - `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces
@@ -102,3 +105,9 @@ Keep this file detailed for only the current sprint, blockers, and next actions.
- 2026-04-01: Direct-ported the real fix from the unresolved hook-path PR lane into the active installer. Claude installs now replace `${CLAUDE_PLUGIN_ROOT}` with the concrete install root in both `settings.json` and the copied `hooks/hooks.json`, which keeps PreToolUse/PostToolUse hooks working outside plugin-managed env injection.
- 2026-04-01: Replaced the GNU-only `grep -P` parser in `scripts/sync-ecc-to-codex.sh` with a portable Node parser for Context7 key extraction. Added source-level regression coverage so BSD/macOS syncs do not drift back to non-portable parsing.
- 2026-04-01: Targeted regression suite after the direct ports is green: `tests/scripts/install-apply.test.js`, `tests/scripts/sync-ecc-to-codex.test.js`, and `tests/scripts/codex-hooks.test.js`.
- 2026-04-01: Ported the useful core of `#1107` directly into `main` as an add-only Codex baseline merge. `scripts/sync-ecc-to-codex.sh` now fills missing non-MCP defaults from `.codex/config.toml`, syncs sample agent role files into `~/.codex/agents`, and preserves user config instead of replacing it. Added regression coverage for sparse configs and implicit parent tables.
- 2026-04-01: Ported the safe low-risk cleanup from `#1119` directly into `main` instead of keeping an obsolete CI PR open. This included `.mjs` eslint handling, stricter null checks, Windows home-dir coverage in bash-log tests, and longer Trae shell-test timeouts.
- 2026-04-01: Added `brand-voice` as the canonical source-derived writing-style system and wired the content lane to treat it as the shared voice source of truth instead of duplicating partial style heuristics across skills.
- 2026-04-01: Added `connections-optimizer` as the review-first social-graph reorganization workflow for X and LinkedIn, with explicit pruning modes, browser fallback expectations, and Apple Mail drafting guidance.
- 2026-04-01: Added `manim-video` as the reusable technical explainer lane and seeded it with a starter network-graph scene so launch and systems animations do not depend on one-off scratch scripts.
- 2026-04-02: Re-extracted `social-graph-ranker` as a standalone primitive because the weighted bridge-decay model is reusable outside the full lead workflow. `lead-intelligence` now points to it for canonical graph ranking instead of carrying the full algorithm explanation inline, while `connections-optimizer` stays the broader operator layer for pruning, adds, and outbound review packs.

@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — 智能体指令

这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、147 项技能、68 条命令以及自动化钩子工作流,用于软件开发。
这是一个**生产就绪的 AI 编码插件**,提供 36 个专业代理、151 项技能、68 条命令以及自动化钩子工作流,用于软件开发。

**版本:** 1.9.0

@@ -147,7 +147,7 @@

```
agents/ — 36 个专业子代理
skills/ — 147 个工作流技能和领域知识
skills/ — 151 个工作流技能和领域知识
commands/ — 68 个斜杠命令
hooks/ — 基于触发的自动化
rules/ — 始终遵循的指导方针(通用 + 每种语言)

@@ -209,7 +209,7 @@ npx ecc-install typescript
/plugin list everything-claude-code@everything-claude-code
```

**搞定!** 你现在可以使用 36 个智能体、147 项技能和 68 个命令了。
**搞定!** 你现在可以使用 36 个智能体、151 项技能和 68 个命令了。

***

@@ -1096,7 +1096,7 @@ opencode
|---------|-------------|----------|--------|
| 智能体 | PASS: 36 个 | PASS: 12 个 | **Claude Code 领先** |
| 命令 | PASS: 68 个 | PASS: 31 个 | **Claude Code 领先** |
| 技能 | PASS: 147 项 | PASS: 37 项 | **Claude Code 领先** |
| 技能 | PASS: 151 项 | PASS: 37 项 | **Claude Code 领先** |
| 钩子 | PASS: 8 种事件类型 | PASS: 11 种事件 | **OpenCode 更多!** |
| 规则 | PASS: 29 条 | PASS: 13 条指令 | **Claude Code 领先** |
| MCP 服务器 | PASS: 14 个 | PASS: 完整 | **完全对等** |
@@ -1208,7 +1208,7 @@ ECC 是**第一个最大化利用每个主要 AI 编码工具的插件**。以
|---------|------------|------------|-----------|----------|
| **智能体** | 36 | 共享 (AGENTS.md) | 共享 (AGENTS.md) | 12 |
| **命令** | 68 | 共享 | 基于指令 | 31 |
| **技能** | 147 | 共享 | 10 (原生格式) | 37 |
| **技能** | 151 | 共享 | 10 (原生格式) | 37 |
| **钩子事件** | 8 种类型 | 15 种类型 | 暂无 | 11 种类型 |
| **钩子脚本** | 20+ 个脚本 | 16 个脚本 (DRY 适配器) | N/A | 插件钩子 |
| **规则** | 34 (通用 + 语言) | 34 (YAML 前页) | 基于指令 | 13 条指令 |

@@ -24,5 +24,11 @@ module.exports = [
      'no-undef': 'error',
      'eqeqeq': 'warn'
    }
  },
  {
    files: ['**/*.mjs'],
    languageOptions: {
      sourceType: 'module'
    }
  }
];

@@ -140,7 +140,7 @@
  {
    "id": "capability:content",
    "family": "capability",
    "description": "Business, writing, market, and investor communication skills.",
    "description": "Business, writing, market, investor communication, and reusable voice-system skills.",
    "modules": [
      "business-content"
    ]
@@ -148,7 +148,7 @@
  {
    "id": "capability:operators",
    "family": "capability",
    "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
    "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
    "modules": [
      "operator-workflows"
    ]
@@ -164,7 +164,7 @@
  {
    "id": "capability:media",
    "family": "capability",
    "description": "Media generation and AI-assisted editing skills.",
    "description": "Media generation, technical explainers, and AI-assisted editing skills.",
    "modules": [
      "media-generation"
    ]

@@ -282,10 +282,12 @@
    "description": "Business, writing, market, and investor communication skills.",
    "paths": [
      "skills/article-writing",
      "skills/brand-voice",
      "skills/content-engine",
      "skills/investor-materials",
      "skills/investor-outreach",
      "skills/lead-intelligence",
      "skills/social-graph-ranker",
      "skills/market-research"
    ],
    "targets": [
@@ -306,8 +308,9 @@
  {
    "id": "operator-workflows",
    "kind": "skills",
    "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
    "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
    "paths": [
      "skills/connections-optimizer",
      "skills/customer-billing-ops",
      "skills/google-workspace-ops",
      "skills/project-flow-ops",
@@ -354,9 +357,10 @@
  {
    "id": "media-generation",
    "kind": "skills",
    "description": "Media generation and AI-assisted editing skills.",
    "description": "Media generation, technical explainers, and AI-assisted editing skills.",
    "paths": [
      "skills/fal-ai-media",
      "skills/manim-video",
      "skills/remotion-video-creation",
      "skills/ui-demo",
      "skills/video-editing",

@@ -80,6 +80,7 @@
      "scripts/orchestrate-worktrees.js",
      "scripts/setup-package-manager.js",
      "scripts/skill-create-output.js",
      "scripts/codex/merge-codex-config.js",
      "scripts/codex/merge-mcp-config.js",
      "scripts/repair.js",
      "scripts/harness-audit.js",

317
scripts/codex/merge-codex-config.js
Normal file
@@ -0,0 +1,317 @@
#!/usr/bin/env node
'use strict';

/**
 * Merge the non-MCP Codex baseline from `.codex/config.toml` into a target
 * `config.toml` without overwriting existing user choices.
 *
 * Strategy: add-only.
 * - Missing root keys are inserted before the first TOML table.
 * - Missing table keys are appended to existing tables.
 * - Missing tables are appended to the end of the file.
 */

const fs = require('fs');
const path = require('path');

let TOML;
try {
  TOML = require('@iarna/toml');
} catch {
  console.error('[ecc-codex] Missing dependency: @iarna/toml');
  console.error('[ecc-codex] Run: npm install (from the ECC repo root)');
  process.exit(1);
}

const ROOT_KEYS = ['approval_policy', 'sandbox_mode', 'web_search', 'notify', 'persistent_instructions'];
const TABLE_PATHS = [
  'features',
  'profiles.strict',
  'profiles.yolo',
  'agents',
  'agents.explorer',
  'agents.reviewer',
  'agents.docs_researcher',
];
const TOML_HEADER_RE = /^[ \t]*(?:\[[^[\]\n][^\]\n]*\]|\[\[[^[\]\n][^\]\n]*\]\])[ \t]*(?:#.*)?$/m;

function log(message) {
  console.log(`[ecc-codex] ${message}`);
}

function warn(message) {
  console.warn(`[ecc-codex] WARNING: ${message}`);
}

function getNested(obj, pathParts) {
  let current = obj;
  for (const part of pathParts) {
    if (!current || typeof current !== 'object' || !(part in current)) {
      return undefined;
    }
    current = current[part];
  }
  return current;
}

function setNested(obj, pathParts, value) {
  let current = obj;
  for (let i = 0; i < pathParts.length - 1; i += 1) {
    const part = pathParts[i];
    if (!current[part] || typeof current[part] !== 'object' || Array.isArray(current[part])) {
      current[part] = {};
    }
    current = current[part];
  }
  current[pathParts[pathParts.length - 1]] = value;
}

function findFirstTableIndex(raw) {
  const match = TOML_HEADER_RE.exec(raw);
  return match ? match.index : -1;
}

function findTableRange(raw, tablePath) {
  const escaped = tablePath.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const headerPattern = new RegExp(`^[ \\t]*\\[${escaped}\\][ \\t]*(?:#.*)?$`, 'm');
  const match = headerPattern.exec(raw);
  if (!match) {
    return null;
  }

  const headerEnd = raw.indexOf('\n', match.index);
  const bodyStart = headerEnd === -1 ? raw.length : headerEnd + 1;
  const nextHeaderRel = raw.slice(bodyStart).search(TOML_HEADER_RE);
  const bodyEnd = nextHeaderRel === -1 ? raw.length : bodyStart + nextHeaderRel;
  return { bodyStart, bodyEnd };
}

function ensureTrailingNewline(text) {
  return text.endsWith('\n') ? text : `${text}\n`;
}

function insertBeforeFirstTable(raw, block) {
  const normalizedBlock = ensureTrailingNewline(block.trimEnd());
  const firstTableIndex = findFirstTableIndex(raw);
  if (firstTableIndex === -1) {
    const prefix = raw.trimEnd();
    return prefix ? `${prefix}\n${normalizedBlock}` : normalizedBlock;
  }

  const before = raw.slice(0, firstTableIndex).trimEnd();
  const after = raw.slice(firstTableIndex).replace(/^\n+/, '');
  return `${before}\n\n${normalizedBlock}\n${after}`;
}

function appendBlock(raw, block) {
  const prefix = raw.trimEnd();
  const normalizedBlock = block.trimEnd();
  return prefix ? `${prefix}\n\n${normalizedBlock}\n` : `${normalizedBlock}\n`;
}

function stringifyValue(value) {
  return TOML.stringify({ value }).trim().replace(/^value = /, '');
}

function updateInlineTableKeys(raw, tablePath, missingKeys) {
  const pathParts = tablePath.split('.');
  if (pathParts.length < 2) {
    return null;
  }

  const parentPath = pathParts.slice(0, -1).join('.');
  const parentRange = findTableRange(raw, parentPath);
  if (!parentRange) {
    return null;
  }

  const tableKey = pathParts[pathParts.length - 1];
  const escapedKey = tableKey.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const body = raw.slice(parentRange.bodyStart, parentRange.bodyEnd);
  const lines = body.split('\n');
  for (let index = 0; index < lines.length; index += 1) {
    const inlinePattern = new RegExp(`^(\\s*${escapedKey}\\s*=\\s*\\{)(.*?)(\\}\\s*(?:#.*)?)$`);
    const match = inlinePattern.exec(lines[index]);
    if (!match) {
      continue;
    }

    const additions = Object.entries(missingKeys)
      .map(([key, value]) => `${key} = ${stringifyValue(value)}`)
      .join(', ');
    const existingEntries = match[2].trim();
    const nextEntries = existingEntries ? `${existingEntries}, ${additions}` : additions;
    lines[index] = `${match[1]}${nextEntries}${match[3]}`;
    return `${raw.slice(0, parentRange.bodyStart)}${lines.join('\n')}${raw.slice(parentRange.bodyEnd)}`;
  }
  return null;
}

function appendImplicitTable(raw, tablePath, missingKeys) {
  const candidate = appendBlock(raw, stringifyTable(tablePath, missingKeys));
  try {
    TOML.parse(candidate);
    return candidate;
  } catch {
    return null;
  }
}

function appendToTable(raw, tablePath, block, missingKeys = null) {
  const range = findTableRange(raw, tablePath);
  if (!range) {
    if (missingKeys) {
      const inlineUpdated = updateInlineTableKeys(raw, tablePath, missingKeys);
      if (inlineUpdated) {
        return inlineUpdated;
      }

      const appendedTable = appendImplicitTable(raw, tablePath, missingKeys);
      if (appendedTable) {
        return appendedTable;
      }
    }
    warn(`Skipping missing keys for [${tablePath}] because it has no standalone header and could not be safely updated`);
    return raw;
  }

  const before = raw.slice(0, range.bodyEnd).trimEnd();
  const after = raw.slice(range.bodyEnd).replace(/^\n*/, '\n');
  return `${before}\n${block.trimEnd()}\n${after}`;
}

function stringifyRootKeys(keys) {
  return TOML.stringify(keys).trim();
}

function stringifyTable(tablePath, value) {
  const scalarOnly = {};
  for (const [key, entryValue] of Object.entries(value)) {
    if (entryValue && typeof entryValue === 'object' && !Array.isArray(entryValue)) {
      continue;
    }
    scalarOnly[key] = entryValue;
  }

  const snippet = {};
  setNested(snippet, tablePath.split('.'), scalarOnly);
  return TOML.stringify(snippet).trim();
}

function stringifyTableKeys(tableValue) {
  const lines = [];
  for (const [key, value] of Object.entries(tableValue)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      continue;
    }
    lines.push(TOML.stringify({ [key]: value }).trim());
  }
  return lines.join('\n');
}

function main() {
  const args = process.argv.slice(2);
  const configPath = args.find(arg => !arg.startsWith('-'));
  const dryRun = args.includes('--dry-run');

  if (!configPath) {
    console.error('Usage: merge-codex-config.js <config.toml> [--dry-run]');
    process.exit(1);
  }

  const referencePath = path.join(__dirname, '..', '..', '.codex', 'config.toml');
  if (!fs.existsSync(referencePath)) {
    console.error(`[ecc-codex] Reference config not found: ${referencePath}`);
    process.exit(1);
  }

  if (!fs.existsSync(configPath)) {
    console.error(`[ecc-codex] Config file not found: ${configPath}`);
    process.exit(1);
  }

  const raw = fs.readFileSync(configPath, 'utf8');
  const referenceRaw = fs.readFileSync(referencePath, 'utf8');

  let targetConfig;
  let referenceConfig;
  try {
    targetConfig = TOML.parse(raw);
    referenceConfig = TOML.parse(referenceRaw);
  } catch (error) {
    console.error(`[ecc-codex] Failed to parse TOML: ${error.message}`);
    process.exit(1);
  }

  const missingRootKeys = {};
  for (const key of ROOT_KEYS) {
    if (referenceConfig[key] !== undefined && targetConfig[key] === undefined) {
      missingRootKeys[key] = referenceConfig[key];
    }
  }

  const missingTables = [];
  const missingTableKeys = [];
  for (const tablePath of TABLE_PATHS) {
    const pathParts = tablePath.split('.');
    const referenceValue = getNested(referenceConfig, pathParts);
    if (referenceValue === undefined) {
      continue;
    }

    const targetValue = getNested(targetConfig, pathParts);
    if (targetValue === undefined) {
      missingTables.push(tablePath);
      continue;
    }

    const missingKeys = {};
    for (const [key, value] of Object.entries(referenceValue)) {
      if (value && typeof value === 'object' && !Array.isArray(value)) {
        continue;
      }
      if (targetValue[key] === undefined) {
        missingKeys[key] = value;
      }
    }

    if (Object.keys(missingKeys).length > 0) {
      missingTableKeys.push({ tablePath, missingKeys });
    }
  }

  if (
    Object.keys(missingRootKeys).length === 0 &&
    missingTables.length === 0 &&
    missingTableKeys.length === 0
  ) {
    log('All baseline Codex settings already present. Nothing to do.');
return;
|
||||
}
|
||||
|
||||
let nextRaw = raw;
|
||||
if (Object.keys(missingRootKeys).length > 0) {
|
||||
log(` [add-root] ${Object.keys(missingRootKeys).join(', ')}`);
|
||||
nextRaw = insertBeforeFirstTable(nextRaw, stringifyRootKeys(missingRootKeys));
|
||||
}
|
||||
|
||||
for (const { tablePath, missingKeys } of missingTableKeys) {
|
||||
log(` [add-keys] [${tablePath}] -> ${Object.keys(missingKeys).join(', ')}`);
|
||||
nextRaw = appendToTable(nextRaw, tablePath, stringifyTableKeys(missingKeys), missingKeys);
|
||||
}
|
||||
|
||||
for (const tablePath of missingTables) {
|
||||
log(` [add-table] [${tablePath}]`);
|
||||
nextRaw = appendBlock(nextRaw, stringifyTable(tablePath, getNested(referenceConfig, tablePath.split('.'))));
|
||||
}
|
||||
|
||||
if (dryRun) {
|
||||
log('Dry run — would write the merged Codex baseline.');
|
||||
return;
|
||||
}
|
||||
|
||||
fs.writeFileSync(configPath, nextRaw, 'utf8');
|
||||
log('Done. Baseline Codex settings merged.');
|
||||
}
|
||||
|
||||
main();
@@ -67,7 +67,7 @@ function findFileIssues(filePath) {
 
   try {
     const content = getStagedFileContent(filePath);
-    if (content == null) {
+    if (content === null || content === undefined) {
       return issues;
     }
     const lines = content.split('\n');

@@ -37,8 +37,8 @@ function createSkillObservation(input) {
     ? input.skill.path.trim()
     : null;
   const success = Boolean(input.success);
-  const error = input.error == null ? null : String(input.error);
-  const feedback = input.feedback == null ? null : String(input.feedback);
+  const error = input.error === null || input.error === undefined ? null : String(input.error);
+  const feedback = input.feedback === null || input.feedback === undefined ? null : String(input.feedback);
   const variant = typeof input.variant === 'string' && input.variant.trim().length > 0
     ? input.variant.trim()
     : 'baseline';

@@ -27,8 +27,11 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
 AGENTS_FILE="$CODEX_HOME/AGENTS.md"
 AGENTS_ROOT_SRC="$REPO_ROOT/AGENTS.md"
 AGENTS_CODEX_SUPP_SRC="$REPO_ROOT/.codex/AGENTS.md"
+CODEX_AGENTS_SRC="$REPO_ROOT/.codex/agents"
+CODEX_AGENTS_DEST="$CODEX_HOME/agents"
 PROMPTS_SRC="$REPO_ROOT/commands"
 PROMPTS_DEST="$CODEX_HOME/prompts"
+BASELINE_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-codex-config.js"
 HOOKS_INSTALLER="$REPO_ROOT/scripts/codex/install-global-git-hooks.sh"
 SANITY_CHECKER="$REPO_ROOT/scripts/codex/check-codex-global-state.sh"
 CURSOR_RULES_DIR="$REPO_ROOT/.cursor/rules"
@@ -146,7 +149,9 @@ MCP_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-mcp-config.js"
 
 require_path "$REPO_ROOT/AGENTS.md" "ECC AGENTS.md"
 require_path "$AGENTS_CODEX_SUPP_SRC" "ECC Codex AGENTS supplement"
+require_path "$CODEX_AGENTS_SRC" "ECC Codex agent roles"
 require_path "$PROMPTS_SRC" "ECC commands directory"
+require_path "$BASELINE_MERGE_SCRIPT" "ECC Codex baseline merge script"
 require_path "$HOOKS_INSTALLER" "ECC global git hooks installer"
 require_path "$SANITY_CHECKER" "ECC global sanity checker"
 require_path "$CURSOR_RULES_DIR" "ECC Cursor rules directory"
@@ -247,6 +252,26 @@ else
   fi
 fi
 
+log "Merging ECC Codex baseline into $CONFIG_FILE (add-only, preserving user config)"
+if [[ "$MODE" == "dry-run" ]]; then
+  node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE" --dry-run
+else
+  node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE"
+fi
+
+log "Syncing sample Codex agent role files"
+run_or_echo mkdir -p "$CODEX_AGENTS_DEST"
+for agent_file in "$CODEX_AGENTS_SRC"/*.toml; do
+  [[ -f "$agent_file" ]] || continue
+  agent_name="$(basename "$agent_file")"
+  dest="$CODEX_AGENTS_DEST/$agent_name"
+  if [[ -e "$dest" ]]; then
+    log "Keeping existing Codex agent role file: $dest"
+  else
+    run_or_echo cp "$agent_file" "$dest"
+  fi
+done
+
 # Skills are NOT synced here — Codex CLI reads directly from
 # ~/.agents/skills/ (installed by ECC installer / npx skills).
 # Copying into ~/.codex/skills/ was unnecessary.

97
skills/brand-voice/SKILL.md
Normal file
@@ -0,0 +1,97 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---

# Brand Voice

Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.

## When to Activate

- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry

## Source Priority

Use the strongest real source set available, in this order:

1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy

Do not use generic platform exemplars as source material.

## Collection Workflow

1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.

## What to Extract

- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does

## Output Contract

Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).

Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.

## Affaan / ECC Defaults

If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:

- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over

## Hard Bans

Delete and rewrite any of these:

- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals

## Persistence Rules

- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.

## Downstream Use

Use this skill before or inside:

- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email

If another skill already has a partial voice capture section, this skill is the canonical source of truth.

55
skills/brand-voice/references/voice-profile-schema.md
Normal file
@@ -0,0 +1,55 @@
# Voice Profile Schema

Use this exact structure when building a reusable voice profile:

```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:

Source Set
- source 1
- source 2
- source 3

Rhythm
- short note on sentence length, pacing, and fragmentation

Compression
- how dense or explanatory the writing is

Capitalization
- conventional, mixed, or situational

Parentheticals
- how they are used and how they are not used

Question Use
- rare, frequent, rhetorical, direct, or mostly absent

Claim Style
- how claims are framed, supported, and sharpened

Preferred Moves
- concrete moves the author does use

Banned Moves
- specific patterns the author does not use

CTA Rules
- how, when, or whether to close with asks

Channel Notes
- X:
- LinkedIn:
- Email:
```

Guidelines:

- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.

@@ -8,8 +8,7 @@
  * exit 0: success exit 1: no projects
  */
 
-import { readProjects, loadContext, today, CONTEXTS_DIR } from './shared.mjs';
-import { renderListTable } from './shared.mjs';
+import { readProjects, loadContext, today, renderListTable } from './shared.mjs';
 
 const cwd = process.env.PWD || process.cwd();
 const projects = readProjects();

@@ -11,7 +11,7 @@
  * exit 0: success exit 1: error
  */
 
-import { readFileSync, writeFileSync, existsSync, renameSync } from 'fs';
+import { readFileSync, existsSync, renameSync } from 'fs';
 import { resolve } from 'path';
 import { readProjects, writeProjects, saveContext, today, shortId, CONTEXTS_DIR } from './shared.mjs';
 
@@ -112,7 +112,11 @@ for (const [projectPath, info] of Object.entries(projects)) {
   const contextMd = existsSync(contextMdPath) ? readFileSync(contextMdPath, 'utf8') : '';
   let meta = {};
   if (existsSync(metaPath)) {
-    try { meta = JSON.parse(readFileSync(metaPath, 'utf8')); } catch {}
+    try {
+      meta = JSON.parse(readFileSync(metaPath, 'utf8'));
+    } catch (e) {
+      console.warn(` ! ${contextDir}: invalid meta.json, continuing with defaults (${e.message})`);
+    }
   }
 
   // Extract fields from CONTEXT.md

@@ -20,8 +20,8 @@ import { readFileSync, mkdirSync, writeFileSync } from 'fs';
 import { resolve } from 'path';
 import {
   readProjects, writeProjects, loadContext, saveContext,
-  today, shortId, gitSummary, nativeMemoryDir, encodeProjectPath,
-  CONTEXTS_DIR, CURRENT_SESSION,
+  today, shortId, gitSummary, nativeMemoryDir,
+  CURRENT_SESSION,
 } from './shared.mjs';
 
 const isInit = process.argv.includes('--init');

@@ -5,8 +5,8 @@
  * No external dependencies. Node.js stdlib only.
  */
 
-import { readFileSync, writeFileSync, existsSync, mkdirSync, readdirSync } from 'fs';
-import { resolve, basename } from 'path';
+import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
+import { resolve } from 'path';
 import { homedir } from 'os';
 import { spawnSync } from 'child_process';
 import { randomBytes } from 'crypto';
@@ -270,7 +270,7 @@ export function renderContextMd(ctx) {
 }
 
 /** Render the bordered briefing box used by /ck:resume */
-export function renderBriefingBox(ctx, meta = {}) {
+export function renderBriefingBox(ctx, _meta = {}) {
   const latest = ctx.sessions?.[ctx.sessions.length - 1] || {};
   const W = 57;
   const pad = (str, w) => {
@@ -344,7 +344,7 @@ export function renderInfoBlock(ctx) {
 }
 
 /** Render ASCII list table used by /ck:list */
-export function renderListTable(entries, cwd, todayStr) {
+export function renderListTable(entries, cwd, _todayStr) {
   // entries: [{name, contextDir, path, context, lastUpdated}]
   // Sorted alphabetically by contextDir before calling
   const rows = entries.map((e, i) => {

189
skills/connections-optimizer/SKILL.md
Normal file
@@ -0,0 +1,189 @@
---
name: connections-optimizer
description: Reorganize the user's X and LinkedIn network with review-first pruning, add/follow recommendations, and channel-specific warm outreach drafted in the user's real voice. Use when the user wants to clean up following lists, grow toward current priorities, or rebalance a social graph around higher-signal relationships.
origin: ECC
---

# Connections Optimizer

Reorganize the user's network instead of treating outbound as a one-way prospecting list.

This skill handles:

- X following cleanup and expansion
- LinkedIn follow and connection analysis
- review-first prune queues
- add and follow recommendations
- warm-path identification
- Apple Mail, X DM, and LinkedIn draft generation in the user's real voice

## When to Activate

- the user wants to prune their X following
- the user wants to rebalance who they follow or stay connected to
- the user says "clean up my network", "who should I unfollow", "who should I follow", "who should I reconnect with"
- outreach quality depends on network structure, not just cold list generation

## Required Inputs

Collect or infer:

- current priorities and active work
- target roles, industries, geos, or ecosystems
- platform selection: X, LinkedIn, or both
- do-not-touch list
- mode: `light-pass`, `default`, or `aggressive`

If the user does not specify a mode, use `default`.

## Tool Requirements

### Preferred

- `x-api` for X graph inspection and recent activity
- `lead-intelligence` for target discovery and warm-path ranking
- `social-graph-ranker` when the user wants bridge value scored independently of the broader lead workflow
- Exa / deep research for person and company enrichment
- `brand-voice` before drafting outbound

### Fallbacks

- browser control for LinkedIn analysis and drafting
- browser control for X if API coverage is constrained
- Apple Mail or Mail.app drafting via desktop automation when email is the right channel

## Safety Defaults

- default is review-first, never blind auto-pruning
- X: prune only accounts the user follows, never followers
- LinkedIn: treat 1st-degree connection removal as manual-review-first
- do not auto-send DMs, invites, or emails
- emit a ranked action plan and drafts before any apply step

## Platform Rules

### X

- mutuals are stickier than one-way follows
- non-follow-backs can be pruned more aggressively
- heavily inactive or disappeared accounts should surface quickly
- engagement, signal quality, and bridge value matter more than raw follower count

### LinkedIn

- API-first if the user actually has LinkedIn API access
- browser workflow must work when API access is missing
- distinguish outbound follows from accepted 1st-degree connections
- outbound follows can be pruned more freely
- accepted 1st-degree connections should default to review, not auto-remove

## Modes

### `light-pass`

- prune only high-confidence low-value one-way follows
- surface the rest for review
- generate a small add/follow list

### `default`

- balanced prune queue
- balanced keep list
- ranked add/follow queue
- draft warm intros or direct outreach where useful

### `aggressive`

- larger prune queue
- lower tolerance for stale non-follow-backs
- still review-gated before apply

## Scoring Model

Use these positive signals:

- reciprocity
- recent activity
- alignment to current priorities
- network bridge value
- role relevance
- real engagement history
- recent presence and responsiveness

Use these negative signals:

- disappeared or abandoned account
- stale one-way follow
- off-priority topic cluster
- low-value noise
- repeated non-response
- no follow-back when many better replacements exist

Mutuals and real warm-path bridges should be penalized less aggressively than one-way follows.
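
One way these signals could combine into a single keep/prune score (an illustrative sketch; the field names and weights are made up, not part of the skill contract):

```python
def prune_score(account):
    """Higher = stronger keep; low or negative = prune candidate."""
    positives = (
        2.0 * account.get("is_mutual", False)
        + 1.0 * account.get("recently_active", False)
        + 1.5 * account.get("priority_aligned", False)
        + 1.0 * account.get("bridge_value", 0.0)
    )
    negatives = (
        2.0 * account.get("abandoned", False)
        + 1.0 * account.get("stale_one_way", False)
        + 1.0 * account.get("off_priority", False)
    )
    # Mutuals and real warm-path bridges are penalized less aggressively
    # than one-way follows.
    if account.get("is_mutual") or account.get("bridge_value", 0.0) > 0.5:
        negatives *= 0.5
    return positives - negatives

accounts = [
    {"handle": "a", "is_mutual": True, "recently_active": True, "bridge_value": 0.8},
    {"handle": "b", "stale_one_way": True, "off_priority": True},
]
# Lowest scores surface first in the review-first prune queue.
review_queue = sorted(accounts, key=prune_score)
```

The ordering, not the absolute number, is what drives the prune queue; every candidate still goes through review before any apply step.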

## Workflow

1. Capture priorities, do-not-touch constraints, and selected platforms.
2. Pull the current following / connection inventory.
3. Score prune candidates with explicit reasons.
4. Score keep candidates with explicit reasons.
5. Use `lead-intelligence` plus research surfaces to rank expansion candidates.
6. Match the right channel:
   - X DM for warm, fast social touch points
   - LinkedIn message for professional graph adjacency
   - Apple Mail draft for higher-context intros or outreach
7. Run `brand-voice` before drafting messages.
8. Return a review pack before any apply step.

## Review Pack Format

```text
CONNECTIONS OPTIMIZER REPORT
============================

Mode:
Platforms:
Priority Set:

Prune Queue
- handle / profile
  reason:
  confidence:
  action:

Review Queue
- handle / profile
  reason:
  risk:

Keep / Protect
- handle / profile
  bridge value:

Add / Follow Targets
- person
  why now:
  warm path:
  preferred channel:

Drafts
- X DM:
- LinkedIn:
- Apple Mail:
```

## Outbound Rules

- Default email path is Apple Mail / Mail.app draft creation.
- Do not send automatically.
- Choose the channel based on warmth, relevance, and context depth.
- Do not force a DM when an email or no outreach is the right move.
- Drafts should sound like the user, not like automated sales copy.

## Related Skills

- `brand-voice` for the reusable voice profile
- `social-graph-ranker` for the standalone bridge-scoring and warm-path math
- `lead-intelligence` for weighted target and warm-path discovery
- `x-api` for X graph access, drafting, and optional apply flows
- `content-engine` when the user also wants public launch content around network moves

@@ -36,27 +36,24 @@ Before drafting, identify the source set:
 - prior posts from the same author
 
 If the user wants a specific voice, build a voice profile from real examples before writing.
+Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
 
 ## Voice Capture Workflow
 
-Collect 5 to 20 examples when available. Good sources:
-- articles or essays
-- X posts or threads
-- docs or release notes
-- newsletters
-- previous launch posts
+Run `brand-voice` first when:
 
-If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.
+- there are multiple downstream outputs
+- the user explicitly cares about writing style
+- the content is launch, outreach, or reputation-sensitive
 
-Extract and write down:
-- sentence length and rhythm
-- how compressed or explanatory the writing is
-- whether capitalization is conventional, mixed, or situational
-- how parentheses are used
-- whether the writer uses fragments, lists, or abrupt pivots
-- how often the writer asks questions
-- how sharp, formal, opinionated, or dry the voice is
-- what the writer never does
+At minimum, produce a compact `VOICE PROFILE` covering:
+- rhythm
+- compression
+- capitalization
+- parenthetical use
+- question use
+- preferred moves
+- banned moves
 
 Do not start drafting until the voice profile is clear enough to enforce.

@@ -148,3 +145,9 @@ Before delivering:
 - no fake engagement bait remains
 - no duplicated copy across platforms unless requested
 - any CTA is earned and user-approved
+
+## Related Skills
+
+- `brand-voice` for source-derived voice profiles
+- `crosspost` for platform-specific distribution
+- `x-api` for sourcing recent posts and publishing approved X output

@@ -37,6 +37,8 @@ Use `content-engine` first if the source still needs voice shaping.
 
 ### Step 2: Capture the Voice Fingerprint
 
+Run `brand-voice` first if the source voice is not already captured in the current session.
+
 Before adapting, note:
 - how blunt or explanatory the source is
 - whether the source uses fragments, lists, or longer transitions
@@ -110,5 +112,6 @@ Before delivering:
 
 ## Related Skills
 
+- `brand-voice` for reusable source-derived voice capture
 - `content-engine` for voice capture and source shaping
 - `x-api` for X publishing workflows

@@ -21,7 +21,7 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va
 
 ### Required
 - **Exa MCP** — Deep web search for people, companies, and signals (`web_search_exa`)
-- **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, `X_ACCESS_TOKEN`)
+- **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, plus write-context credentials such as `X_CONSUMER_KEY`, `X_CONSUMER_SECRET`, `X_ACCESS_TOKEN`, `X_ACCESS_TOKEN_SECRET`)
 
 ### Optional (enhance results)
 - **LinkedIn** — Direct API if available, otherwise browser control for search, profile inspection, and drafting
@@ -43,36 +43,10 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va
 
 Do not draft outbound from generic sales copy.
 
-Before writing a message, build a voice profile from real source material. Prefer:
-
-- recent X posts and threads
-- published articles, memos, or launch notes
-- prior outreach emails that actually worked
-- docs, changelogs, or product writing if those are the strongest signals
+Run `brand-voice` first whenever the user's voice matters. Reuse its `VOICE PROFILE` instead of re-deriving style ad hoc inside this skill.
 
-If live X access is available, pull recent original posts before drafting. If not, use supplied examples or the best repo/site material available.
-
-Extract:
-
-- sentence length and rhythm
-- how compressed or explanatory the writing is
-- how parentheses are used
-- whether capitalization is conventional or situational
-- how often questions are used
-- phrases or transitions the author never uses
-
-For Affaan / ECC style specifically:
-
-- direct, compressed, concrete
-- strong preference for specifics, mechanisms, and receipts
-- parentheticals are for qualification or over-clarification, not jokes
-- lowercase is optional, not mandatory
-- no fake curiosity hooks
-- no "not X, just Y"
-- no "no fluff"
-- no LinkedIn thought-leader cadence
-- no bait question at the end
 
 ## Stage 1: Signal Scoring
 
 Search for high-signal people in target verticals. Assign a weight to each based on:
@@ -115,11 +89,12 @@ x_search = search_recent_tweets(
 
 For each scored target, analyze the user's social graph to find the warmest path.
 
-### Algorithm
+### Ranking Model
 
 1. Pull user's X following list and LinkedIn connections
 2. For each high-signal target, check for shared connections
-3. Rank mutuals by:
+3. Apply the `social-graph-ranker` model to score bridge value
+4. Rank mutuals by:
 
 | Factor | Weight |
 |--------|--------|
@@ -129,47 +104,20 @@ For each scored target, analyze the user's social graph to find the warmest path
 | Industry alignment | 15% — same vertical = natural intro |
 | Mutual's X handle / LinkedIn | 10% — identifiability for outreach |
 
-### Weighted Bridge Ranking
+Canonical rule:
 
-Treat this as the canonical network-ranking stage for lead intelligence. Do not run a separate graph skill when this stage is enough.
+```text
+Use social-graph-ranker when the user wants the graph math itself,
+the bridge ranking as a standalone report, or explicit decay-model tuning.
+```
 
-Given:
-- `T` = target leads
-- `M` = your mutuals / existing connections
-- `d(m, t)` = shortest hop distance from mutual `m` to target `t`
-- `w(t)` = target weight from signal scoring
-
-Compute the base bridge score for each mutual:
+Inside this skill, use the same weighted bridge model:
 
 ```text
 B(m) = Σ_{t ∈ T} w(t) · λ^(d(m,t) - 1)
 ```
 
-Where:
-- `λ` is the decay factor, usually `0.5`
-- a direct connection contributes full value
-- each extra hop halves the contribution
-
-For second-order reach, expand one level into the mutual's own network:
-
-```text
-B_ext(m) = B(m) + α · Σ_{m' ∈ N(m) \\ M} Σ_{t ∈ T} w(t) · λ^(d(m',t))
-```
-
-Where:
-- `N(m) \\ M` is the set of people the mutual knows that you do not
-- `α` is the second-order discount, usually `0.3`
-
-Then rank by response-adjusted bridge value:
-
-```text
-R(m) = B_ext(m) · (1 + β · engagement(m))
-```
-
-Where:
-- `engagement(m)` is a normalized responsiveness score
-- `β` is the engagement bonus, usually `0.2`
-
 Interpretation:
 - Tier 1: high `R(m)` and direct bridge paths -> warm intro asks
 - Tier 2: medium `R(m)` and one-hop bridge paths -> conditional intro asks
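
The bridge formulas above chain together as follows (a minimal sketch; the distances, weights, and engagement values here are made up for illustration):

```python
LAMBDA = 0.5   # per-hop decay
ALPHA = 0.3    # second-order discount
BETA = 0.2     # engagement bonus

def base_bridge(mutual, targets, dist, weight):
    # B(m) = sum over targets of w(t) * lambda^(d(m,t) - 1)
    return sum(weight[t] * LAMBDA ** (dist[(mutual, t)] - 1) for t in targets)

def extended_bridge(mutual, second_order, targets, dist, weight):
    # B_ext(m) = B(m) + alpha * reach through the mutual's own contacts
    reach = sum(weight[t] * LAMBDA ** dist[(m2, t)]
                for m2 in second_order for t in targets)
    return base_bridge(mutual, targets, dist, weight) + ALPHA * reach

def ranked_value(mutual, second_order, targets, dist, weight, engagement):
    # R(m) = B_ext(m) * (1 + beta * engagement(m))
    b_ext = extended_bridge(mutual, second_order, targets, dist, weight)
    return b_ext * (1 + BETA * engagement[mutual])

targets = ["t1"]
weight = {"t1": 1.0}
dist = {("m", "t1"): 1, ("m2", "t1"): 2}
engagement = {"m": 1.0}
# A direct connection (d = 1) contributes full weight, so B(m) = 1.0 here.
print(ranked_value("m", ["m2"], targets, dist, weight, engagement))
```

Ranking every mutual by this value and cutting into tiers gives the Tier 1/2 interpretation above.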
@@ -178,6 +126,8 @@ Interpretation:
 ### Output Format
 
 ```
+
+If the user explicitly wants the ranking engine broken out, the math visualized, or the network scored outside the full lead workflow, run `social-graph-ranker` as a standalone pass first and feed the result back into this pipeline.
 MUTUAL RANKING REPORT
 =====================

@@ -333,8 +283,8 @@ Users should set these environment variables:
 export X_BEARER_TOKEN="..."
 export X_ACCESS_TOKEN="..."
 export X_ACCESS_TOKEN_SECRET="..."
-export X_API_KEY="..."
-export X_API_SECRET="..."
+export X_CONSUMER_KEY="..."
+export X_CONSUMER_SECRET="..."
 export EXA_API_KEY="..."
 
 # Optional
@@ -364,3 +314,8 @@ Agent workflow:
 
 Output: Ranked list with warm paths, voice profile summary, and channel-specific outreach drafts or drafts-in-app
 ```
+
+## Related Skills
+
+- `brand-voice` for canonical voice capture
+- `connections-optimizer` for review-first network pruning and expansion before outreach

89
skills/manim-video/SKILL.md
Normal file
@@ -0,0 +1,89 @@
---
name: manim-video
description: Build reusable Manim explainers for technical concepts, graphs, system diagrams, and product walkthroughs, then hand off to the wider ECC video stack if needed. Use when the user wants a clean animated explainer rather than a generic talking-head script.
origin: ECC
---

# Manim Video

Use Manim for technical explainers where motion, structure, and clarity matter more than photorealism.

## When to Activate

- the user wants a technical explainer animation
- the concept is a graph, workflow, architecture, metric progression, or system diagram
- the user wants a short product or launch explainer for X or a landing page
- the visual should feel precise instead of generically cinematic

## Tool Requirements

- `manim` CLI for scene rendering
- `ffmpeg` for post-processing if needed
- `video-editing` for final assembly or polish
- `remotion-video-creation` when the final package needs composited UI, captions, or additional motion layers

## Default Output

- short 16:9 MP4
- one thumbnail or poster frame
- storyboard plus scene plan

## Workflow

1. Define the core visual thesis in one sentence.
2. Break the concept into 3 to 6 scenes.
3. Decide what each scene proves.
4. Write the scene outline before writing Manim code.
5. Render the smallest working version first.
6. Tighten typography, spacing, color, and pacing after the render works.
7. Hand off to the wider video stack only if it adds value.
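The planning steps above can be captured as a tiny pre-code check before any Manim code is written; the field names and validation rules here are illustrative assumptions, not part of the Manim API.

```python
# Tiny scene-plan checker for workflow steps 1-4: one thesis, 3 to 6 scenes,
# and an explicit claim per scene, written before any Manim code.
# The ScenePlan structure is a hypothetical sketch, not a Manim construct.
from dataclasses import dataclass


@dataclass
class ScenePlan:
    thesis: str   # one-sentence core visual thesis
    scenes: list  # [(title, what_this_scene_proves), ...]

    def validate(self):
        problems = []
        if not self.thesis.strip():
            problems.append("missing thesis")
        if not 3 <= len(self.scenes) <= 6:
            problems.append("scene count outside 3-6")
        for title, proves in self.scenes:
            if not proves.strip():
                problems.append(f"scene '{title}' proves nothing")
        return problems


plan = ScenePlan(
    thesis="Pruning low-signal follows shortens warm paths.",
    scenes=[
        ("Current graph", "the graph is cluttered with stale follows"),
        ("Prune", "removing noise exposes real bridges"),
        ("Optimized graph", "warm paths to targets get shorter"),
    ],
)
issues = plan.validate()  # empty list when the plan is ready for Manim code
```

Only once `validate()` comes back clean is it worth writing the first smoke-test render.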
## Scene Planning Rules

- each scene should prove one thing
- avoid overstuffed diagrams
- prefer progressive reveal over full-screen clutter
- use motion to explain state change, not just to keep the screen busy
- title cards should be short and loaded with meaning

## Network Graph Default

For social-graph and network-optimization explainers:

- show the current graph before showing the optimized graph
- distinguish low-signal follow clutter from high-signal bridges
- highlight warm-path nodes and target clusters
- if useful, add a final scene showing the self-improvement lineage that informed the skill

## Render Conventions

- default to 16:9 landscape unless the user asks for vertical
- start with a low-quality smoke test render
- only push to higher quality after composition and timing are stable
- export one clean thumbnail frame that reads at social size

## Reusable Starter

Use [assets/network_graph_scene.py](assets/network_graph_scene.py) as a starting point for network-graph explainers.

Example smoke test:

```bash
manim -ql assets/network_graph_scene.py NetworkGraphExplainer
```

## Output Format

Return:

- core visual thesis
- storyboard
- scene outline
- render plan
- any follow-on polish recommendations

## Related Skills

- `video-editing` for final polish
- `remotion-video-creation` for motion-heavy post-processing or compositing
- `content-engine` when the animation is part of a broader launch
52
skills/manim-video/assets/network_graph_scene.py
Normal file
@@ -0,0 +1,52 @@
from manim import DOWN, LEFT, RIGHT, UP, Circle, Create, FadeIn, FadeOut, Scene, Text, VGroup, CurvedArrow


class NetworkGraphExplainer(Scene):
    def construct(self):
        title = Text("Connections Optimizer", font_size=40).to_edge(UP)
        subtitle = Text("Prune low-signal follows. Strengthen warm paths.", font_size=20).next_to(title, DOWN)

        you = Circle(radius=0.45, color="#4F8EF7").shift(LEFT * 4 + DOWN * 0.5)
        you_label = Text("You", font_size=22).move_to(you.get_center())

        stale_a = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.6 + UP * 1.2)
        stale_b = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.2 + DOWN * 1.4)
        bridge = Circle(radius=0.38, color="#21A179").shift(RIGHT * 0.2 + UP * 0.2)
        target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.2 + UP * 0.7)
        new_target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.0 + DOWN * 1.4)

        stale_a_label = Text("stale", font_size=18).move_to(stale_a.get_center())
        stale_b_label = Text("noise", font_size=18).move_to(stale_b.get_center())
        bridge_label = Text("bridge", font_size=18).move_to(bridge.get_center())
        target_label = Text("target", font_size=18).move_to(target.get_center())
        new_target_label = Text("add", font_size=18).move_to(new_target.get_center())

        edge_stale_a = CurvedArrow(you.get_right(), stale_a.get_left(), angle=0.2, color="#7A7A7A")
        edge_stale_b = CurvedArrow(you.get_right(), stale_b.get_left(), angle=-0.2, color="#7A7A7A")
        edge_bridge = CurvedArrow(you.get_right(), bridge.get_left(), angle=0.0, color="#21A179")
        edge_target = CurvedArrow(bridge.get_right(), target.get_left(), angle=0.1, color="#21A179")
        edge_new_target = CurvedArrow(bridge.get_right(), new_target.get_left(), angle=-0.12, color="#21A179")

        self.play(FadeIn(title), FadeIn(subtitle))
        self.play(
            Create(you),
            FadeIn(you_label),
            Create(stale_a),
            Create(stale_b),
            Create(bridge),
            Create(target),
            FadeIn(stale_a_label),
            FadeIn(stale_b_label),
            FadeIn(bridge_label),
            FadeIn(target_label),
        )
        self.play(Create(edge_stale_a), Create(edge_stale_b), Create(edge_bridge), Create(edge_target))

        optimize = Text("Optimize the graph", font_size=24).to_edge(DOWN)
        self.play(FadeIn(optimize))
        self.play(FadeOut(stale_a), FadeOut(stale_b), FadeOut(stale_a_label), FadeOut(stale_b_label), FadeOut(edge_stale_a), FadeOut(edge_stale_b))
        self.play(Create(new_target), FadeIn(new_target_label), Create(edge_new_target))

        final_group = VGroup(you, you_label, bridge, bridge_label, target, target_label, new_target, new_target_label)
        self.play(final_group.animate.shift(UP * 0.1))
        self.wait(1)
154
skills/social-graph-ranker/SKILL.md
Normal file
@@ -0,0 +1,154 @@
---
name: social-graph-ranker
description: Weighted social-graph ranking for warm intro discovery, bridge scoring, and network gap analysis across X and LinkedIn. Use when the user wants the reusable graph-ranking engine itself, not the broader outreach or network-maintenance workflow layered on top of it.
origin: ECC
---

# Social Graph Ranker

Canonical weighted graph-ranking layer for network-aware outreach.

Use this when the user needs to:

- rank existing mutuals or connections by intro value
- map warm paths to a target list
- measure bridge value across first- and second-order connections
- decide which targets deserve warm intros versus direct cold outreach
- understand the graph math independently from `lead-intelligence` or `connections-optimizer`

## When To Use This Standalone

Choose this skill when the user primarily wants the ranking engine:

- "who in my network is best positioned to introduce me?"
- "rank my mutuals by who can get me to these people"
- "map my graph against this ICP"
- "show me the bridge math"

Do not use this by itself when the user really wants:

- full lead generation and outbound sequencing -> use `lead-intelligence`
- pruning, rebalancing, and growing the network -> use `connections-optimizer`

## Inputs

Collect or infer:

- target people, companies, or ICP definition
- the user's current graph on X, LinkedIn, or both
- weighting priorities such as role, industry, geography, and responsiveness
- traversal depth and decay tolerance

## Core Model

Given:

- `T` = weighted target set
- `M` = your current mutuals / direct connections
- `d(m, t)` = shortest hop distance from mutual `m` to target `t`
- `w(t)` = target weight from signal scoring

Base bridge score:

```text
B(m) = Σ_{t ∈ T} w(t) · λ^(d(m,t) - 1)
```

Where:

- `λ` is the decay factor, usually `0.5`
- a direct path contributes full value
- each extra hop halves the contribution

Second-order expansion:

```text
B_ext(m) = B(m) + α · Σ_{m' ∈ N(m) \ M} Σ_{t ∈ T} w(t) · λ^(d(m',t))
```

Where:

- `N(m) \ M` is the set of people the mutual knows that you do not
- `α` discounts second-order reach, usually `0.3`

Response-adjusted final ranking:

```text
R(m) = B_ext(m) · (1 + β · engagement(m))
```

Where:

- `engagement(m)` is normalized responsiveness or relationship strength
- `β` is the engagement bonus, usually `0.2`

Interpretation:

- Tier 1: high `R(m)` and direct bridge paths -> warm intro asks
- Tier 2: medium `R(m)` and one-hop bridge paths -> conditional intro asks
- Tier 3: low `R(m)` or no viable bridge -> direct outreach or follow-gap fill
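The three formulas above can be sketched in plain Python; the data shapes and the worked numbers below are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of the bridge-scoring model: B(m), B_ext(m), and R(m).
# The graph representation (distance dicts) and example values are assumptions.

LAMBDA = 0.5  # per-hop decay factor
ALPHA = 0.3   # second-order discount
BETA = 0.2    # engagement bonus


def base_bridge_score(distances, weights):
    """B(m) = sum over targets t of w(t) * LAMBDA**(d(m, t) - 1).

    distances: {target: hop distance from this mutual}; unreachable targets omitted.
    weights:   {target: signal weight w(t)}.
    """
    return sum(weights[t] * LAMBDA ** (d - 1) for t, d in distances.items())


def extended_score(base, neighbor_distances, weights):
    """B_ext(m) = B(m) + ALPHA * second-order reach via contacts in N(m) \\ M."""
    second_order = sum(
        weights[t] * LAMBDA ** d
        for dists in neighbor_distances  # one distance dict per contact you do not know
        for t, d in dists.items()
    )
    return base + ALPHA * second_order


def final_rank(extended, engagement):
    """R(m) = B_ext(m) * (1 + BETA * engagement(m)), engagement normalized to [0, 1]."""
    return extended * (1 + BETA * engagement)


# Worked example: one mutual, two targets.
weights = {"target_a": 1.0, "target_b": 0.6}
base = base_bridge_score({"target_a": 1, "target_b": 2}, weights)  # 1.0 + 0.6*0.5 = 1.3
ext = extended_score(base, [{"target_b": 1}], weights)             # 1.3 + 0.3*0.3 = 1.39
rank = final_rank(ext, engagement=0.5)                             # 1.39 * 1.1 = 1.529
```

The worked numbers show the decay in action: a direct path contributes full weight, a two-hop path is halved, and second-order reach is discounted again by `α` before the engagement bonus is applied.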
## Scoring Signals

Weight targets before graph traversal with whatever matters for the current priority set:

- role or title alignment
- company or industry fit
- current activity and recency
- geographic relevance
- influence or reach
- likelihood of response

Weight mutuals after traversal with:

- number of weighted paths into the target set
- directness of those paths
- responsiveness or prior interaction history
- contextual fit for making the intro

## Workflow

1. Build the weighted target set.
2. Pull the user's graph from X, LinkedIn, or both.
3. Compute direct bridge scores.
4. Expand second-order candidates for the highest-value mutuals.
5. Rank by `R(m)`.
6. Return:
   - best warm intro asks
   - conditional bridge paths
   - graph gaps where no warm path exists

## Output Shape

```text
SOCIAL GRAPH RANKING
====================

Priority Set:
Platforms:
Decay Model:

Top Bridges
- mutual / connection
  base_score:
  extended_score:
  best_targets:
  path_summary:
  recommended_action:

Conditional Paths
- mutual / connection
  reason:
  extra hop cost:

No Warm Path
- target
  recommendation: direct outreach / fill graph gap
```

## Related Skills

- `lead-intelligence` uses this ranking model inside the broader target-discovery and outreach pipeline
- `connections-optimizer` uses the same bridge logic when deciding who to keep, prune, or add
- `brand-voice` should run before drafting any intro request or direct outreach
- `x-api` provides X graph access and optional execution paths
@@ -46,25 +46,27 @@ tweets = resp.json()

 ### OAuth 1.0a (User Context)

-Required for: posting tweets, managing account, DMs.
+Required for: posting tweets, managing account, DMs, and any write flow.

 ```bash
 # Environment setup — source before use
-export X_API_KEY="your-api-key"
-export X_API_SECRET="your-api-secret"
+export X_CONSUMER_KEY="your-consumer-key"
+export X_CONSUMER_SECRET="your-consumer-secret"
 export X_ACCESS_TOKEN="your-access-token"
-export X_ACCESS_SECRET="your-access-secret"
+export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
 ```

+Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.

 ```python
 import os
 from requests_oauthlib import OAuth1Session

 oauth = OAuth1Session(
-    os.environ["X_API_KEY"],
-    client_secret=os.environ["X_API_SECRET"],
+    os.environ["X_CONSUMER_KEY"],
+    client_secret=os.environ["X_CONSUMER_SECRET"],
     resource_owner_key=os.environ["X_ACCESS_TOKEN"],
-    resource_owner_secret=os.environ["X_ACCESS_SECRET"],
+    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
 )
 ```
@@ -125,6 +127,21 @@ resp = requests.get(
)
```

### Pull Recent Original Posts for Voice Modeling

```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    }
)
voice_samples = resp.json()
```
### Get User by Username

```python
@@ -196,13 +213,18 @@ else:

 ## Integration with Content Engine

-Use `content-engine` skill to generate platform-native content, then post via X API:
-1. Generate content with content-engine (X platform format)
-2. Validate length (280 chars for single tweet)
-3. Post via X API using patterns above
-4. Track engagement via public_metrics
+Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
+1. Pull recent original posts when voice matching matters
+2. Build or reuse a `VOICE PROFILE`
+3. Generate content with `content-engine` in X-native format
+4. Validate length and thread structure
+5. Return the draft for approval unless the user explicitly asked to post now
+6. Post via X API only after approval
+7. Track engagement via public_metrics
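The length-and-thread validation step above can be sketched like this; the 280-character cap is the classic single-post limit, and the word-boundary splitting rule is a simplifying assumption, not the API's behavior.

```python
# Sketch of the "validate length and thread structure" step.
# The 280-char cap and naive word-boundary split are simplifying assumptions.
X_MAX = 280


def to_posts(text, limit=X_MAX):
    """Return [text] if it fits in one post, else a naive thread split on words."""
    if len(text) <= limit:
        return [text]
    posts, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > limit:
            # current chunk is full; start the next post in the thread
            posts.append(current)
            current = word
        else:
            current = candidate
    if current:
        posts.append(current)
    return posts


draft = ("word " * 100).strip()  # 499 chars, too long for one post
thread = to_posts(draft)
```

A real implementation would also account for URL shortening and mention handling, which change the effective character count on X.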

## Related Skills

- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach
@@ -24,7 +24,7 @@ async function runTests() {
   let store
   try {
     store = await import(pathToFileURL(storePath).href)
-  } catch (err) {
+  } catch (_err) {
     console.log('\n[warn] Skipping: build .opencode first (cd .opencode && npm run build)\n')
     process.exit(0)
   }
@@ -7,6 +7,7 @@ const fs = require('fs');
 const os = require('os');
 const path = require('path');
 const { spawnSync } = require('child_process');
+const TOML = require('@iarna/toml');

 const repoRoot = path.join(__dirname, '..', '..');
 const installScript = path.join(repoRoot, 'scripts', 'codex', 'install-global-git-hooks.sh');

@@ -93,29 +94,16 @@ if (os.platform() === 'win32') {
 else failed++;

 if (
-  test('sync preserves baseline config and accepts the legacy context7 MCP section', () => {
+  test('sync installs the missing Codex baseline and accepts the legacy context7 MCP section', () => {
     const homeDir = createTempDir('codex-sync-home-');
     const codexDir = path.join(homeDir, '.codex');
     const configPath = path.join(codexDir, 'config.toml');
     const agentsPath = path.join(codexDir, 'AGENTS.md');
     const config = [
       'approval_policy = "on-request"',
       'sandbox_mode = "workspace-write"',
       'web_search = "live"',
       'persistent_instructions = ""',
       '',
       '[features]',
       'multi_agent = true',
       '',
       '[profiles.strict]',
       'approval_policy = "on-request"',
       'sandbox_mode = "read-only"',
       'web_search = "cached"',
       '',
       '[profiles.yolo]',
       'approval_policy = "never"',
       'sandbox_mode = "workspace-write"',
       'web_search = "live"',
       '[agents]',
       'explorer = { description = "Read-only codebase explorer for gathering evidence before changes are proposed." }',
       '',
       '[mcp_servers.context7]',
       'command = "npx"',

@@ -147,13 +135,63 @@ if (
     assert.match(syncedAgents, /^# Codex Supplement \(From ECC \.codex\/AGENTS\.md\)/m);

     const syncedConfig = fs.readFileSync(configPath, 'utf8');
     assert.match(syncedConfig, /^multi_agent\s*=\s*true$/m);
     assert.match(syncedConfig, /^\[profiles\.strict\]$/m);
     assert.match(syncedConfig, /^\[profiles\.yolo\]$/m);
     assert.match(syncedConfig, /^\[mcp_servers\.github\]$/m);
     assert.match(syncedConfig, /^\[mcp_servers\.memory\]$/m);
     assert.match(syncedConfig, /^\[mcp_servers\.sequential-thinking\]$/m);
     assert.match(syncedConfig, /^\[mcp_servers\.context7\]$/m);
     const parsedConfig = TOML.parse(syncedConfig);
     assert.strictEqual(parsedConfig.approval_policy, 'on-request');
     assert.strictEqual(parsedConfig.sandbox_mode, 'workspace-write');
     assert.strictEqual(parsedConfig.web_search, 'live');
     assert.ok(!Object.prototype.hasOwnProperty.call(parsedConfig, 'multi_agent'));
     assert.ok(parsedConfig.features);
     assert.strictEqual(parsedConfig.features.multi_agent, true);
     assert.ok(parsedConfig.profiles);
     assert.strictEqual(parsedConfig.profiles.strict.approval_policy, 'on-request');
     assert.strictEqual(parsedConfig.profiles.yolo.approval_policy, 'never');
     assert.ok(parsedConfig.agents);
     assert.strictEqual(parsedConfig.agents.max_threads, 6);
     assert.strictEqual(parsedConfig.agents.max_depth, 1);
     assert.strictEqual(parsedConfig.agents.explorer.config_file, 'agents/explorer.toml');
     assert.strictEqual(parsedConfig.agents.reviewer.config_file, 'agents/reviewer.toml');
     assert.strictEqual(parsedConfig.agents.docs_researcher.config_file, 'agents/docs-researcher.toml');
     assert.ok(parsedConfig.mcp_servers.exa);
     assert.ok(parsedConfig.mcp_servers.github);
     assert.ok(parsedConfig.mcp_servers.memory);
     assert.ok(parsedConfig.mcp_servers['sequential-thinking']);
     assert.ok(parsedConfig.mcp_servers.context7);

     for (const roleFile of ['explorer.toml', 'reviewer.toml', 'docs-researcher.toml']) {
       assert.ok(fs.existsSync(path.join(codexDir, 'agents', roleFile)));
     }
     } finally {
       cleanup(homeDir);
     }
   })
 )
   passed++;
 else failed++;

 if (
   test('sync adds parent-table keys when the target only declares an implicit parent table', () => {
     const homeDir = createTempDir('codex-sync-implicit-parent-home-');
     const codexDir = path.join(homeDir, '.codex');
     const configPath = path.join(codexDir, 'config.toml');
     const config = [
       'persistent_instructions = ""',
       '',
       '[agents.explorer]',
       'description = "Read-only codebase explorer for gathering evidence before changes are proposed."',
       '',
     ].join('\n');

     try {
       fs.mkdirSync(codexDir, { recursive: true });
       fs.writeFileSync(configPath, config);

       const syncResult = runBash(syncScript, [], makeHermeticCodexEnv(homeDir, codexDir));
       assert.strictEqual(syncResult.status, 0, `${syncResult.stdout}\n${syncResult.stderr}`);

       const parsedConfig = TOML.parse(fs.readFileSync(configPath, 'utf8'));
       assert.strictEqual(parsedConfig.agents.max_threads, 6);
       assert.strictEqual(parsedConfig.agents.max_depth, 1);
       assert.strictEqual(parsedConfig.agents.explorer.config_file, 'agents/explorer.toml');
     } finally {
       cleanup(homeDir);
     }
@@ -26,6 +26,7 @@ function runHook(mode, payload, homeDir) {
     env: {
       ...process.env,
       HOME: homeDir,
+      USERPROFILE: homeDir,
     },
   });
 }
@@ -29,7 +29,7 @@ function runInstall(options = {}) {
     },
     encoding: 'utf8',
     stdio: ['pipe', 'pipe', 'pipe'],
-    timeout: 20000,
+    timeout: 60000,
   });
 }

@@ -43,7 +43,7 @@ function runUninstall(options = {}) {
     encoding: 'utf8',
     input: options.input || 'y\n',
     stdio: ['pipe', 'pipe', 'pipe'],
-    timeout: 20000,
+    timeout: 60000,
   });
 }