mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-04-03 23:53:29 +08:00
Compare commits
15 Commits
- 16c21dbc4c
- b384eba56a
- 0dff4a2a0b
- 974083e1fb
- ee70286227
- 766a295be3
- 31fcb5a12b
- d98df98645
- 43ee6a56ed
- fad829651e
- 6d5fbac2d3
- e4ddd5291e
- 69c9ac390c
- 29a8ce8665
- b66ddcf5db
@@ -23,23 +23,50 @@ Write long-form content that sounds like an actual person with a point of view,

4. Use proof instead of adjectives.
5. Never invent facts, credibility, or customer evidence.

## Voice Handling
## Voice Capture Workflow

If the user wants a specific voice, run `brand-voice` first and reuse its `VOICE PROFILE`.
Do not duplicate a second style-analysis pass here unless the user explicitly asks for one.
If the user wants a specific voice, collect one or more of:

- published articles
- newsletters
- X posts or threads
- docs or memos
- launch notes
- a style guide

Then extract:

- sentence length and rhythm
- whether the writing is compressed, explanatory, sharp, or formal
- how parentheses are used
- how often the writer asks questions
- whether the writer uses fragments, lists, or hard pivots
- formatting habits such as headers, bullets, code blocks, pull quotes
- what the writer clearly avoids
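The extraction pass above can be roughed out mechanically before a human pass. A minimal sketch of such a first-pass scorer; the heuristics, thresholds, and function name are illustrative assumptions, not part of this skill:

```python
import re

def voice_metrics(samples: list[str]) -> dict:
    """Rough first-pass stats for a voice profile: rhythm, questions, fragments."""
    text = " ".join(samples)
    # Split on sentence-ending punctuation; crude, but fine for a first pass.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    questions = text.count("?")
    # Treat very short "sentences" as fragments.
    fragments = sum(1 for n in words_per_sentence if n <= 4)
    return {
        "avg_sentence_len": sum(words_per_sentence) / max(len(sentences), 1),
        "question_rate": questions / max(len(sentences), 1),
        "fragment_rate": fragments / max(len(sentences), 1),
        "paren_count": text.count("("),
    }
```

Numbers like these only frame the human read; they do not replace it.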
If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.

## Affaan / ECC Voice Reference

When matching Affaan / ECC voice, bias toward:

- direct claims over scene-setting
- high specificity
- parentheticals used for qualification or over-clarification, not comedy
- capitalization chosen situationally, not as a gimmick
- very low tolerance for fake thought-leadership cadence
- almost no bait questions

## Banned Patterns

Delete and rewrite any of these:

- "In today's rapidly evolving landscape"
- "game-changer", "cutting-edge", "revolutionary"
- "no fluff"
- "not X, just Y"
- "here's why this matters" as a standalone bridge
- fake vulnerability arcs
- a closing question added only to juice engagement
- forced lowercase
- corny parenthetical asides
- biography padding that does not move the argument
- generic AI throat-clearing that delays the point

## Writing Process

@@ -74,6 +101,6 @@ Delete and rewrite any of these:

Before delivering:

- factual claims are backed by provided sources
- generic AI transitions are gone
- the voice matches the supplied examples or the agreed `VOICE PROFILE`
- the voice matches the supplied examples
- every section adds something new
- formatting matches the intended medium
@@ -1,97 +0,0 @@

---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---

# Brand Voice

Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.

## When to Activate

- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry

## Source Priority

Use the strongest real source set available, in this order:

1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy

Do not use generic platform exemplars as source material.

## Collection Workflow

1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.

## What to Extract

- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does

## Output Contract

Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).

Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.

## Affaan / ECC Defaults

If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:

- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over

## Hard Bans

Delete and rewrite any of these:

- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals

## Persistence Rules

- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.

## Downstream Use

Use this skill before or inside:

- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email

If another skill already has a partial voice capture section, this skill is the canonical source of truth.
@@ -1,55 +0,0 @@

# Voice Profile Schema

Use this exact structure when building a reusable voice profile:

```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:

Source Set
- source 1
- source 2
- source 3

Rhythm
- short note on sentence length, pacing, and fragmentation

Compression
- how dense or explanatory the writing is

Capitalization
- conventional, mixed, or situational

Parentheticals
- how they are used and how they are not used

Question Use
- rare, frequent, rhetorical, direct, or mostly absent

Claim Style
- how claims are framed, supported, and sharpened

Preferred Moves
- concrete moves the author does use

Banned Moves
- specific patterns the author does not use

CTA Rules
- how, when, or whether to close with asks

Channel Notes
- X:
- LinkedIn:
- Email:
```

Guidelines:

- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
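Because the schema is line-oriented, downstream skills can load it into a structure instead of re-reading prose. A minimal parser sketch; the helper name and the fields/sections split are assumptions, not part of the schema:

```python
def parse_voice_profile(block: str) -> dict:
    """Parse a VOICE PROFILE text block into top fields plus {section: [bullets]}."""
    fields: dict[str, str] = {}
    sections: dict[str, list[str]] = {}
    current = None
    for raw in block.splitlines():
        line = raw.strip()
        # Skip blanks, the title line, and the ===== underline.
        if not line or line == "VOICE PROFILE" or set(line) <= {"=", " "}:
            continue
        if line.startswith("- "):
            if current is not None:
                sections[current].append(line[2:])
        elif ":" in line and current is None:
            # Top fields (Author:, Goal:, Confidence:) come before any section.
            key, _, val = line.partition(":")
            fields[key.strip()] = val.strip()
        else:
            current = line
            sections[current] = []
    return {"fields": fields, "sections": sections}
```

A profile parsed this way can be injected into session context as-is, which is the operational reuse the schema is aiming at.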
@@ -36,30 +36,55 @@ Before drafting, identify the source set:

- prior posts from the same author

If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.

## Voice Handling
## Voice Capture Workflow

`brand-voice` is the canonical voice layer.
Collect 5 to 20 examples when available. Good sources:

- articles or essays
- X posts or threads
- docs or release notes
- newsletters
- previous launch posts

Run it first when:

- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive

If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.

Extract and write down:

- sentence length and rhythm
- how compressed or explanatory the writing is
- whether capitalization is conventional, mixed, or situational
- how parentheses are used
- whether the writer uses fragments, lists, or abrupt pivots
- how often the writer asks questions
- how sharp, formal, opinionated, or dry the voice is
- what the writer never does

Reuse the resulting `VOICE PROFILE` here instead of rebuilding a second voice model.
If the user wants Affaan / ECC voice specifically, still treat `brand-voice` as the source of truth and feed it the best live or source-derived material available.
Do not start drafting until the voice profile is clear enough to enforce.

## Affaan / ECC Voice Reference

When the user wants Affaan / ECC voice specifically, default to this unless newer examples clearly override it:

- direct, compressed, concrete
- strong preference for specific claims, numbers, mechanisms, and receipts
- parentheticals used to qualify, narrow, or over-clarify, not to do corny bits
- lowercase is optional, not mandatory
- questions are rare and should not be added as bait
- transitions should feel earned, not polished
- tone can be sharp or blunt, but should not sound like a content marketer

## Hard Bans

Delete and rewrite any of these:

- "In today's rapidly evolving landscape"
- "game-changer", "revolutionary", "cutting-edge"
- "no fluff"
- "not X, just Y"
- "here's why this matters" unless it is followed immediately by something concrete
- "Excited to share"
- fake curiosity gaps
- ending with a LinkedIn-style question just to farm replies
- forced lowercase when the source voice does not call for it
- forced casualness on LinkedIn
- fake engagement padding that was not present in the source material
- parenthetical jokes that were not present in the source voice

## Platform Adaptation Rules

@@ -123,9 +148,3 @@ Before delivering:

- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved

## Related Skills

- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output
@@ -37,10 +37,13 @@ Use `content-engine` first if the source still needs voice shaping.

### Step 2: Capture the Voice Fingerprint

Run `brand-voice` first if the source voice is not already captured in the current session.
Before adapting, note:

- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
- whether the source uses parentheses
- whether the source avoids questions, hashtags, or CTA language

Reuse the resulting `VOICE PROFILE` directly.
Do not build a second ad hoc voice checklist here unless the user explicitly wants a fresh override for this campaign.
The adaptation should preserve that fingerprint.

### Step 3: Adapt by Platform Constraint

@@ -86,6 +89,7 @@ Delete and rewrite any of these:

- "Here's what I learned"
- "What do you think?"
- "link in bio" unless that is literally true
- "not X, just Y"
- generic "professional takeaway" paragraphs that were not in the source

## Output Format

@@ -106,6 +110,5 @@ Before delivering:

## Related Skills

- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows
@@ -1,194 +1,175 @@

---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. TypeScript project with conventional commits.
---

```markdown
# everything-claude-code Development Patterns

# Everything Claude Code Conventions

> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-03
> Auto-generated skill from repository analysis

## Overview

This skill teaches Claude the development patterns and conventions used in everything-claude-code.
This skill teaches the core development patterns, coding conventions, and common workflows used in the `everything-claude-code` repository. The codebase is primarily JavaScript, with no major framework, and is organized around modular skills, agent definitions, command workflows, and extensible install targets. The repository emphasizes clear documentation, conventional commits, and maintainable architecture for agent and workflow development.

## Tech Stack
## Coding Conventions

- **Primary Language**: TypeScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
- **File Naming:** Use `camelCase` for JavaScript files and directories.

  *Example:*

  ```
  scripts/lib/installTargets.js
  agentPrompts.md
  ```

## When to Use This Skill

- **Import Style:** Use **relative imports** for modules.

  *Example:*

  ```js
  const installTarget = require('../lib/installTargets');
  ```

Activate this skill when:

- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format

- **Export Style:** Both CommonJS (`module.exports`) and ES module (`export`) styles are present.

  *Example (CommonJS):*

  ```js
  module.exports = function mySkill() { ... };
  ```

  *Example (ES Module):*

  ```js
  export function mySkill() { ... }
  ```

## Commit Conventions

- **Commit Messages:** Use [Conventional Commits](https://www.conventionalcommits.org/). Prefixes: `fix`, `feat`, `docs`, `chore`.

  *Example:*

  ```
  feat: add Gemini install target support
  fix: correct agent prompt registration logic
  ```

Follow these commit message conventions based on 10 analyzed commits.

- **Documentation:** Each skill or agent should have a `SKILL.md` or `.md` documentation file explaining its purpose and usage.

### Commit Style: Conventional Commits

## Workflows

### Prefixes Used

### Add New Skill

**Trigger:** When introducing a new skill for agents or workflows
**Command:** `/add-skill`

- `feat`

1. Create a new `SKILL.md` file under `skills/`, `.agents/skills/`, or `.claude/skills/`.
2. Document the skill's purpose, usage, and configuration.
3. Optionally update manifests or documentation if the skill is significant.

### Message Guidelines

- Average message length: ~94 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
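Guidelines like these are mechanically checkable before a commit lands. A sketch of such a check; the function name, allowed-prefix tuple, and length threshold are assumptions inferred from this section, not a script that ships with the repo:

```python
import re

# Prefixes observed in this repo's history (see "Prefixes Used").
ALLOWED_PREFIXES = ("feat", "fix", "docs", "chore")

def check_commit_message(message: str) -> list[str]:
    """Return a list of style problems; an empty list means the message passes."""
    problems = []
    first_line = message.splitlines()[0] if message else ""
    # Conventional Commits shape: type(optional scope): description
    if not re.match(rf"^({'|'.join(ALLOWED_PREFIXES)})(\([^)]+\))?: \S", first_line):
        problems.append("first line must look like 'feat: ...' or 'fix(scope): ...'")
    if len(first_line) > 100:
        problems.append("keep the first line concise (under ~100 characters)")
    # Catch past-tense descriptions, which break the imperative-mood rule.
    if re.match(r"^\w+(\([^)]+\))?: (added|fixed|updated)\b", first_line, re.I):
        problems.append("use imperative mood ('Add feature', not 'Added feature')")
    return problems
```

Wired into a pre-commit hook, this rejects vague or mis-prefixed messages before review.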
*Example:*

```
skills/myNewSkill/SKILL.md
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.claude/identity.json)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/explorer.toml)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/reviewer.toml)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/docs-researcher.toml)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.claude/commands/feature-development.md)
```

## Architecture

### Project Structure: Single Package

This project uses **hybrid** module organization.

### Guidelines

- This project uses a hybrid organization
- Follow existing patterns when adding new code

## Code Style

### Language: TypeScript

### Naming Conventions

| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |

### Import Style: Relative Imports

### Export Style: Named Exports

*Preferred import style*

```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```

*Preferred export style*

```typescript
// Use named exports
export function calculateTotal() { ... }
export const TAX_RATE = 0.1
export interface Order { ... }
```

## Common Workflows

These workflows were detected from analyzing commit patterns.

### Feature Development

Standard feature implementation workflow

**Frequency**: ~30 times per month

**Steps**:

1. Add feature implementation
2. Add tests for feature
3. Update documentation

**Files typically involved**:

- `.claude/commands/*`

**Example commit sequence**:

```
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
feat: add everything-claude-code-conventions ECC bundle (.claude/skills/everything-claude-code/SKILL.md)
feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/SKILL.md)
```

### Add Ecc Bundle Component

Adds a new component to the everything-claude-code-conventions ECC bundle, such as a tool definition, skill file, identity, or command documentation.

**Frequency**: ~5 times per month

**Steps**:

1. Create or update a file under a relevant ECC bundle directory (e.g., `.claude/ecc-tools.json`, `.claude/skills/everything-claude-code/SKILL.md`, `.agents/skills/everything-claude-code/SKILL.md`, `.claude/identity.json`, `.claude/commands/*.md`)
2. Commit the file with a message indicating addition to the ECC bundle

**Files typically involved**:

- `.claude/ecc-tools.json`
- `.claude/skills/everything-claude-code/SKILL.md`
- `.agents/skills/everything-claude-code/SKILL.md`
- `.claude/identity.json`
- `.claude/commands/feature-development.md`
- `.claude/commands/add-ecc-bundle-component.md`

**Example commit sequence**:

```
Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
Commit the file with a message indicating addition to the ECC bundle
```

## Best Practices

Based on analysis of the codebase, follow these practices:

### Do

- Use conventional commit format (feat:, fix:, etc.)
- Use camelCase for file names
- Prefer named exports

### Don't

- Don't write vague commit messages
- Don't deviate from established patterns without discussion

---

*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*

### Add New Agent or Agent Prompt

**Trigger:** When introducing a new agent persona or capability
**Command:** `/add-agent`

1. Create a new agent definition (e.g., `agents/myAgent.md` or `.opencode/prompts/agents/myAgent.txt`).
2. Register the agent in the relevant configuration file (e.g., `.opencode/opencode.json`).
3. Update `AGENTS.md` or related documentation.

*Example:*

```
agents/supportAgent.md
```

---

### Add or Update Command Workflow

**Trigger:** When introducing or improving a workflow command
**Command:** `/add-command`

1. Create or update a command file under `commands/` (e.g., `commands/review.md`).
2. Optionally update related documentation or scripts.
3. Address review feedback and refine command logic.

---

### Add Install Target or Adapter

**Trigger:** When supporting a new platform or integration
**Command:** `/add-install-target`

1. Add new install scripts and documentation under a dot-directory (e.g., `.gemini/`, `.codebuddy/`).
2. Update `manifests/install-modules.json` and relevant schema files.
3. Implement or update `scripts/lib/install-targets/*.js` for the new target.
4. Add or update tests for the install target.

*Example:*

```
.gemini/install.sh
scripts/lib/install-targets/gemini.js
tests/lib/install-targets.test.js
```

---

### Refactor or Collapse Commands to Skills

**Trigger:** When modernizing command logic under the skills architecture
**Command:** `/refactor-to-skill`

1. Update or remove `commands/*.md` files.
2. Add or update `skills/*/SKILL.md` files.
3. Update documentation (`README.md`, `AGENTS.md`, `WORKING-CONTEXT.md`, etc.).
4. Update manifests if required.

---

### Documentation and Guidance Update

**Trigger:** When clarifying, updating, or adding documentation
**Command:** `/update-docs`

1. Edit or create documentation files (`README.md`, `WORKING-CONTEXT.md`, `docs/*.md`, etc.).
2. Sync or update guidance in multiple language versions if needed.
3. Optionally update related configuration or manifest files.

---

### CI/CD Workflow Update

**Trigger:** When improving or fixing CI/CD processes
**Command:** `/update-ci`

1. Edit `.github/workflows/*.yml` files.
2. Update lockfiles (`package-lock.json`, `yarn.lock`) or validation scripts.
3. Update or add test files for CI/CD logic.

---

## Testing Patterns

- **Test Files:** Test files follow the pattern `*.test.js`.

- **Framework:** No specific testing framework detected; structure your tests in standard Node.js or your preferred test runner.

  *Example:*

  ```
  tests/lib/install-targets.test.js
  ```

- **Test Example:**

  ```js
  // install-targets.test.js (Jest-style shown; adapt to your runner)
  const installTarget = require('../../scripts/lib/install-targets/gemini');

  test('should install Gemini target', () => {
    expect(installTarget()).toBe(true);
  });
  ```

## Commands

| Command | Purpose |
|---------|---------|
| /add-skill | Add a new skill module and document it |
| /add-agent | Add a new agent definition or prompt |
| /add-command | Add or update a command workflow |
| /add-install-target | Add a new install target or adapter |
| /refactor-to-skill | Refactor commands into skill-based modules |
| /update-docs | Update documentation or guidance files |
| /update-ci | Update CI/CD workflows, lockfiles, or validation scripts |
```
@@ -24,11 +24,6 @@ Write investor communication that is short, concrete, and easy to act on.

4. Stay concise.
5. Never send copy generic enough that it could go to any investor.

## Voice Handling

If the user's voice matters, run `brand-voice` first and reuse its `VOICE PROFILE`.
This skill should keep the investor-specific structure and ask discipline, not recreate its own parallel voice system.

## Hard Bans

Delete and rewrite any of these:

@@ -36,6 +31,7 @@ Delete and rewrite any of these:

- "excited to share"
- generic thesis praise without a real tie-in
- vague founder adjectives
- "no fluff"
- begging language
- soft closing questions when a direct ask is clearer
@@ -19,7 +19,7 @@ Programmatic interaction with X (Twitter) for posting, reading, searching, and a

## Authentication

### OAuth 2.0 Bearer Token (App-Only)
### OAuth 2.0 (App-Only / User Context)

Best for: read-heavy operations, search, public data.

@@ -46,27 +46,25 @@ tweets = resp.json()

### OAuth 1.0a (User Context)

Required for: posting tweets, managing account, DMs, and any write flow.
Required for: posting tweets, managing account, DMs.

```bash
# Environment setup (source before use)
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```

Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.

```python
import os
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    os.environ["X_CONSUMER_KEY"],
    client_secret=os.environ["X_CONSUMER_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```

@@ -94,6 +92,7 @@ def post_thread(oauth, tweets: list[str]) -> list[str]:
        if reply_to:
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id
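The hunk above shows only the loop body of `post_thread`. A self-contained sketch of the full helper, with the surrounding setup and return inferred from the hunk rather than copied from the repo:

```python
def post_thread(oauth, tweets: list[str]) -> list[str]:
    """Post a list of tweets as a reply chain; returns the new tweet IDs in order."""
    ids: list[str] = []
    reply_to = None
    for text in tweets:
        payload = {"text": text}
        if reply_to:
            # Attach each tweet as a reply to the previous one.
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id  # chain the next tweet under this one
    return ids
```

Because the chain state lives in `reply_to`, a failed post raises before any later tweet is attempted, so a partially posted thread never silently skips a link.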
@@ -127,21 +126,6 @@ resp = requests.get(
)
```

### Pull Recent Original Posts for Voice Modeling

```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    },
)
voice_samples = resp.json()
```

### Get User by Username

```python
@@ -171,12 +155,17 @@ resp = oauth.post(
)
```

## Rate Limits
## Rate Limits Reference

X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:

- Check the current X developer docs before hardcoding assumptions
- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
- Back off automatically instead of relying on static tables in code
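A minimal runtime backoff that follows these rules. This is a sketch: the helper name and retry policy are assumptions, and the header parsing assumes the standard `x-rate-limit-*` headers are present:

```python
import time

def get_with_backoff(session, url, *, params=None, max_tries=5):
    """GET with header-driven backoff instead of a hardcoded limit table."""
    for attempt in range(max_tries):
        resp = session.get(url, params=params)
        if resp.status_code != 429:
            return resp
        # Sleep until the window resets, falling back to exponential
        # backoff if the reset header is missing.
        reset = resp.headers.get("x-rate-limit-reset")
        wait = max(int(reset) - int(time.time()), 1) if reset else 2 ** attempt
        time.sleep(wait)
    raise RuntimeError(f"rate limited after {max_tries} tries: {url}")
```

Driving the wait off the response headers means the code keeps working if X changes the per-endpoint limits.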
| Endpoint | Limit | Window |
|----------|-------|--------|
| POST /2/tweets | 200 | 15 min |
| GET /2/tweets/search/recent | 450 | 15 min |
| GET /2/users/:id/tweets | 1500 | 15 min |
| GET /2/users/by/username | 300 | 15 min |
| POST media/upload | 415 | 15 min |

Always check `x-rate-limit-remaining` and `x-rate-limit-reset` headers.

```python
import time
```

@@ -213,18 +202,13 @@ else:

## Integration with Content Engine

Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:

1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics

Use `content-engine` skill to generate platform-native content, then post via X API:

1. Generate content with content-engine (X platform format)
2. Validate length (280 chars for single tweet)
3. Post via X API using patterns above
4. Track engagement via public_metrics

## Related Skills

- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach
@@ -1,39 +0,0 @@
---
name: add-ecc-bundle-component
description: Workflow command scaffold for add-ecc-bundle-component in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---

# /add-ecc-bundle-component

Use this workflow when working on **add-ecc-bundle-component** in `everything-claude-code`.

## Goal

Adds a new component to the everything-claude-code-conventions ECC bundle, such as a tool definition, skill file, identity, or command documentation.

## Common Files

- `.claude/ecc-tools.json`
- `.claude/skills/everything-claude-code/SKILL.md`
- `.agents/skills/everything-claude-code/SKILL.md`
- `.claude/identity.json`
- `.claude/commands/feature-development.md`
- `.claude/commands/add-ecc-bundle-component.md`

## Suggested Sequence

1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.

## Typical Commit Signals

- Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
- Commit the file with a message indicating addition to the ECC bundle

## Notes

- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.
.claude/commands/add-new-skill.md (new file, 37 lines)
@@ -0,0 +1,37 @@
---
name: add-new-skill
description: Workflow command scaffold for add-new-skill in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---

# /add-new-skill

Use this workflow when working on **add-new-skill** in `everything-claude-code`.

## Goal

Adds a new skill to the codebase, typically as a new agent capability or workflow module.

## Common Files

- `skills/*/SKILL.md`
- `.agents/skills/*/SKILL.md`
- `.claude/skills/*/SKILL.md`

## Suggested Sequence

1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.

## Typical Commit Signals

- Create a new SKILL.md file under skills/ or .agents/skills/ or .claude/skills/
- Document the skill's purpose, usage, and configuration.
- Optionally update manifests or documentation if the skill is significant.

## Notes

- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.
@@ -14,7 +14,10 @@ Standard feature implementation workflow

## Common Files

- `.claude/commands/*`
- `.opencode/*`
- `.opencode/plugins/*`
- `.opencode/plugins/lib/*`
- `**/*.test.*`

## Suggested Sequence

.claude/commands/refactoring.md (new file, 35 lines)
@@ -0,0 +1,35 @@
---
name: refactoring
description: Workflow command scaffold for refactoring in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---

# /refactoring

Use this workflow when working on **refactoring** in `everything-claude-code`.

## Goal

Code refactoring and cleanup workflow

## Common Files

- `src/**/*`

## Suggested Sequence

1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.

## Typical Commit Signals

- Ensure tests pass before refactor
- Refactor code structure
- Verify tests still pass

## Notes

- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.
@@ -2,28 +2,36 @@
"version": "1.3",
"schemaVersion": "1.0",
"generatedBy": "ecc-tools",
"generatedAt": "2026-04-03T10:14:32.602Z",
"generatedAt": "2026-04-01T22:51:43.152Z",
"repo": "https://github.com/affaan-m/everything-claude-code",
"profiles": {
"requested": "developer",
"recommended": "developer",
"effective": "developer",
"requestedAlias": "developer",
"recommendedAlias": "developer",
"effectiveAlias": "developer"
"requested": "full",
"recommended": "full",
"effective": "full",
"requestedAlias": "full",
"recommendedAlias": "full",
"effectiveAlias": "full"
},
"requestedProfile": "developer",
"profile": "developer",
"recommendedProfile": "developer",
"effectiveProfile": "developer",
"requestedProfile": "full",
"profile": "full",
"recommendedProfile": "full",
"effectiveProfile": "full",
"tier": "enterprise",
"requestedComponents": [
"repo-baseline",
"workflow-automation"
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
],
"selectedComponents": [
"repo-baseline",
"workflow-automation"
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
],
"requestedAddComponents": [],
"requestedRemoveComponents": [],
@@ -31,25 +39,45 @@
"tierFilteredComponents": [],
"requestedRootPackages": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"selectedRootPackages": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedPackages": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedAddPackages": [],
"requestedRemovePackages": [],
"selectedPackages": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"packages": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"blockedRemovalPackages": [],
"tierFilteredRootPackages": [],
@@ -59,23 +87,51 @@
"runtime-core": [],
"workflow-pack": [
"runtime-core"
],
"agentshield-pack": [
"workflow-pack"
],
"research-pack": [
"workflow-pack"
],
"team-config-sync": [
"runtime-core"
],
"enterprise-controls": [
"team-config-sync"
]
},
"resolutionOrder": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedModules": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"selectedModules": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"modules": [
"runtime-core",
"workflow-pack"
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"managedFiles": [
".claude/skills/everything-claude-code/SKILL.md",
@@ -88,8 +144,13 @@
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
".claude/rules/everything-claude-code-guardrails.md",
".claude/research/everything-claude-code-research-playbook.md",
".claude/team/everything-claude-code-team-config.json",
".claude/enterprise/controls.md",
".claude/commands/feature-development.md",
".claude/commands/add-ecc-bundle-component.md"
".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
],
"packageFiles": {
"runtime-core": [
@@ -104,9 +165,22 @@
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/feature-development.md",
".claude/commands/add-ecc-bundle-component.md"
".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
]
},
"moduleFiles": {
@@ -122,9 +196,22 @@
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/feature-development.md",
".claude/commands/add-ecc-bundle-component.md"
".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
]
},
"files": [
@@ -178,6 +265,26 @@
"path": ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
"description": "Continuous-learning instincts derived from repository patterns."
},
{
"moduleId": "agentshield-pack",
"path": ".claude/rules/everything-claude-code-guardrails.md",
"description": "Repository guardrails distilled from analysis for security and workflow review."
},
{
"moduleId": "research-pack",
"path": ".claude/research/everything-claude-code-research-playbook.md",
"description": "Research workflow playbook for source attribution and long-context tasks."
},
{
"moduleId": "team-config-sync",
"path": ".claude/team/everything-claude-code-team-config.json",
"description": "Team config scaffold that points collaborators at the shared ECC bundle."
},
{
"moduleId": "enterprise-controls",
"path": ".claude/enterprise/controls.md",
"description": "Enterprise governance scaffold for approvals, audit posture, and escalation."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/feature-development.md",
@@ -185,8 +292,13 @@
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/add-ecc-bundle-component.md",
"description": "Workflow command scaffold for add-ecc-bundle-component."
"path": ".claude/commands/refactoring.md",
"description": "Workflow command scaffold for refactoring."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/add-new-skill.md",
"description": "Workflow command scaffold for add-new-skill."
}
],
"workflows": [
@@ -195,8 +307,12 @@
"path": ".claude/commands/feature-development.md"
},
{
"command": "add-ecc-bundle-component",
"path": ".claude/commands/add-ecc-bundle-component.md"
"command": "refactoring",
"path": ".claude/commands/refactoring.md"
},
{
"command": "add-new-skill",
"path": ".claude/commands/add-new-skill.md"
}
],
"adapters": {
@@ -205,7 +321,8 @@
"identityPath": ".claude/identity.json",
"commandPaths": [
".claude/commands/feature-development.md",
".claude/commands/add-ecc-bundle-component.md"
".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
]
},
"codex": {

@@ -2,13 +2,13 @@
"version": "2.0",
"technicalLevel": "technical",
"preferredStyle": {
"verbosity": "moderate",
"verbosity": "minimal",
"codeComments": true,
"explanations": true
},
"domains": [
"typescript"
"javascript"
],
"suggestedBy": "ecc-tools-repo-analysis",
"createdAt": "2026-04-03T12:33:26.494Z"
"createdAt": "2026-04-01T22:52:48.730Z"
}
@@ -18,4 +18,4 @@ Use this when the task is documentation-heavy, source-sensitive, or requires bro

- Primary language: JavaScript
- Framework: Not detected
- Workflows detected: 10
- Workflows detected: 9
@@ -4,7 +4,7 @@ Generated by ECC Tools from repository history. Review before treating it as a h

## Commit Workflow

- Prefer `conventional` commit messaging with prefixes such as fix, test, feat, docs.
- Prefer `conventional` commit messaging with prefixes such as fix, feat, docs, chore.
- Keep new changes aligned with the existing pull-request and review flow already present in the repo.

## Architecture
@@ -24,9 +24,9 @@ Generated by ECC Tools from repository history. Review before treating it as a h

## Detected Workflows

- database-migration: Database schema changes with migration files
- feature-development: Standard feature implementation workflow
- add-language-rules: Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
- refactoring: Code refactoring and cleanup workflow
- add-new-skill: Adds a new skill to the codebase, typically as a new agent capability or workflow module.

## Review Reminder

@@ -1,194 +1,175 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. TypeScript project with conventional commits.
---
```markdown
# everything-claude-code Development Patterns

# Everything Claude Code Conventions

> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-04-03
> Auto-generated skill from repository analysis

## Overview

This skill teaches Claude the development patterns and conventions used in everything-claude-code.
This skill teaches the core development patterns, coding conventions, and common workflows used in the `everything-claude-code` repository. The codebase is primarily JavaScript, with no major framework, and is organized around modular skills, agent definitions, command workflows, and extensible install targets. The repository emphasizes clear documentation, conventional commits, and maintainable architecture for agent and workflow development.

## Tech Stack
## Coding Conventions

- **Primary Language**: TypeScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
- **File Naming:**
Use `camelCase` for JavaScript files and directories.
*Example:*
```
scripts/lib/installTargets.js
agentPrompts.md
```

## When to Use This Skill
- **Import Style:**
Use **relative imports** for modules.
*Example:*
```js
const installTarget = require('../lib/installTargets');
```

Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
- **Export Style:**
Both CommonJS (`module.exports`) and ES module (`export`) styles are present.
*Example (CommonJS):*
```js
module.exports = function mySkill() { ... };
```
*Example (ES Module):*
```js
export function mySkill() { ... }
```

## Commit Conventions
- **Commit Messages:**
Use [Conventional Commits](https://www.conventionalcommits.org/).
Prefixes: `fix`, `feat`, `docs`, `chore`
*Example:*
```
feat: add Gemini install target support
fix: correct agent prompt registration logic
```

Follow these commit message conventions based on 10 analyzed commits.
- **Documentation:**
Each skill or agent should have a `SKILL.md` or `.md` documentation file explaining its purpose and usage.

### Commit Style: Conventional Commits
## Workflows

### Prefixes Used
### Add New Skill
**Trigger:** When introducing a new skill for agents or workflows
**Command:** `/add-skill`

- `feat`
1. Create a new `SKILL.md` file under `skills/`, `.agents/skills/`, or `.claude/skills/`.
2. Document the skill's purpose, usage, and configuration.
3. Optionally update manifests or documentation if the skill is significant.

### Message Guidelines

- Average message length: ~94 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
*Example:*
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.claude/identity.json)
skills/myNewSkill/SKILL.md
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/explorer.toml)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/reviewer.toml)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.codex/agents/docs-researcher.toml)
```

*Commit message example*

```text
feat: add everything-claude-code-conventions ECC bundle (.claude/commands/feature-development.md)
```

## Architecture

### Project Structure: Single Package

This project uses **hybrid** module organization.

### Guidelines

- This project uses a hybrid organization
- Follow existing patterns when adding new code

## Code Style

### Language: TypeScript

### Naming Conventions

| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |

### Import Style: Relative Imports

### Export Style: Named Exports

*Preferred import style*

```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```

*Preferred export style*

```typescript
// Use named exports
export function calculateTotal() { ... }
export const TAX_RATE = 0.1
export interface Order { ... }
```

## Common Workflows

These workflows were detected from analyzing commit patterns.

### Feature Development

Standard feature implementation workflow

**Frequency**: ~30 times per month

**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation

**Files typically involved**:
- `.claude/commands/*`

**Example commit sequence**:
```
feat: add everything-claude-code-conventions ECC bundle (.claude/ecc-tools.json)
feat: add everything-claude-code-conventions ECC bundle (.claude/skills/everything-claude-code/SKILL.md)
feat: add everything-claude-code-conventions ECC bundle (.agents/skills/everything-claude-code/SKILL.md)
```

### Add Ecc Bundle Component

Adds a new component to the everything-claude-code-conventions ECC bundle, such as a tool definition, skill file, identity, or command documentation.

**Frequency**: ~5 times per month

**Steps**:
1. Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
2. Commit the file with a message indicating addition to the ECC bundle

**Files typically involved**:
- `.claude/ecc-tools.json`
- `.claude/skills/everything-claude-code/SKILL.md`
- `.agents/skills/everything-claude-code/SKILL.md`
- `.claude/identity.json`
- `.claude/commands/feature-development.md`
- `.claude/commands/add-ecc-bundle-component.md`

**Example commit sequence**:
```
Create or update a file under a relevant ECC bundle directory (e.g., .claude/ecc-tools.json, .claude/skills/everything-claude-code/SKILL.md, .agents/skills/everything-claude-code/SKILL.md, .claude/identity.json, .claude/commands/*.md)
Commit the file with a message indicating addition to the ECC bundle
```

## Best Practices

Based on analysis of the codebase, follow these practices:

### Do

- Use conventional commit format (feat:, fix:, etc.)
- Use camelCase for file names
- Prefer named exports

### Don't

- Don't write vague commit messages
- Don't deviate from established patterns without discussion

---

*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*
### Add New Agent or Agent Prompt
**Trigger:** When introducing a new agent persona or capability
**Command:** `/add-agent`

1. Create a new agent definition (e.g., `agents/myAgent.md` or `.opencode/prompts/agents/myAgent.txt`).
2. Register the agent in the relevant configuration file (e.g., `.opencode/opencode.json`).
3. Update `AGENTS.md` or related documentation.

*Example:*
```
agents/supportAgent.md
```

---

### Add or Update Command Workflow
**Trigger:** When introducing or improving a workflow command
**Command:** `/add-command`

1. Create or update a command file under `commands/` (e.g., `commands/review.md`).
2. Optionally update related documentation or scripts.
3. Address review feedback and refine command logic.

---

### Add Install Target or Adapter
**Trigger:** When supporting a new platform or integration
**Command:** `/add-install-target`

1. Add new install scripts and documentation under a dot-directory (e.g., `.gemini/`, `.codebuddy/`).
2. Update `manifests/install-modules.json` and relevant schema files.
3. Implement or update `scripts/lib/install-targets/*.js` for the new target.
4. Add or update tests for the install target.

*Example:*
```
.gemini/install.sh
scripts/lib/install-targets/gemini.js
tests/lib/install-targets.test.js
```

---

### Refactor or Collapse Commands to Skills
**Trigger:** When modernizing command logic under the skills architecture
**Command:** `/refactor-to-skill`

1. Update or remove `commands/*.md` files.
2. Add or update `skills/*/SKILL.md` files.
3. Update documentation (`README.md`, `AGENTS.md`, `WORKING-CONTEXT.md`, etc.).
4. Update manifests if required.

---

### Documentation and Guidance Update
**Trigger:** When clarifying, updating, or adding documentation
**Command:** `/update-docs`

1. Edit or create documentation files (`README.md`, `WORKING-CONTEXT.md`, `docs/*.md`, etc.).
2. Sync or update guidance in multiple language versions if needed.
3. Optionally update related configuration or manifest files.

---

### CI/CD Workflow Update
**Trigger:** When improving or fixing CI/CD processes
**Command:** `/update-ci`

1. Edit `.github/workflows/*.yml` files.
2. Update lockfiles (`package-lock.json`, `yarn.lock`) or validation scripts.
3. Update or add test files for CI/CD logic.

---

## Testing Patterns

- **Test Files:**
Test files follow the pattern `*.test.js`.

- **Framework:**
No specific testing framework detected; structure your tests in standard Node.js or your preferred test runner.

*Example:*
```
tests/lib/install-targets.test.js
```

- **Test Example:**
```js
// install-targets.test.js
const installTarget = require('../../scripts/lib/install-targets/gemini');

test('should install Gemini target', () => {
expect(installTarget()).toBe(true);
});
```

## Commands

| Command | Purpose |
|--------------------|----------------------------------------------------------------|
| /add-skill | Add a new skill module and document it |
| /add-agent | Add a new agent definition or prompt |
| /add-command | Add or update a command workflow |
| /add-install-target| Add a new install target or adapter |
| /refactor-to-skill | Refactor commands into skill-based modules |
| /update-docs | Update documentation or guidance files |
| /update-ci | Update CI/CD workflows, lockfiles, or validation scripts |
```

@@ -7,9 +7,9 @@
".agents/skills/everything-claude-code/SKILL.md"
],
"commandFiles": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
".claude/commands/refactoring.md",
".claude/commands/add-new-skill.md"
],
"updatedAt": "2026-03-20T12:07:36.496Z"
"updatedAt": "2026-04-01T22:51:43.152Z"
}
@@ -37,7 +37,7 @@
{
"command": "node .cursor/hooks/after-file-edit.js",
"event": "afterFileEdit",
"description": "Auto-format, TypeScript check, console.log warning, and frontend design-quality reminder"
"description": "Auto-format, TypeScript check, console.log warning"
}
],
"beforeMCPExecution": [

@@ -1,5 +1,5 @@
#!/usr/bin/env node
const { hookEnabled, readStdin, runExistingHook, transformToClaude } = require('./adapter');
const { readStdin, runExistingHook, transformToClaude } = require('./adapter');
readStdin().then(raw => {
try {
const input = JSON.parse(raw);
@@ -11,9 +11,6 @@ readStdin().then(raw => {
// Accumulate edited paths for batch format+typecheck at stop time
runExistingHook('post-edit-accumulator.js', claudeStr);
runExistingHook('post-edit-console-warn.js', claudeStr);
if (hookEnabled('post:edit:design-quality-check', ['standard', 'strict'])) {
runExistingHook('design-quality-check.js', claudeStr);
}
} catch {}
process.stdout.write(raw);
}).catch(() => process.exit(0));
@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** providing 38 specialized agents, 156 skills, 72 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 36 specialized agents, 147 skills, 68 commands, and automated hook workflows for software development.

**Version:** 1.9.0

@@ -145,9 +145,9 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
## Project Structure

```
agents/ — 38 specialized subagents
skills/ — 156 workflow skills and domain knowledge
commands/ — 72 slash commands
agents/ — 36 specialized subagents
skills/ — 147 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
scripts/ — Cross-platform Node.js utilities

README.md (16 lines changed)
@@ -225,7 +225,7 @@ For manual install instructions see the README in the `rules/` folder. When copy

/plugin list everything-claude-code@everything-claude-code
```

**That's it!** You now have access to 38 agents, 156 skills, and 72 legacy command shims.
**That's it!** You now have access to 36 agents, 147 skills, and 68 legacy command shims.

### Multi-model commands require additional setup

@@ -943,7 +943,7 @@ Please contribute! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

### Ideas for Contributions

- Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included
- Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included
- Framework-specific configs (Rails, FastAPI, NestJS) — Django, Spring Boot, Laravel already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (different frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile)

@@ -1118,9 +1118,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.

| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | PASS: 38 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 72 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 156 skills | PASS: 37 skills | **Claude Code leads** |
| Agents | PASS: 36 agents | PASS: 12 agents | **Claude Code leads** |
| Commands | PASS: 68 commands | PASS: 31 commands | **Claude Code leads** |
| Skills | PASS: 147 skills | PASS: 37 skills | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
| MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |

@@ -1227,9 +1227,9 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e

| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
| **Agents** | 38 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 72 | Shared | Instruction-based | 31 |
| **Skills** | 156 | Shared | 10 (native format) | 37 |
| **Agents** | 36 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 68 | Shared | Instruction-based | 31 |
| **Skills** | 147 | Shared | 10 (native format) | 37 |
| **Hook Events** | 8 types | 15 types | None yet | 11 types |
| **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
| **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |
@@ -106,7 +106,7 @@ cp -r everything-claude-code/rules/perl ~/.claude/rules/

/plugin list everything-claude-code@everything-claude-code
```

**Done!** You can now use 38 agents, 156 skills, and 72 commands.
**Done!** You can now use 36 agents, 147 skills, and 68 commands.

### multi-* commands require additional setup
@@ -1,6 +1,6 @@

# Working Context

Last updated: 2026-04-02
Last updated: 2026-04-01

## Purpose

@@ -13,7 +13,7 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa

- Local full suite status after fix: `1723/1723` passing
- Main active operational work:
  - keep default branch green
  - continue issue-driven fixes from `main` now that the public PR backlog is at zero
  - audit and classify remaining open PR backlog by full diff
  - continue ECC 2.0 control-plane and operator-surface buildout

## Current Constraints

@@ -24,7 +24,7 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa

## Active Queues

- PR backlog: currently cleared on the public queue; new work should land through direct mainline fixes or fresh narrowly scoped PRs
- PR backlog: audit and classify remaining open PRs into merge, port/rebuild, close, or park
- Product:
  - selective install cleanup
  - control plane primitives

@@ -36,7 +36,6 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa

  - continue one-by-one audit of overlapping or low-signal skill content
  - move repo guidance and contribution flow to skills-first, leaving commands only as explicit compatibility shims
  - add operator skills that wrap connected surfaces instead of exposing only raw APIs or disconnected primitives
  - land the canonical voice system, network-optimization lane, and reusable Manim explainer lane
- Security:
  - keep dependency posture clean
  - preserve self-contained hook and MCP behavior

@@ -60,10 +59,6 @@ Public ECC plugin repo for agents, skills, commands, hooks, rules, install surfa

- Direct-port candidates landed after audit:
  - `#1078` hook-id dedupe for managed Claude hook reinstalls
  - `#844` ui-demo skill
  - `#1110` install-time Claude hook root resolution
  - `#1106` portable Codex Context7 key extraction
  - `#1107` Codex baseline merge and sample agent-role sync
  - `#1119` stale CI/lint cleanup that still contained safe low-risk fixes
- Port or rebuild inside ECC after full audit:
  - `#894` Jira integration
  - `#814` + `#808` rebuild as a single consolidated notifications lane for Opencode and cross-harness surfaces

@@ -84,15 +79,6 @@ Keep this file detailed for only the current sprint, blockers, and next actions.
## Latest Execution Notes

- 2026-04-02: `ECC-Tools/main` shipped `9566637` (`fix: prefer commit lookup over git ref resolution`). The PR-analysis fire is now fixed in the app repo by preferring explicit commit resolution before `git.getRef`, with regression coverage for pull refs and plain branch refs. Mirrored public tracking issue `#1184` in this repo was closed as resolved upstream.
- 2026-04-02: Direct-ported the clean native-support core of `#1043` into `main`: `agents/csharp-reviewer.md`, `skills/dotnet-patterns/SKILL.md`, and `skills/csharp-testing/SKILL.md`. This fills the gap between existing C# rule/docs mentions and actual shipped C# review/testing guidance.
- 2026-04-02: Direct-ported the clean native-support core of `#1055` into `main`: `agents/dart-build-resolver.md`, `commands/flutter-build.md`, `commands/flutter-review.md`, `commands/flutter-test.md`, `rules/dart/*`, and `skills/dart-flutter-patterns/SKILL.md`. The skill paths were wired into the current `framework-language` module instead of replaying the older PR's separate `flutter-dart` module layout.
- 2026-04-02: Closed `#1081` after diff audit. The PR only added vendor-marketing docs for an external X/Twitter backend (`Xquik` / `x-twitter-scraper`) to the canonical `x-api` skill instead of contributing an ECC-native capability.
- 2026-04-02: Direct-ported the useful Jira lane from `#894`, but sanitized it to match current supply-chain policy. `commands/jira.md`, `skills/jira-integration/SKILL.md`, and the pinned `jira` MCP template in `mcp-configs/mcp-servers.json` are in-tree, while the skill no longer tells users to install `uv` via `curl | bash`. `jira-integration` is classified under `operator-workflows` for selective installs.
- 2026-04-02: Closed `#1125` after full diff audit. The bundle/skill-router lane hardcoded many non-existent or non-canonical surfaces and created a second routing abstraction instead of a small ECC-native index layer.
- 2026-04-02: Closed `#1124` after full diff audit. The added agent roster was thoughtfully written, but it duplicated the existing ECC agent surface with a second competing catalog (`dispatch`, `explore`, `verifier`, `executor`, etc.) instead of strengthening canonical agents already in-tree.
- 2026-04-02: Closed the full Argus cluster `#1098`, `#1099`, `#1100`, `#1101`, and `#1102` after full diff audit. The common failure mode was the same across all five PRs: external multi-CLI dispatch was treated as a first-class runtime dependency of shipped ECC surfaces. Any useful protocol ideas should be re-ported later into ECC-native orchestration, review, or reflection lanes without external CLI fan-out assumptions.
- 2026-04-02: The previously open native-support / integration queue (`#1081`, `#1055`, `#1043`, `#894`) has now been fully resolved by direct-port or closure policy. The active public PR queue is currently zero; next focus stays on issue-driven mainline fixes and CI health, not backlog PR intake.
- 2026-04-01: `main` CI was restored locally with `1723/1723` tests passing after lockfile and hook validation fixes.
- 2026-04-01: Auto-generated ECC bundle PRs `#1068` and `#1069` were closed instead of merged; useful ideas must be ported manually after explicit diff audit.
- 2026-04-01: Major-version ESLint bump PRs `#1063` and `#1064` were closed; revisit only inside a planned ESLint 10 migration lane.

@@ -111,21 +97,3 @@ Keep this file detailed for only the current sprint, blockers, and next actions.

- 2026-04-01: Collapsed the obvious command/skill duplicates into thin legacy shims so `skills/` now hold the maintained bodies for NanoClaw, context-budget, DevFleet, docs lookup, E2E, evals, orchestration, prompt optimization, rules distillation, TDD, and verification.
- 2026-04-01: Ported the self-contained core of `#844` directly into `main` as `skills/ui-demo/SKILL.md` and registered it under the `media-generation` install module instead of merging the PR wholesale.
- 2026-04-01: Added the first connected-workflow operator lane as ECC-native skills instead of leaving the surface as raw plugins or APIs: `workspace-surface-audit`, `customer-billing-ops`, `project-flow-ops`, and `google-workspace-ops`. These are tracked under the new `operator-workflows` install module.
- 2026-04-01: Direct-ported the real fix from the unresolved hook-path PR lane into the active installer. Claude installs now replace `${CLAUDE_PLUGIN_ROOT}` with the concrete install root in both `settings.json` and the copied `hooks/hooks.json`, which keeps PreToolUse/PostToolUse hooks working outside plugin-managed env injection.
- 2026-04-01: Replaced the GNU-only `grep -P` parser in `scripts/sync-ecc-to-codex.sh` with a portable Node parser for Context7 key extraction. Added source-level regression coverage so BSD/macOS syncs do not drift back to non-portable parsing.
- 2026-04-01: Targeted regression suite after the direct ports is green: `tests/scripts/install-apply.test.js`, `tests/scripts/sync-ecc-to-codex.test.js`, and `tests/scripts/codex-hooks.test.js`.
- 2026-04-01: Ported the useful core of `#1107` directly into `main` as an add-only Codex baseline merge. `scripts/sync-ecc-to-codex.sh` now fills missing non-MCP defaults from `.codex/config.toml`, syncs sample agent role files into `~/.codex/agents`, and preserves user config instead of replacing it. Added regression coverage for sparse configs and implicit parent tables.
- 2026-04-01: Ported the safe low-risk cleanup from `#1119` directly into `main` instead of keeping an obsolete CI PR open. This included `.mjs` eslint handling, stricter null checks, Windows home-dir coverage in bash-log tests, and longer Trae shell-test timeouts.
- 2026-04-01: Added `brand-voice` as the canonical source-derived writing-style system and wired the content lane to treat it as the shared voice source of truth instead of duplicating partial style heuristics across skills.
- 2026-04-01: Added `connections-optimizer` as the review-first social-graph reorganization workflow for X and LinkedIn, with explicit pruning modes, browser fallback expectations, and Apple Mail drafting guidance.
- 2026-04-01: Added `manim-video` as the reusable technical explainer lane and seeded it with a starter network-graph scene so launch and systems animations do not depend on one-off scratch scripts.
- 2026-04-02: Re-extracted `social-graph-ranker` as a standalone primitive because the weighted bridge-decay model is reusable outside the full lead workflow. `lead-intelligence` now points to it for canonical graph ranking instead of carrying the full algorithm explanation inline, while `connections-optimizer` stays the broader operator layer for pruning, adds, and outbound review packs.
- 2026-04-02: Applied the same consolidation rule to the writing lane. `brand-voice` remains the canonical voice system, while `content-engine`, `crosspost`, `article-writing`, and `investor-outreach` now keep only workflow-specific guidance instead of duplicating a second Affaan/ECC voice model or repeating the full ban list in multiple places.
- 2026-04-02: Closed fresh auto-generated bundle PRs `#1182` and `#1183` under the existing policy. Useful ideas from generator output must be ported manually into canonical repo surfaces instead of merging `.claude`/bundle PRs wholesale.
- 2026-04-02: Ported the safe one-file macOS observer fix from `#1164` directly into `main` as a POSIX `mkdir` fallback for `continuous-learning-v2` lazy-start locking, then closed the PR as superseded by direct port.
- 2026-04-02: Ported the safe core of `#1153` directly into `main`: markdownlint cleanup for orchestration/docs surfaces plus the Windows `USERPROFILE` and path-normalization fixes in `install-apply` / `repair` tests. Local validation after installing repo deps: `node tests/scripts/install-apply.test.js`, `node tests/scripts/repair.test.js`, and targeted `yarn markdownlint` all passed.
- 2026-04-02: Direct-ported the safe web/frontend rules lane from `#1122` into `rules/web/`, but adapted `rules/web/hooks.md` to prefer project-local tooling and avoid remote one-off package execution examples.
- 2026-04-02: Adapted the design-quality reminder from `#1127` into the current ECC hook architecture with a local `scripts/hooks/design-quality-check.js`, Claude `hooks/hooks.json` wiring, Cursor `after-file-edit.js` wiring, and dedicated hook coverage in `tests/hooks/design-quality-check.test.js`.
- 2026-04-02: Fixed `#1141` on `main` in `16e9b17`. The observer lifecycle is now session-aware instead of purely detached: `SessionStart` writes a project-scoped lease, `SessionEnd` removes that lease and stops the observer when the final lease disappears, `observe.sh` records project activity, and `observer-loop.sh` now exits on idle when no leases remain. Targeted validation passed with `bash -n`, `node tests/hooks/observer-memory.test.js`, `node tests/integration/hooks.test.js`, `node scripts/ci/validate-hooks.js hooks/hooks.json`, and `node scripts/ci/check-unicode-safety.js`.
- 2026-04-02: Fixed the remaining Windows-only hook regression behind `#1070` by making `scripts/lib/utils.js#getHomeDir()` honor explicit `HOME` / `USERPROFILE` overrides before falling back to `os.homedir()`. This restores test-isolated observer state paths for hook integration runs on Windows. Added regression coverage in `tests/lib/utils.test.js`. Targeted validation passed with `node tests/lib/utils.test.js`, `node tests/integration/hooks.test.js`, `node tests/hooks/observer-memory.test.js`, and `node scripts/ci/check-unicode-safety.js`.
- 2026-04-02: Direct-ported NestJS support for `#1022` into `main` as `skills/nestjs-patterns/SKILL.md` and wired it into the `framework-language` install module. Synced the repo catalog afterward (`38` agents, `72` commands, `156` skills) and updated the docs so NestJS is no longer listed as an unfilled framework gap.
@@ -1,101 +0,0 @@
---
name: csharp-reviewer
description: Expert C# code reviewer specializing in .NET conventions, async patterns, security, nullable reference types, and performance. Use for all C# code changes. MUST BE USED for C# projects.
tools: ["Read", "Grep", "Glob", "Bash"]
model: sonnet
---

You are a senior C# code reviewer ensuring high standards of idiomatic .NET code and best practices.

When invoked:
1. Run `git diff -- '*.cs'` to see recent C# file changes
2. Run `dotnet build` and `dotnet format --verify-no-changes` if available
3. Focus on modified `.cs` files
4. Begin review immediately

## Review Priorities

### CRITICAL — Security
- **SQL Injection**: String concatenation/interpolation in queries — use parameterized queries or EF Core
- **Command Injection**: Unvalidated input in `Process.Start` — validate and sanitize
- **Path Traversal**: User-controlled file paths — use `Path.GetFullPath` + prefix check
- **Insecure Deserialization**: `BinaryFormatter`, `JsonSerializer` with `TypeNameHandling.All`
- **Hardcoded secrets**: API keys, connection strings in source — use configuration/secret manager
- **CSRF/XSS**: Missing `[ValidateAntiForgeryToken]`, unencoded output in Razor

### CRITICAL — Error Handling
- **Empty catch blocks**: `catch { }` or `catch (Exception) { }` — handle or rethrow
- **Swallowed exceptions**: `catch { return null; }` — log context, throw specific
- **Missing `using`/`await using`**: Manual disposal of `IDisposable`/`IAsyncDisposable`
- **Blocking async**: `.Result`, `.Wait()`, `.GetAwaiter().GetResult()` — use `await`

### HIGH — Async Patterns
- **Missing CancellationToken**: Public async APIs without cancellation support
- **Fire-and-forget**: `async void` except event handlers — return `Task`
- **ConfigureAwait misuse**: Library code missing `ConfigureAwait(false)`
- **Sync-over-async**: Blocking calls in async context causing deadlocks

### HIGH — Type Safety
- **Nullable reference types**: Nullable warnings ignored or suppressed with `!`
- **Unsafe casts**: `(T)obj` without type check — use `obj is T t` or `obj as T`
- **Raw strings as identifiers**: Magic strings for config keys, routes — use constants or `nameof`
- **`dynamic` usage**: Avoid `dynamic` in application code — use generics or explicit models

### HIGH — Code Quality
- **Large methods**: Over 50 lines — extract helper methods
- **Deep nesting**: More than 4 levels — use early returns, guard clauses
- **God classes**: Classes with too many responsibilities — apply SRP
- **Mutable shared state**: Static mutable fields — use `ConcurrentDictionary`, `Interlocked`, or DI scoping

### MEDIUM — Performance
- **String concatenation in loops**: Use `StringBuilder` or `string.Join`
- **LINQ in hot paths**: Excessive allocations — consider `for` loops with pre-allocated buffers
- **N+1 queries**: EF Core lazy loading in loops — use `Include`/`ThenInclude`
- **Missing `AsNoTracking`**: Read-only queries tracking entities unnecessarily

### MEDIUM — Best Practices
- **Naming conventions**: PascalCase for public members, `_camelCase` for private fields
- **Record vs class**: Value-like immutable models should be `record` or `record struct`
- **Dependency injection**: `new`-ing services instead of injecting — use constructor injection
- **`IEnumerable` multiple enumeration**: Materialize with `.ToList()` when enumerated more than once
- **Missing `sealed`**: Non-inherited classes should be `sealed` for clarity and performance

## Diagnostic Commands

```bash
dotnet build                                # Compilation check
dotnet format --verify-no-changes           # Format check
dotnet test --no-build                      # Run tests
dotnet test --collect:"XPlat Code Coverage" # Coverage
```

## Review Output Format

```text
[SEVERITY] Issue title
File: path/to/File.cs:42
Issue: Description
Fix: What to change
```

## Approval Criteria

- **Approve**: No CRITICAL or HIGH issues
- **Warning**: MEDIUM issues only (can merge with caution)
- **Block**: CRITICAL or HIGH issues found

## Framework Checks

- **ASP.NET Core**: Model validation, auth policies, middleware order, `IOptions<T>` pattern
- **EF Core**: Migration safety, `Include` for eager loading, `AsNoTracking` for reads
- **Minimal APIs**: Route grouping, endpoint filters, proper `TypedResults`
- **Blazor**: Component lifecycle, `StateHasChanged` usage, JS interop disposal

## Reference

For detailed C# patterns, see skill: `dotnet-patterns`.
For testing guidelines, see skill: `csharp-testing`.

---

Review with the mindset: "Would this code pass review at a top .NET shop or open-source project?"
@@ -1,201 +0,0 @@
---
name: dart-build-resolver
description: Dart/Flutter build, analysis, and dependency error resolution specialist. Fixes `dart analyze` errors, Flutter compilation failures, pub dependency conflicts, and build_runner issues with minimal, surgical changes. Use when Dart/Flutter builds fail.
tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
model: sonnet
---

# Dart/Flutter Build Error Resolver

You are an expert Dart/Flutter build error resolution specialist. Your mission is to fix Dart analyzer errors, Flutter compilation issues, pub dependency conflicts, and build_runner failures with **minimal, surgical changes**.

## Core Responsibilities

1. Diagnose `dart analyze` and `flutter analyze` errors
2. Fix Dart type errors, null safety violations, and missing imports
3. Resolve `pubspec.yaml` dependency conflicts and version constraints
4. Fix `build_runner` code generation failures
5. Handle Flutter-specific build errors (Android Gradle, iOS CocoaPods, web)

## Diagnostic Commands

Run these in order:

```bash
# Check Dart/Flutter analysis errors
flutter analyze 2>&1
# or for pure Dart projects
dart analyze 2>&1

# Check pub dependency resolution
flutter pub get 2>&1

# Check if code generation is stale
dart run build_runner build --delete-conflicting-outputs 2>&1

# Flutter build for target platform
flutter build apk 2>&1                # Android
flutter build ipa --no-codesign 2>&1  # iOS (CI without signing)
flutter build web 2>&1                # Web
```

## Resolution Workflow

```text
1. flutter analyze    -> Parse error messages
2. Read affected file -> Understand context
3. Apply minimal fix  -> Only what's needed
4. flutter analyze    -> Verify fix
5. flutter test       -> Ensure nothing broke
```

## Common Fix Patterns

| Error | Cause | Fix |
|-------|-------|-----|
| `The name 'X' isn't defined` | Missing import or typo | Add correct `import` or fix name |
| `A value of type 'X?' can't be assigned to type 'X'` | Null safety — nullable not handled | Add `!`, `?? default`, or null check |
| `The argument type 'X' can't be assigned to 'Y'` | Type mismatch | Fix type, add explicit cast, or correct API call |
| `Non-nullable instance field 'x' must be initialized` | Missing initializer | Add initializer, mark `late`, or make nullable |
| `The method 'X' isn't defined for type 'Y'` | Wrong type or wrong import | Check type and imports |
| `'await' applied to non-Future` | Awaiting a non-async value | Remove `await` or make function async |
| `Missing concrete implementation of 'X'` | Abstract interface not fully implemented | Add missing method implementations |
| `The class 'X' doesn't implement 'Y'` | Missing `implements` or missing method | Add method or fix class signature |
| `Because X depends on Y >=A and Z depends on Y <B, version solving failed` | Pub version conflict | Adjust version constraints or add `dependency_overrides` |
| `Could not find a file named "pubspec.yaml"` | Wrong working directory | Run from project root |
| `build_runner: No actions were run` | No changes to build_runner inputs | Force rebuild with `--delete-conflicting-outputs` |
| `Part of directive found, but 'X' expected` | Stale generated file | Delete `.g.dart` file and re-run build_runner |
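The null-safety rows in the table above can be illustrated with a small model. This is a minimal sketch (the class and field names are hypothetical, not taken from any real codebase) showing two of the fixes side by side:

```dart
// Hypothetical model illustrating fixes for:
//   "Non-nullable instance field 'x' must be initialized"
//   "A value of type 'String?' can't be assigned to type 'String'"
class Profile {
  // `late` works when the field is always assigned before first read.
  late final String displayName;

  // Making the field nullable works when absence is a valid state.
  String? avatarUrl;

  Profile.fromJson(Map<String, dynamic> json) {
    // `?? fallback` converts String? to String without a force unwrap.
    displayName = (json['displayName'] as String?) ?? 'Anonymous';
    avatarUrl = json['avatarUrl'] as String?;
  }
}
```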
## Pub Dependency Troubleshooting

```bash
# Show full dependency tree
flutter pub deps

# Check why a specific package version was chosen
flutter pub deps --style=compact | grep <package>

# Upgrade packages to latest compatible versions
flutter pub upgrade

# Upgrade specific package
flutter pub upgrade <package_name>

# Clear pub cache if metadata is corrupted
flutter pub cache repair

# Verify pubspec.lock is consistent
flutter pub get --enforce-lockfile
```

## Null Safety Fix Patterns

```dart
// Error: A value of type 'String?' can't be assigned to type 'String'
// BAD — force unwrap
final name = user.name!;

// GOOD — provide fallback
final name = user.name ?? 'Unknown';

// GOOD — guard and return early
if (user.name == null) return;
final name = user.name!; // safe after null check

// GOOD — Dart 3 pattern matching
final name = switch (user.name) {
  final n? => n,
  null => 'Unknown',
};
```

## Type Error Fix Patterns

```dart
// Error: The argument type 'List<dynamic>' can't be assigned to 'List<String>'
// BAD
final ids = jsonList; // inferred as List<dynamic>

// GOOD
final ids = List<String>.from(jsonList);
// or
final ids = (jsonList as List).cast<String>();
```

## build_runner Troubleshooting

```bash
# Clean and regenerate all files
dart run build_runner clean
dart run build_runner build --delete-conflicting-outputs

# Watch mode for development
dart run build_runner watch --delete-conflicting-outputs

# Check for missing build_runner dependencies in pubspec.yaml
# Required: build_runner, json_serializable / freezed / riverpod_generator (as dev_dependencies)
```
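For the stale-generated-file failure mode, a code-generated model typically pairs a `part` directive with build_runner output. This is a minimal sketch, assuming `json_annotation` and `json_serializable` are set up in `pubspec.yaml` (the `User` model is hypothetical); when `user.g.dart` is stale or missing, deleting it and rerunning build_runner is the fix:

```dart
import 'package:json_annotation/json_annotation.dart';

// Generated by build_runner; a stale or missing user.g.dart is what
// triggers the "Part of directive found" analyzer error.
part 'user.g.dart';

@JsonSerializable()
class User {
  final String id;
  final String name;

  User({required this.id, required this.name});

  factory User.fromJson(Map<String, dynamic> json) => _$UserFromJson(json);
  Map<String, dynamic> toJson() => _$UserToJson(this);
}
```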
## Android Build Troubleshooting

```bash
# Clean Android build cache
cd android && ./gradlew clean && cd ..

# Invalidate Flutter tool cache
flutter clean

# Rebuild
flutter pub get && flutter build apk

# Check Gradle/JDK version compatibility
cd android && ./gradlew --version
```

## iOS Build Troubleshooting

```bash
# Update CocoaPods
cd ios && pod install --repo-update && cd ..

# Clean iOS build
flutter clean && cd ios && pod deintegrate && pod install && cd ..

# Check for platform version mismatches in Podfile
# Ensure ios platform version >= minimum required by all pods
```

## Key Principles

- **Surgical fixes only** — don't refactor, just fix the error
- **Never** add `// ignore:` suppressions without approval
- **Never** use `dynamic` to silence type errors
- **Always** run `flutter analyze` after each fix to verify
- Fix root cause over suppressing symptoms
- Prefer null-safe patterns over bang operators (`!`)

## Stop Conditions

Stop and report if:
- Same error persists after 3 fix attempts
- Fix introduces more errors than it resolves
- Requires architectural changes or package upgrades that change behavior
- Conflicting platform constraints need user decision

## Output Format

```text
[FIXED] lib/features/cart/data/cart_repository_impl.dart:42
Error: A value of type 'String?' can't be assigned to type 'String'
Fix: Changed `final id = response.id` to `final id = response.id ?? ''`
Remaining errors: 2

[FIXED] pubspec.yaml
Error: Version solving failed — http >=0.13.0 required by dio and <0.13.0 required by retrofit
Fix: Upgraded dio to ^5.3.0 which allows http >=0.13.0
Remaining errors: 0
```

Final: `Build Status: SUCCESS/FAILED | Errors Fixed: N | Files Modified: list`

For detailed Dart patterns and code examples, see `skill: flutter-dart-code-review`.
@@ -1,164 +0,0 @@
|
||||
---
|
||||
description: Fix Dart analyzer errors and Flutter build failures incrementally. Invokes the dart-build-resolver agent for minimal, surgical fixes.
|
||||
---
|
||||
|
||||
# Flutter Build and Fix
|
||||
|
||||
This command invokes the **dart-build-resolver** agent to incrementally fix Dart/Flutter build errors with minimal changes.
|
||||
|
||||
## What This Command Does
|
||||
|
||||
1. **Run Diagnostics**: Execute `flutter analyze`, `flutter pub get`
|
||||
2. **Parse Errors**: Group by file and sort by severity
|
||||
3. **Fix Incrementally**: One error at a time
|
||||
4. **Verify Each Fix**: Re-run analysis after each change
|
||||
5. **Report Summary**: Show what was fixed and what remains
|
||||
|
||||
## When to Use
|
||||
|
||||
Use `/flutter-build` when:
|
||||
- `flutter analyze` reports errors
|
||||
- `flutter build` fails for any platform
|
||||
- `dart pub get` / `flutter pub get` fails with version conflicts
|
||||
- `build_runner` fails to generate code
|
||||
- After pulling changes that break the build
|
||||
|
||||
## Diagnostic Commands Run
|
||||
|
||||
```bash
|
||||
# Analysis
|
||||
flutter analyze 2>&1
|
||||
|
||||
# Dependencies
|
||||
flutter pub get 2>&1
|
||||
|
||||
# Code generation (if project uses build_runner)
|
||||
dart run build_runner build --delete-conflicting-outputs 2>&1
|
||||
|
||||
# Platform builds
|
||||
flutter build apk 2>&1
|
||||
flutter build web 2>&1
|
||||
```
|
||||
|
||||
## Example Session
|
||||
|
||||
````text
|
||||
User: /flutter-build
|
||||
|
||||
Agent:
|
||||
# Flutter Build Resolution
|
||||
|
||||
## Initial Diagnostics
|
||||
|
||||
```
|
||||
$ flutter analyze
|
||||
Analyzing project...
|
||||
|
||||
error • lib/features/user/data/user_repository_impl.dart:34:16 •
|
||||
A value of type 'String?' can't be assigned to a variable of type 'String'. •
invalid_assignment

error • lib/features/cart/presentation/cart_page.dart:58:5 •
The method 'add' isn't defined for the type 'List<Item>'.
Try correcting the name to the name of an existing method, or defining a method named 'add'. •
undefined_method

2 errors found.
```

## Fix 1: Null Safety

File: lib/features/user/data/user_repository_impl.dart:34
Error: A value of type 'String?' can't be assigned to type 'String'

Changed:
```dart
final id = response.id;
```
To:
```dart
final id = response.id ?? '';
```

```
$ flutter analyze
1 error found.
```

## Fix 2: Immutable List

File: lib/features/cart/presentation/cart_page.dart:58
Error: The method 'add' isn't defined for the type 'List<Item>'
Cause: State holds an unmodifiable list; mutation goes through Cubit

Changed:
```dart
state.items.add(item);
```
To:
```dart
context.read<CartCubit>().addItem(item);
// Note: Cubit exposes named methods (addItem, removeItem);
// .add(event) is the BLoC event API — don't mix them.
```

```
$ flutter analyze
No issues found!
```

## Final Verification

```
$ flutter test
All tests passed.
```

## Summary

| Metric | Count |
|--------|-------|
| Analysis errors fixed | 2 |
| Files modified | 2 |
| Remaining issues | 0 |

Build Status: PASS ✓
````

## Common Errors Fixed

| Error | Typical Fix |
|-------|-------------|
| `A value of type 'X?' can't be assigned to 'X'` | Add `?? default` or null guard |
| `The name 'X' isn't defined` | Add import or fix typo |
| `Non-nullable instance field must be initialized` | Add initializer or `late` |
| `Version solving failed` | Adjust version constraints in pubspec.yaml |
| `Missing concrete implementation of 'X'` | Implement missing interface method |
| `build_runner: Part of X expected` | Delete stale `.g.dart` and rebuild |
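
The `Non-nullable instance field must be initialized` row has several idiomatic resolutions; a minimal sketch (the `UserService` class and its fields are illustrative, not from the session above):

```dart
class UserService {
  // Option 1: initialize via the constructor.
  UserService(this.apiKey);
  final String apiKey;

  // Option 2: make the field nullable when absence is a valid state.
  String? cachedToken;

  // Option 3: `late` only when initialization is guaranteed before first use.
  late final DateTime startedAt;

  void start() {
    startedAt = DateTime.now();
  }
}
```

Prefer options 1 and 2; `late` trades a compile-time error for a runtime `LateInitializationError` if the guarantee turns out to be wrong.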

## Fix Strategy

1. **Analysis errors first** — code must be error-free
2. **Warning triage second** — fix warnings that could cause runtime bugs
3. **pub conflicts third** — fix dependency resolution
4. **One fix at a time** — verify each change
5. **Minimal changes** — don't refactor, just fix

## Stop Conditions

The agent will stop and report if:
- Same error persists after 3 attempts
- Fix introduces more errors
- Requires architectural changes
- Package upgrade conflicts need user decision

## Related Commands

- `/flutter-test` — Run tests after build succeeds
- `/flutter-review` — Review code quality
- `/verify` — Full verification loop

## Related

- Agent: `agents/dart-build-resolver.md`
- Skill: `skills/flutter-dart-code-review/`
@@ -1,116 +0,0 @@
---
description: Review Flutter/Dart code for idiomatic patterns, widget best practices, state management, performance, accessibility, and security. Invokes the flutter-reviewer agent.
---

# Flutter Code Review

This command invokes the **flutter-reviewer** agent to review Flutter/Dart code changes.

## What This Command Does

1. **Gather Context**: Review `git diff --staged` and `git diff`
2. **Inspect Project**: Check `pubspec.yaml`, `analysis_options.yaml`, state management solution
3. **Security Pre-scan**: Check for hardcoded secrets and critical security issues
4. **Full Review**: Apply the complete review checklist
5. **Report Findings**: Output issues grouped by severity with fix guidance

## Prerequisites

Before running `/flutter-review`, ensure:
1. **Build passes** — run `/flutter-build` first; a review on broken code is incomplete
2. **Tests pass** — run `/flutter-test` to confirm no regressions
3. **No merge conflicts** — resolve all conflicts so the diff reflects only intentional changes
4. **`flutter analyze` is clean** — fix analyzer warnings before review

## When to Use

Use `/flutter-review`:
- Before submitting a PR with Flutter/Dart changes (after build and tests pass)
- After implementing a new feature to catch issues early
- When reviewing someone else's Flutter code
- To audit a widget, state management component, or service class
- Before a production release

## Review Areas

| Area | Severity |
|------|----------|
| Hardcoded secrets, cleartext HTTP | CRITICAL |
| Architecture violations, state management anti-patterns | CRITICAL |
| Widget rebuild issues, resource leaks | HIGH |
| Missing `dispose()`, `BuildContext` after await | HIGH |
| Dart null safety, missing error/loading states | HIGH |
| Const propagation, widget composition | HIGH |
| Performance: expensive work in `build()` | HIGH |
| Accessibility, semantic labels | MEDIUM |
| Missing tests for state transitions | HIGH |
| Hardcoded strings (l10n) | MEDIUM |
| Pub dependency hygiene | LOW |

## Example Session

````text
User: /flutter-review

Agent:
# Flutter Code Review

## Context

Changed files:
- lib/features/auth/presentation/login_page.dart
- lib/features/auth/data/auth_repository_impl.dart

State management: Riverpod (detected from pubspec.yaml)
Architecture: feature-first

## Security Pre-scan

✓ No hardcoded secrets detected
✓ No cleartext HTTP calls

## Review Findings

[HIGH] BuildContext used after async gap without mounted check
File: lib/features/auth/presentation/login_page.dart:67
Issue: `context.go('/home')` called after `await auth.login(...)` with no `mounted` check.
Fix: Add `if (!context.mounted) return;` before any navigation after awaits (Flutter 3.7+).

[HIGH] AsyncValue error state not handled
File: lib/features/auth/presentation/login_page.dart:42
Issue: `ref.watch(authProvider)` switches on loading/data but has no `error` branch.
Fix: Add error case to the switch expression or `when()` call to show a user-facing error message.

[MEDIUM] Hardcoded string not localized
File: lib/features/auth/presentation/login_page.dart:89
Issue: `Text('Login')` — user-visible string not using localization system.
Fix: Use the project's l10n accessor: `Text(context.l10n.loginButton)`.

## Review Summary

| Severity | Count | Status |
|----------|-------|--------|
| CRITICAL | 0 | pass |
| HIGH | 2 | block |
| MEDIUM | 1 | info |
| LOW | 0 | note |

Verdict: BLOCK — HIGH issues must be fixed before merge.
````
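
The first HIGH finding in the session above follows a common pattern; a minimal sketch of the fix (handler and route names are illustrative, assuming go_router and an `auth` service in scope):

```dart
Future<void> _onLoginPressed(BuildContext context) async {
  await auth.login(email, password);
  // The widget may have been disposed while awaiting.
  if (!context.mounted) return;
  context.go('/home');
}
```

The `context.mounted` guard (Flutter 3.7+) is what turns a latent use-after-dispose crash into a clean early return.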

## Approval Criteria

- **Approve**: No CRITICAL or HIGH issues
- **Block**: Any CRITICAL or HIGH issues must be fixed before merge

## Related Commands

- `/flutter-build` — Fix build errors first
- `/flutter-test` — Run tests before reviewing
- `/code-review` — General code review (language-agnostic)

## Related

- Agent: `agents/flutter-reviewer.md`
- Skill: `skills/flutter-dart-code-review/`
- Rules: `rules/dart/`
@@ -1,144 +0,0 @@
---
description: Run Flutter/Dart tests, report failures, and incrementally fix test issues. Covers unit, widget, golden, and integration tests.
---

# Flutter Test

This command runs the Flutter test suite and reports results. When failures occur, it diagnoses and fixes issues incrementally.

## What This Command Does

1. **Run Tests**: Execute `flutter test` (or scope to changed files)
2. **Parse Failures**: Identify failing tests by type and cause
3. **Fix Incrementally**: One failure at a time where possible
4. **Verify**: Re-run after each fix
5. **Report**: Show coverage summary and remaining failures

## When to Use

Use `/flutter-test`:
- After implementing a feature to verify nothing broke
- After running `/flutter-build` to ensure tests pass
- To check test coverage on new code
- When a specific test file is failing
- Before submitting a PR

## Commands Run

```bash
# Run all tests
flutter test 2>&1

# Run with coverage
flutter test --coverage 2>&1

# Run specific test file
flutter test test/unit/domain/usecases/get_user_test.dart 2>&1

# Run tests matching a name pattern
flutter test --name "CartBloc" 2>&1

# Run integration tests (requires device/emulator)
flutter test integration_test/ 2>&1

# Update golden files when intentional visual changes are made
flutter test --update-goldens 2>&1
```

## Example Session

````text
User: /flutter-test

Agent:
# Flutter Test Run

## Test Execution

```
$ flutter test
00:05 +42 -2: Some tests failed.

FAILED: test/unit/data/user_repository_test.dart
  UserRepository getById
    returns null when user not found
    Expected: null
    Actual: User(id: 'test-id', name: 'Test User')

FAILED: test/widget/cart_page_test.dart
  CartPage
    shows empty state when cart is empty
    Expected: exactly one widget with text 'Your cart is empty'
    Found: no widget with text 'Your cart is empty'
```

## Fix 1: Repository Test

File: test/unit/data/user_repository_test.dart
Failure: `getById` returns a user when it should return null for missing ID

Root cause: Test setup adds a user with ID 'test-id', and the query mistakenly uses that same ID instead of a missing one, so a user is returned.
Fix: Updated test to query with 'missing-id' — setup was correct, query was wrong.

```
$ flutter test test/unit/data/user_repository_test.dart
1 test passed.
```

## Fix 2: Widget Test

File: test/widget/cart_page_test.dart
Failure: Empty state text widget not found

Root cause: Empty state message was renamed from 'Your cart is empty' to 'Cart is empty' in the widget.
Fix: Updated test string to match current widget copy.

```
$ flutter test test/widget/cart_page_test.dart
1 test passed.
```

## Final Run

```
$ flutter test --coverage
All 44 tests passed.
Coverage: 84.2% (target: 80%)
```

## Summary

| Metric | Value |
|--------|-------|
| Total tests | 44 |
| Passed | 44 |
| Failed | 0 |
| Coverage | 84.2% |

Test Status: PASS ✓
````

## Common Test Failures

| Failure | Typical Fix |
|---------|-------------|
| `Expected: <X> Actual: <Y>` | Update assertion or fix implementation |
| `Widget not found` | Fix finder selector or update test after widget rename |
| `Golden file not found` | Run `flutter test --update-goldens` to generate |
| `Golden mismatch` | Inspect diff; run `--update-goldens` if change was intentional |
| `MissingPluginException` | Mock platform channel in test setup |
| `LateInitializationError` | Initialize `late` fields in `setUp()` |
| `pumpAndSettle timed out` | Replace with explicit `pump(Duration)` calls |
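
The `pumpAndSettle timed out` row usually means a repeating animation (spinner, shimmer) never settles; a minimal sketch of the explicit-pump fix (`MyApp` is a placeholder for the widget under test):

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('shows spinner while loading', (tester) async {
    await tester.pumpWidget(const MyApp());
    // pumpAndSettle would time out while CircularProgressIndicator
    // animates forever; advance test time explicitly instead.
    await tester.pump(const Duration(milliseconds: 300));
    expect(find.byType(CircularProgressIndicator), findsOneWidget);
  });
}
```

Each `pump` advances the fake clock by the given duration and rebuilds once, so repeating animations no longer block the test.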

## Related Commands

- `/flutter-build` — Fix build errors before running tests
- `/flutter-review` — Review code after tests pass
- `/tdd` — Test-driven development workflow

## Related

- Agent: `agents/flutter-reviewer.md`
- Agent: `agents/dart-build-resolver.md`
- Skill: `skills/flutter-dart-code-review/`
- Rules: `rules/dart/testing.md`
commands/jira.md

@@ -1,106 +0,0 @@
---
description: Retrieve a Jira ticket, analyze requirements, update status, or add comments. Uses the jira-integration skill and MCP or REST API.
---

# Jira Command

Interact with Jira tickets directly from your workflow — fetch tickets, analyze requirements, add comments, and transition status.

## Usage

```
/jira get <TICKET-KEY>         # Fetch and analyze a ticket
/jira comment <TICKET-KEY>     # Add a progress comment
/jira transition <TICKET-KEY>  # Change ticket status
/jira search <JQL>             # Search issues with JQL
```

## What This Command Does

1. **Get & Analyze** — Fetch a Jira ticket and extract requirements, acceptance criteria, test scenarios, and dependencies
2. **Comment** — Add structured progress updates to a ticket
3. **Transition** — Move a ticket through workflow states (To Do → In Progress → Done)
4. **Search** — Find issues using JQL queries

## How It Works

### `/jira get <TICKET-KEY>`

1. Fetch the ticket from Jira (via MCP `jira_get_issue` or REST API)
2. Extract all fields: summary, description, acceptance criteria, priority, labels, linked issues
3. Optionally fetch comments for additional context
4. Produce a structured analysis:

```
Ticket: PROJ-1234
Summary: [title]
Status: [status]
Priority: [priority]
Type: [Story/Bug/Task]

Requirements:
1. [extracted requirement]
2. [extracted requirement]

Acceptance Criteria:
- [ ] [criterion from ticket]

Test Scenarios:
- Happy Path: [description]
- Error Case: [description]
- Edge Case: [description]

Dependencies:
- [linked issues, APIs, services]

Recommended Next Steps:
- /plan to create implementation plan
- /tdd to implement with tests first
```

### `/jira comment <TICKET-KEY>`

1. Summarize current session progress (what was built, tested, committed)
2. Format as a structured comment
3. Post to the Jira ticket

### `/jira transition <TICKET-KEY>`

1. Fetch available transitions for the ticket
2. Show options to user
3. Execute the selected transition

### `/jira search <JQL>`

1. Execute the JQL query against Jira
2. Return a summary table of matching issues

## Prerequisites

This command requires Jira credentials. Choose one:

**Option A — MCP Server (recommended):**
Add `jira` to your `mcpServers` config (see `mcp-configs/mcp-servers.json` for the template).

**Option B — Environment variables:**
```bash
export JIRA_URL="https://yourorg.atlassian.net"
export JIRA_EMAIL="your.email@example.com"
export JIRA_API_TOKEN="your-api-token"
```
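
A quick way to verify Option B credentials is a direct REST call (this assumes Jira Cloud's v3 REST API; the issue key is a placeholder):

```bash
# Fetch one issue using basic auth (email + API token).
curl -s -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  -H "Accept: application/json" \
  "$JIRA_URL/rest/api/3/issue/PROJ-1234"
```

A 200 response with issue JSON confirms the setup; a 401 means the email or token is wrong.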

If credentials are missing, stop and direct the user to set them up.

## Integration with Other Commands

After analyzing a ticket:
- Use `/plan` to create an implementation plan from the requirements
- Use `/tdd` to implement with test-driven development
- Use `/code-review` after implementation
- Use `/jira comment` to post progress back to the ticket
- Use `/jira transition` to move the ticket when work is complete

## Related

- **Skill:** `skills/jira-integration/`
- **MCP config:** `mcp-configs/mcp-servers.json` → `jira`
@@ -24,20 +24,20 @@ Apply the orchestration skills instead of maintaining a second workflow spec her
- Keep handoffs structured, but let the skills define the maintained sequencing rules.
Security Reviewer: [summary]

### FILES CHANGED

FILES CHANGED
-------------
[List all files modified]

### TEST RESULTS

TEST RESULTS
------------
[Test pass/fail summary]

### SECURITY STATUS

SECURITY STATUS
---------------
[Security findings]

### RECOMMENDATION

RECOMMENDATION
--------------
[SHIP / NEEDS WORK / BLOCKED]
```

@@ -111,7 +111,7 @@ Telemetry:

This keeps planner, implementer, reviewer, and loop workers legible from the operator surface.

## Workflow Arguments
## Arguments

$ARGUMENTS:
- `feature <description>` - Full feature workflow

@@ -177,7 +177,7 @@ Next steps:

## Edge Cases

- **No `gh` CLI**: Stop with: "GitHub CLI (`gh`) is required. Install: <https://cli.github.com/>"
- **No `gh` CLI**: Stop with: "GitHub CLI (`gh`) is required. Install: https://cli.github.com/"
- **Not authenticated**: Stop with: "Run `gh auth login` first."
- **Force push needed**: If remote has diverged and rebase was done, use `git push --force-with-lease` (never `--force`).
- **Multiple PR templates**: If `.github/PULL_REQUEST_TEMPLATE/` has multiple files, list them and ask user to choose.

@@ -600,7 +600,7 @@ node tests/hooks/hooks.test.js
### Contribution Ideas

- Language-specific skills (Rust, C#, Swift, Kotlin) — Go, Python, and Java are already included
- Framework-specific configs (Rails, Laravel, FastAPI) — Django, NestJS, and Spring Boot are already included
- Framework-specific configs (Rails, Laravel, FastAPI, NestJS) — Django and Spring Boot are already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (different frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile development)

@@ -631,7 +631,7 @@ node tests/hooks/hooks.test.js
### Contribution Ideas

- Language-specific skills (Rust, C#, Swift, Kotlin) — Go, Python, and Java already included
- Framework-specific configs (Rails, Laravel, FastAPI) — Django, NestJS, and Spring Boot already included
- Framework-specific configs (Rails, Laravel, FastAPI, NestJS) — Django and Spring Boot already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (various frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile)

@@ -441,7 +441,7 @@ Please contribute! See [CONTRIBUTING.md](../../CONTRIBUTING.md) for the guide
### Contribution Ideas

- Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included
- Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included
- Framework-specific configs (Rails, FastAPI, NestJS) — Django, Spring Boot, and Laravel already included
- DevOps agents (Kubernetes, Terraform, AWS, Docker)
- Testing strategies (different frameworks, visual regression)
- Domain-specific knowledge (ML, data engineering, mobile)

||||
@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** for software development, providing 38 specialized agents, 156 skills, 72 commands, and automated hook workflows.
This is a **production-ready AI coding plugin** for software development, providing 36 specialized agents, 147 skills, 68 commands, and automated hook workflows.

**Version:** 1.9.0

@@ -146,9 +146,9 @@
## Project Structure

```
agents/   — 38 specialized subagents
skills/   — 156 workflow skills and domain knowledge
commands/ — 72 slash commands
agents/   — 36 specialized subagents
skills/   — 147 workflow skills and domain knowledge
commands/ — 68 slash commands
hooks/    — trigger-based automation
rules/    — always-follow guidelines (common + per-language)
scripts/  — cross-platform Node.js utilities

@@ -209,7 +209,7 @@ npx ecc-install typescript
/plugin list everything-claude-code@everything-claude-code
```

**Done!** You can now use 38 agents, 156 skills, and 72 commands.
**Done!** You can now use 36 agents, 147 skills, and 68 commands.

***

@@ -927,7 +927,7 @@ node tests/hooks/hooks.test.js
### Contribution Ideas

* Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included
* Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included
* Framework-specific configs (Rails, FastAPI, NestJS) — Django, Spring Boot, and Laravel already included
* DevOps agents (Kubernetes, Terraform, AWS, Docker)
* Testing strategies (different frameworks, visual regression)
* Domain-specific knowledge (ML, data engineering, mobile)
@@ -1094,9 +1094,9 @@ opencode

| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | PASS: 38 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 72 | PASS: 31 | **Claude Code leads** |
| Skills | PASS: 156 | PASS: 37 | **Claude Code leads** |
| Agents | PASS: 36 | PASS: 12 | **Claude Code leads** |
| Commands | PASS: 68 | PASS: 31 | **Claude Code leads** |
| Skills | PASS: 147 | PASS: 37 | **Claude Code leads** |
| Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
| Rules | PASS: 29 | PASS: 13 instructions | **Claude Code leads** |
| MCP servers | PASS: 14 | PASS: full | **full parity** |
@@ -1206,9 +1206,9 @@ ECC is the **first plugin to make maximal use of every major AI coding tool**.

| Feature | Claude Code | Cursor IDE | Codex CLI | OpenCode |
|---------|------------|------------|-----------|----------|
| **Agents** | 38 | shared (AGENTS.md) | shared (AGENTS.md) | 12 |
| **Commands** | 72 | shared | instruction-based | 31 |
| **Skills** | 156 | shared | 10 (native format) | 37 |
| **Agents** | 36 | shared (AGENTS.md) | shared (AGENTS.md) | 12 |
| **Commands** | 68 | shared | instruction-based | 31 |
| **Skills** | 147 | shared | 10 (native format) | 37 |
| **Hook events** | 8 types | 15 types | none yet | 11 types |
| **Hook scripts** | 20+ scripts | 16 scripts (DRY adapters) | N/A | plugin hooks |
| **Rules** | 34 (common + language) | 34 (YAML front matter) | instruction-based | 13 instructions |

@@ -24,11 +24,5 @@ module.exports = [
      'no-undef': 'error',
      'eqeqeq': 'warn'
    }
  },
  {
    files: ['**/*.mjs'],
    languageOptions: {
      sourceType: 'module'
    }
  }
];

@@ -35,7 +35,6 @@ User request → Claude picks a tool → PreToolUse hook runs → Tool executes
| **PR logger** | `Bash` | Logs PR URL and review command after `gh pr create` |
| **Build analysis** | `Bash` | Background analysis after build commands (async, non-blocking) |
| **Quality gate** | `Edit\|Write\|MultiEdit` | Runs fast quality checks after edits |
| **Design quality check** | `Edit\|Write\|MultiEdit` | Warns when frontend edits drift toward generic template-looking UI |
| **Prettier format** | `Edit` | Auto-formats JS/TS files with Prettier after edits |
| **TypeScript check** | `Edit` | Runs `tsc --noEmit` after editing `.ts`/`.tsx` files |
| **console.log warning** | `Edit` | Warns about `console.log` statements in edited files |

@@ -226,18 +226,6 @@
      "description": "Run quality gate checks after file edits",
      "id": "post:quality-gate"
    },
    {
      "matcher": "Edit|Write|MultiEdit",
      "hooks": [
        {
          "type": "command",
          "command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:edit:design-quality-check\" \"scripts/hooks/design-quality-check.js\" \"standard,strict\"",
          "timeout": 10
        }
      ],
      "description": "Warn when frontend edits drift toward generic template-looking UI",
      "id": "post:edit:design-quality-check"
    },
    {
      "matcher": "Edit|Write|MultiEdit",
      "hooks": [

@@ -140,7 +140,7 @@
    {
      "id": "capability:content",
      "family": "capability",
      "description": "Business, writing, market, investor communication, and reusable voice-system skills.",
      "description": "Business, writing, market, and investor communication skills.",
      "modules": [
        "business-content"
      ]
@@ -148,7 +148,7 @@
    {
      "id": "capability:operators",
      "family": "capability",
      "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
      "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
      "modules": [
        "operator-workflows"
      ]
@@ -164,7 +164,7 @@
    {
      "id": "capability:media",
      "family": "capability",
      "description": "Media generation, technical explainers, and AI-assisted editing skills.",
      "description": "Media generation and AI-assisted editing skills.",
      "modules": [
        "media-generation"
      ]

@@ -117,14 +117,11 @@
        "skills/backend-patterns",
        "skills/coding-standards",
        "skills/compose-multiplatform-patterns",
        "skills/csharp-testing",
        "skills/cpp-coding-standards",
        "skills/cpp-testing",
        "skills/dart-flutter-patterns",
        "skills/django-patterns",
        "skills/django-tdd",
        "skills/django-verification",
        "skills/dotnet-patterns",
        "skills/frontend-patterns",
        "skills/frontend-slides",
        "skills/golang-patterns",
@@ -140,7 +137,6 @@
        "skills/laravel-tdd",
        "skills/laravel-verification",
        "skills/mcp-server-patterns",
        "skills/nestjs-patterns",
        "skills/perl-patterns",
        "skills/perl-testing",
        "skills/python-patterns",
@@ -286,12 +282,10 @@
      "description": "Business, writing, market, and investor communication skills.",
      "paths": [
        "skills/article-writing",
        "skills/brand-voice",
        "skills/content-engine",
        "skills/investor-materials",
        "skills/investor-outreach",
        "skills/lead-intelligence",
        "skills/social-graph-ranker",
        "skills/market-research"
      ],
      "targets": [
@@ -312,12 +306,10 @@
    {
      "id": "operator-workflows",
      "kind": "skills",
      "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, Google Workspace, and network optimization.",
      "description": "Connected-app operator workflows for setup audits, billing operations, program tracking, and Google Workspace.",
      "paths": [
        "skills/connections-optimizer",
        "skills/customer-billing-ops",
        "skills/google-workspace-ops",
        "skills/jira-integration",
        "skills/project-flow-ops",
        "skills/workspace-surface-audit"
      ],
@@ -362,10 +354,9 @@
    {
      "id": "media-generation",
      "kind": "skills",
      "description": "Media generation, technical explainers, and AI-assisted editing skills.",
      "description": "Media generation and AI-assisted editing skills.",
      "paths": [
        "skills/fal-ai-media",
        "skills/manim-video",
        "skills/remotion-video-creation",
        "skills/ui-demo",
        "skills/video-editing",

@@ -1,15 +1,5 @@
{
  "mcpServers": {
    "jira": {
      "command": "uvx",
      "args": ["mcp-atlassian==0.21.0"],
      "env": {
        "JIRA_URL": "YOUR_JIRA_URL_HERE",
        "JIRA_EMAIL": "YOUR_JIRA_EMAIL_HERE",
        "JIRA_API_TOKEN": "YOUR_JIRA_API_TOKEN_HERE"
      },
      "description": "Jira issue tracking — search, create, update, comment, transition issues"
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],

@@ -80,7 +80,6 @@
    "scripts/orchestrate-worktrees.js",
    "scripts/setup-package-manager.js",
    "scripts/skill-create-output.js",
    "scripts/codex/merge-codex-config.js",
    "scripts/codex/merge-mcp-config.js",
    "scripts/repair.js",
    "scripts/harness-audit.js",

@@ -17,7 +17,6 @@ rules/
├── typescript/ # TypeScript/JavaScript specific
├── python/     # Python specific
├── golang/     # Go specific
├── web/        # Web and frontend specific
├── swift/      # Swift specific
└── php/        # PHP specific
```
@@ -34,7 +33,6 @@ rules/
./install.sh typescript
./install.sh python
./install.sh golang
./install.sh web
./install.sh swift
./install.sh php

@@ -58,7 +56,6 @@ cp -r rules/common ~/.claude/rules/common
cp -r rules/typescript ~/.claude/rules/typescript
cp -r rules/python ~/.claude/rules/python
cp -r rules/golang ~/.claude/rules/golang
cp -r rules/web ~/.claude/rules/web
cp -r rules/swift ~/.claude/rules/swift
cp -r rules/php ~/.claude/rules/php

@@ -89,8 +86,6 @@ To add support for a new language (e.g., `rust/`):
```
4. Reference existing skills if available, or create new ones under `skills/`.

For non-language domains like `web/`, follow the same layered pattern when there is enough reusable domain-specific guidance to justify a standalone ruleset.

## Rule Priority

When language-specific rules and common rules conflict, **language-specific rules take precedence** (specific overrides general). This follows the standard layered configuration pattern (similar to CSS specificity or `.gitignore` precedence).

@@ -1,159 +0,0 @@
---
paths:
  - "**/*.dart"
  - "**/pubspec.yaml"
  - "**/analysis_options.yaml"
---
# Dart/Flutter Coding Style

> This file extends [common/coding-style.md](../common/coding-style.md) with Dart and Flutter-specific content.

## Formatting

- **dart format** for all `.dart` files — enforced in CI (`dart format --set-exit-if-changed .`)
- Line length: 80 characters (dart format default)
- Trailing commas on multi-line argument/parameter lists to improve diffs and formatting

## Immutability

- Prefer `final` for local variables and `const` for compile-time constants
- Use `const` constructors wherever all fields are `final`
- Return unmodifiable collections from public APIs (`List.unmodifiable`, `Map.unmodifiable`)
- Use `copyWith()` for state mutations in immutable state classes

```dart
// BAD
var count = 0;
List<String> items = ['a', 'b'];

// GOOD
final count = 0;
const items = ['a', 'b'];
```
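
The `copyWith()` convention above, sketched for a small immutable state class (`Item` and `CartState` are illustrative):

```dart
class Item {
  const Item(this.name);
  final String name;
}

class CartState {
  const CartState({required this.items, this.isLoading = false});

  final List<Item> items;
  final bool isLoading;

  // Each omitted parameter falls back to the current field value.
  CartState copyWith({List<Item>? items, bool? isLoading}) {
    return CartState(
      items: items ?? this.items,
      isLoading: isLoading ?? this.isLoading,
    );
  }
}

// Usage: produce a new state instead of mutating the old one.
// final next = state.copyWith(isLoading: true);
```

Code generators like `freezed` can produce `copyWith` automatically if writing it by hand becomes tedious.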
|
||||
|
||||
## Naming
|
||||
|
||||
Follow Dart conventions:
|
||||
- `camelCase` for variables, parameters, and named constructors
|
||||
- `PascalCase` for classes, enums, typedefs, and extensions
|
||||
- `snake_case` for file names and library names
|
||||
- `SCREAMING_SNAKE_CASE` for constants declared with `const` at top level
|
||||
- Prefix private members with `_`
|
||||
- Extension names describe the type they extend: `StringExtensions`, not `MyHelpers`
|
||||
|
||||
## Null Safety
|
||||
|
||||
- Avoid `!` (bang operator) — prefer `?.`, `??`, `if (x != null)`, or Dart 3 pattern matching; reserve `!` only where a null value is a programming error and crashing is the right behaviour
|
||||
- Avoid `late` unless initialization is guaranteed before first use (prefer nullable or constructor init)
|
||||
- Use `required` for constructor parameters that must always be provided
|
||||
|
||||
```dart
|
||||
// BAD — crashes at runtime if user is null
|
||||
final name = user!.name;
|
||||
|
||||
// GOOD — null-aware operators
|
||||
final name = user?.name ?? 'Unknown';
|
||||
|
||||
// GOOD — Dart 3 pattern matching (exhaustive, compiler-checked)
|
||||
final name = switch (user) {
|
||||
User(:final name) => name,
|
||||
null => 'Unknown',
|
||||
};
|
||||
|
||||
// GOOD — early-return null guard
|
||||
String getUserName(User? user) {
|
||||
if (user == null) return 'Unknown';
|
||||
return user.name; // promoted to non-null after the guard
|
||||
}
|
||||
```
|
||||
|
||||
## Sealed Types and Pattern Matching (Dart 3+)
|
||||
|
||||
Use sealed classes to model closed state hierarchies:
|
||||
|
||||
```dart
|
||||
sealed class AsyncState<T> {
|
||||
const AsyncState();
|
||||
}
|
||||
|
||||
final class Loading<T> extends AsyncState<T> {
|
||||
const Loading();
|
||||
}
|
||||
|
||||
final class Success<T> extends AsyncState<T> {
|
||||
const Success(this.data);
|
||||
final T data;
|
||||
}
|
||||
|
||||
final class Failure<T> extends AsyncState<T> {
|
||||
const Failure(this.error);
|
||||
final Object error;
|
||||
}
|
||||
```
|
||||
|
||||
Always use exhaustive `switch` with sealed types — no default/wildcard:
|
||||
|
||||
```dart
|
||||
// BAD
|
||||
if (state is Loading) { ... }
|
||||
|
||||
// GOOD
|
||||
return switch (state) {
|
||||
Loading() => const CircularProgressIndicator(),
|
||||
Success(:final data) => DataWidget(data),
|
||||
Failure(:final error) => ErrorWidget(error.toString()),
|
||||
};
|
||||
```
|
||||
|
||||
## Error Handling

- Specify exception types in `on` clauses — never use bare `catch (e)`
- Never catch `Error` subtypes — they indicate programming bugs
- Use `Result`-style types or sealed classes for recoverable errors
- Avoid using exceptions for control flow

```dart
// BAD
try {
  await fetchUser();
} catch (e) {
  log(e.toString());
}

// GOOD
try {
  await fetchUser();
} on NetworkException catch (e) {
  log('Network error: ${e.message}');
} on NotFoundException {
  handleNotFound();
}
```

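The `Result`-style approach mentioned above can be sketched like this — a minimal illustration, not a specific package's API; `fetchUser` and `NetworkException` are assumed from the example above:

```dart
// Hypothetical Result type for recoverable errors (illustrative names).
sealed class Result<T> {
  const Result();
}

final class Ok<T> extends Result<T> {
  const Ok(this.value);
  final T value;
}

final class Err<T> extends Result<T> {
  const Err(this.error);
  final Object error;
}

// Callers are forced by exhaustive switch to handle both outcomes.
Future<Result<User>> fetchUserSafe(String id) async {
  try {
    return Ok(await fetchUser(id));
  } on NetworkException catch (e) {
    return Err(e);
  }
}
```
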
## Async / Futures

- Always `await` Futures or explicitly call `unawaited()` to signal intentional fire-and-forget
- Never mark a function `async` if it never `await`s anything
- Use `Future.wait` / `Future.any` for concurrent operations
- Check `context.mounted` before using `BuildContext` after any `await` (Flutter 3.7+)

```dart
// BAD — ignoring Future
fetchData(); // fire-and-forget without marking intent

// GOOD
unawaited(fetchData()); // explicit fire-and-forget
await fetchData(); // or properly awaited
```

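The `context.mounted` rule above looks like this in practice — a minimal sketch; `fetchData` is assumed from the example above:

```dart
// Guarding BuildContext across an await (Flutter 3.7+).
Future<void> _refresh(BuildContext context) async {
  await fetchData();
  if (!context.mounted) return; // widget may have been disposed during the await
  ScaffoldMessenger.of(context).showSnackBar(
    const SnackBar(content: Text('Refreshed')),
  );
}
```
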
## Imports

- Use `package:` imports throughout — never relative imports (`../`) for cross-feature or cross-layer code
- Order: `dart:` → external `package:` → internal `package:` (same package)
- No unused imports — `dart analyze` enforces this with `unused_import`

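The ordering rule above, sketched (the package and file names are illustrative):

```dart
// 1. dart: SDK imports
import 'dart:async';

// 2. external packages
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;

// 3. same-package imports (always package:, never ../)
import 'package:my_app/domain/entities/user.dart';
```
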
## Code Generation

- Generated files (`.g.dart`, `.freezed.dart`, `.gr.dart`) must be committed or gitignored consistently — pick one strategy per project
- Never manually edit generated files
- Keep generator annotations (`@JsonSerializable`, `@freezed`, `@riverpod`, etc.) on the canonical source file only

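A typical canonical-source layout for `json_serializable`, as a sketch (class and field names are illustrative):

```dart
// user.dart — canonical source; user.g.dart is generated and never hand-edited.
import 'package:json_annotation/json_annotation.dart';

part 'user.g.dart';

@JsonSerializable()
class User {
  const User({required this.id, required this.name});
  final String id;
  final String name;

  factory User.fromJson(Map<String, dynamic> json) => _$UserFromJson(json);
  Map<String, dynamic> toJson() => _$UserToJson(this);
}
```
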
---
paths:
  - "**/*.dart"
  - "**/pubspec.yaml"
  - "**/analysis_options.yaml"
---
# Dart/Flutter Hooks

> This file extends [common/hooks.md](../common/hooks.md) with Dart and Flutter-specific content.

## PostToolUse Hooks

Configure in `~/.claude/settings.json`:

- **dart format**: Auto-format `.dart` files after edit
- **dart analyze**: Run static analysis after editing Dart files and surface warnings
- **flutter test**: Optionally run affected tests after significant changes

## Recommended Hook Configuration

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": { "tool_name": "Edit", "file_paths": ["**/*.dart"] },
        "hooks": [
          { "type": "command", "command": "dart format $CLAUDE_FILE_PATHS" }
        ]
      }
    ]
  }
}
```

## Pre-commit Checks

Run before committing Dart/Flutter changes:

```bash
dart format --set-exit-if-changed .
dart analyze --fatal-infos
flutter test
```

## Useful One-liners

```bash
# Format all Dart files
dart format .

# Analyze and report issues
dart analyze

# Run all tests with coverage
flutter test --coverage

# Regenerate code-gen files
dart run build_runner build --delete-conflicting-outputs

# Check for outdated packages
flutter pub outdated

# Upgrade packages within constraints
flutter pub upgrade
```

---
paths:
  - "**/*.dart"
  - "**/pubspec.yaml"
---
# Dart/Flutter Patterns

> This file extends [common/patterns.md](../common/patterns.md) with Dart, Flutter, and common ecosystem-specific content.

## Repository Pattern

```dart
abstract interface class UserRepository {
  Future<User?> getById(String id);
  Future<List<User>> getAll();
  Stream<List<User>> watchAll();
  Future<void> save(User user);
  Future<void> delete(String id);
}

class UserRepositoryImpl implements UserRepository {
  const UserRepositoryImpl(this._remote, this._local);

  final UserRemoteDataSource _remote;
  final UserLocalDataSource _local;

  @override
  Future<User?> getById(String id) async {
    final local = await _local.getById(id);
    if (local != null) return local;
    final remote = await _remote.getById(id);
    if (remote != null) await _local.save(remote);
    return remote;
  }

  @override
  Future<List<User>> getAll() async {
    final remote = await _remote.getAll();
    for (final user in remote) {
      await _local.save(user);
    }
    return remote;
  }

  @override
  Stream<List<User>> watchAll() => _local.watchAll();

  @override
  Future<void> save(User user) => _local.save(user);

  @override
  Future<void> delete(String id) async {
    await _remote.delete(id);
    await _local.delete(id);
  }
}
```

## State Management: BLoC/Cubit

```dart
// Cubit — simple state transitions
class CounterCubit extends Cubit<int> {
  CounterCubit() : super(0);

  void increment() => emit(state + 1);
  void decrement() => emit(state - 1);
}

// BLoC — event-driven
@immutable
sealed class CartEvent {}
class CartItemAdded extends CartEvent { CartItemAdded(this.item); final Item item; }
class CartItemRemoved extends CartEvent { CartItemRemoved(this.id); final String id; }
class CartCleared extends CartEvent {}

@immutable
class CartState {
  const CartState({this.items = const []});
  final List<Item> items;
  CartState copyWith({List<Item>? items}) => CartState(items: items ?? this.items);
}

class CartBloc extends Bloc<CartEvent, CartState> {
  CartBloc() : super(const CartState()) {
    on<CartItemAdded>((event, emit) =>
        emit(state.copyWith(items: [...state.items, event.item])));
    on<CartItemRemoved>((event, emit) =>
        emit(state.copyWith(items: state.items.where((i) => i.id != event.id).toList())));
    on<CartCleared>((_, emit) => emit(const CartState()));
  }
}
```

## State Management: Riverpod

```dart
// Simple provider
@riverpod
Future<List<User>> users(Ref ref) async {
  final repo = ref.watch(userRepositoryProvider);
  return repo.getAll();
}

// Notifier for mutable state
@riverpod
class CartNotifier extends _$CartNotifier {
  @override
  List<Item> build() => [];

  void add(Item item) => state = [...state, item];
  void remove(String id) => state = state.where((i) => i.id != id).toList();
  void clear() => state = [];
}

// ConsumerWidget
class CartPage extends ConsumerWidget {
  const CartPage({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    final items = ref.watch(cartNotifierProvider);
    return ListView(
      children: items.map((item) => CartItemTile(item: item)).toList(),
    );
  }
}
```

## Dependency Injection

Constructor injection is preferred. Use `get_it` or Riverpod providers at the composition root:

```dart
// get_it registration (in a setup file)
void setupDependencies() {
  final di = GetIt.instance;
  di.registerSingleton<ApiClient>(ApiClient(baseUrl: Env.apiUrl));
  di.registerSingleton<UserRepository>(
    UserRepositoryImpl(di<ApiClient>(), di<LocalDatabase>()),
  );
  di.registerFactory(() => UserListViewModel(di<UserRepository>()));
}
```

## ViewModel Pattern (without BLoC/Riverpod)

```dart
class UserListViewModel extends ChangeNotifier {
  UserListViewModel(this._repository);

  final UserRepository _repository;

  AsyncState<List<User>> _state = const Loading();
  AsyncState<List<User>> get state => _state;

  Future<void> load() async {
    _state = const Loading();
    notifyListeners();
    try {
      final users = await _repository.getAll();
      _state = Success(users);
    } on Exception catch (e) {
      _state = Failure(e);
    }
    notifyListeners();
  }
}
```

## UseCase Pattern

```dart
class GetUserUseCase {
  const GetUserUseCase(this._repository);
  final UserRepository _repository;

  Future<User?> call(String id) => _repository.getById(id);
}

class CreateUserUseCase {
  const CreateUserUseCase(this._repository, this._idGenerator);
  final UserRepository _repository;
  final IdGenerator _idGenerator; // injected — domain layer must not depend on uuid package directly

  Future<void> call(CreateUserInput input) async {
    // Validate, apply business rules, then persist
    final user = User(id: _idGenerator.generate(), name: input.name, email: input.email);
    await _repository.save(user);
  }
}
```

## Immutable State with freezed

```dart
@freezed
class UserState with _$UserState {
  const factory UserState({
    @Default([]) List<User> users,
    @Default(false) bool isLoading,
    String? errorMessage,
  }) = _UserState;
}
```

## Clean Architecture Layer Boundaries

```
lib/
├── domain/            # Pure Dart — no Flutter, no external packages
│   ├── entities/
│   ├── repositories/  # Abstract interfaces
│   └── usecases/
├── data/              # Implements domain interfaces
│   ├── datasources/
│   ├── models/        # DTOs with fromJson/toJson
│   └── repositories/
└── presentation/      # Flutter widgets + state management
    ├── pages/
    ├── widgets/
    └── providers/ (or blocs/ or viewmodels/)
```

- Domain must not import `package:flutter` or any data-layer package
- Data layer maps DTOs to domain entities at repository boundaries
- Presentation calls use cases, not repositories directly

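The DTO-to-entity mapping at the repository boundary can be sketched like this — a hypothetical `UserDto` assuming a `User` entity with matching fields:

```dart
// data/models/user_dto.dart — DTO stays in the data layer;
// only the domain entity crosses the repository boundary.
class UserDto {
  const UserDto({required this.id, required this.name, required this.email});

  factory UserDto.fromJson(Map<String, dynamic> json) => UserDto(
        id: json['id'] as String,
        name: json['name'] as String,
        email: json['email'] as String,
      );

  final String id;
  final String name;
  final String email;

  // Mapping happens here, so domain code never sees JSON shapes.
  User toEntity() => User(id: id, name: name, email: email);
}
```
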
## Navigation (GoRouter)

```dart
final router = GoRouter(
  routes: [
    GoRoute(
      path: '/',
      builder: (context, state) => const HomePage(),
    ),
    GoRoute(
      path: '/users/:id',
      builder: (context, state) {
        final id = state.pathParameters['id']!;
        return UserDetailPage(userId: id);
      },
    ),
  ],
  // refreshListenable re-evaluates redirect whenever auth state changes
  refreshListenable: GoRouterRefreshStream(authCubit.stream),
  redirect: (context, state) {
    final isLoggedIn = context.read<AuthCubit>().state is AuthAuthenticated;
    if (!isLoggedIn && !state.matchedLocation.startsWith('/login')) {
      return '/login';
    }
    return null;
  },
);
```

## References

See skill: `flutter-dart-code-review` for the comprehensive review checklist.
See skill: `compose-multiplatform-patterns` for Kotlin Multiplatform/Flutter interop patterns.

---
paths:
  - "**/*.dart"
  - "**/pubspec.yaml"
  - "**/AndroidManifest.xml"
  - "**/Info.plist"
---
# Dart/Flutter Security

> This file extends [common/security.md](../common/security.md) with Dart, Flutter, and mobile-specific content.

## Secrets Management

- Never hardcode API keys, tokens, or credentials in Dart source
- Use `--dart-define` or `--dart-define-from-file` for compile-time config (values are not truly secret — use a backend proxy for server-side secrets)
- Use `flutter_dotenv` or equivalent, with `.env` files listed in `.gitignore`
- Store runtime secrets in platform-secure storage: `flutter_secure_storage` (Keychain on iOS, EncryptedSharedPreferences on Android)

```dart
// BAD
const apiKey = 'sk-abc123...';

// GOOD — compile-time config (not secret, just configurable)
const apiKey = String.fromEnvironment('API_KEY');

// GOOD — runtime secret from secure storage
final token = await secureStorage.read(key: 'auth_token');
```

## Network Security

- Enforce HTTPS — no `http://` calls in production
- Configure Android `network_security_config.xml` to block cleartext traffic
- Set `NSAppTransportSecurity` in `Info.plist` to disallow arbitrary loads
- Set request timeouts on all HTTP clients — never leave defaults
- Consider certificate pinning for high-security endpoints

```dart
// Dio with timeout and HTTPS enforcement
final dio = Dio(BaseOptions(
  baseUrl: 'https://api.example.com',
  connectTimeout: const Duration(seconds: 10),
  receiveTimeout: const Duration(seconds: 30),
));
```

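The Android cleartext rule above is a small config file; a minimal sketch (file location is the conventional `res/xml/`, and the manifest must reference it):

```xml
<!-- res/xml/network_security_config.xml — block cleartext traffic app-wide -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

Reference it from `AndroidManifest.xml` via `android:networkSecurityConfig="@xml/network_security_config"` on the `<application>` element.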
## Input Validation

- Validate and sanitize all user input before sending to API or storage
- Never pass unsanitized input to SQL queries — use parameterized queries (sqflite, drift)
- Sanitize deep link URLs before navigation — validate scheme, host, and path parameters
- Use `Uri.tryParse` and validate before navigating

```dart
// BAD — SQL injection
await db.rawQuery("SELECT * FROM users WHERE email = '$userInput'");

// GOOD — parameterized
await db.query('users', where: 'email = ?', whereArgs: [userInput]);

// BAD — unvalidated deep link
final uri = Uri.parse(incomingLink);
context.go(uri.path); // could navigate to any route

// GOOD — validated deep link
final uri = Uri.tryParse(incomingLink);
if (uri != null && uri.host == 'myapp.com' && _allowedPaths.contains(uri.path)) {
  context.go(uri.path);
}
```

## Data Protection

- Store tokens, PII, and credentials only in `flutter_secure_storage`
- Never write sensitive data to `SharedPreferences` or local files in plaintext
- Clear auth state on logout: tokens, cached user data, cookies
- Use biometric authentication (`local_auth`) for sensitive operations
- Avoid logging sensitive data — no `print(token)` or `debugPrint(password)`

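A minimal sketch of the token-storage and logout rules above, using `flutter_secure_storage` (the key names are illustrative):

```dart
// Keychain on iOS, EncryptedSharedPreferences on Android.
const storage = FlutterSecureStorage();

Future<void> saveToken(String token) =>
    storage.write(key: 'auth_token', value: token);

Future<void> logout() async {
  await storage.deleteAll(); // clear tokens and any stored credentials
  // Also clear in-memory user state, caches, and cookies here.
}
```
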
## Android-Specific

- Declare only required permissions in `AndroidManifest.xml`
- Export Android components (`Activity`, `Service`, `BroadcastReceiver`) only when necessary; add `android:exported="false"` where not needed
- Review intent filters — exported components with implicit intent filters are accessible by any app
- Use `FLAG_SECURE` for screens displaying sensitive data (prevents screenshots)

```xml
<!-- AndroidManifest.xml — restrict exported components -->
<activity android:name=".MainActivity" android:exported="true">
  <!-- Only the launcher activity needs exported=true -->
</activity>
<activity android:name=".SensitiveActivity" android:exported="false" />
```

## iOS-Specific

- Declare only required usage descriptions in `Info.plist` (`NSCameraUsageDescription`, etc.)
- Store secrets in Keychain — `flutter_secure_storage` uses Keychain on iOS
- Use App Transport Security (ATS) — disallow arbitrary loads
- Enable the data protection entitlement for sensitive files

## WebView Security

- Use `webview_flutter` v4+ (`WebViewController` / `WebViewWidget`) — the legacy `WebView` widget is removed
- Disable JavaScript unless explicitly required (`JavaScriptMode.disabled`)
- Validate URLs before loading — never load arbitrary URLs from deep links
- Never expose Dart callbacks to JavaScript unless absolutely needed and carefully sandboxed
- Use `NavigationDelegate.onNavigationRequest` to intercept and validate navigation requests

```dart
// webview_flutter v4+ API (WebViewController + WebViewWidget)
final controller = WebViewController()
  ..setJavaScriptMode(JavaScriptMode.disabled) // disabled unless required
  ..setNavigationDelegate(
    NavigationDelegate(
      onNavigationRequest: (request) {
        final uri = Uri.tryParse(request.url);
        if (uri == null || uri.host != 'trusted.example.com') {
          return NavigationDecision.prevent;
        }
        return NavigationDecision.navigate;
      },
    ),
  );

// In your widget tree:
WebViewWidget(controller: controller)
```

## Obfuscation and Build Security

- Enable obfuscation in release builds: `flutter build apk --obfuscate --split-debug-info=./debug-info/`
- Keep `--split-debug-info` output out of version control (used for crash symbolication only)
- Ensure ProGuard/R8 rules don't inadvertently expose serialized classes
- Run `flutter analyze` and address all warnings before release

---
paths:
  - "**/*.dart"
  - "**/pubspec.yaml"
  - "**/analysis_options.yaml"
---
# Dart/Flutter Testing

> This file extends [common/testing.md](../common/testing.md) with Dart and Flutter-specific content.

## Test Framework

- **flutter_test** / **package:test** — built-in test runners
- **mockito** (with `@GenerateMocks`) or **mocktail** (no codegen) for mocking
- **bloc_test** for BLoC/Cubit unit tests
- **fake_async** for controlling time in unit tests
- **integration_test** for end-to-end device tests

## Test Types

| Type | Tool | Location | When to Write |
|------|------|----------|---------------|
| Unit | `package:test` | `test/unit/` | All domain logic, state managers, repositories |
| Widget | `flutter_test` | `test/widget/` | All widgets with meaningful behavior |
| Golden | `flutter_test` | `test/golden/` | Design-critical UI components |
| Integration | `integration_test` | `integration_test/` | Critical user flows on real device/emulator |

## Unit Tests: State Managers

### BLoC with `bloc_test`

```dart
group('CartBloc', () {
  late CartBloc bloc;
  late MockCartRepository repository;

  setUp(() {
    repository = MockCartRepository();
    bloc = CartBloc(repository);
  });

  tearDown(() => bloc.close());

  blocTest<CartBloc, CartState>(
    'emits updated items when CartItemAdded',
    build: () => bloc,
    act: (b) => b.add(CartItemAdded(testItem)),
    expect: () => [CartState(items: [testItem])],
  );

  blocTest<CartBloc, CartState>(
    'emits empty cart when CartCleared',
    seed: () => CartState(items: [testItem]),
    build: () => bloc,
    act: (b) => b.add(CartCleared()),
    expect: () => [const CartState()],
  );
});
```

### Riverpod with `ProviderContainer`

```dart
test('usersProvider loads users from repository', () async {
  final container = ProviderContainer(
    overrides: [userRepositoryProvider.overrideWithValue(FakeUserRepository())],
  );
  addTearDown(container.dispose);

  final result = await container.read(usersProvider.future);
  expect(result, isNotEmpty);
});
```

## Widget Tests

```dart
testWidgets('CartPage shows item count badge', (tester) async {
  await tester.pumpWidget(
    ProviderScope(
      overrides: [
        cartNotifierProvider.overrideWith(() => FakeCartNotifier([testItem])),
      ],
      child: const MaterialApp(home: CartPage()),
    ),
  );

  await tester.pump();
  expect(find.text('1'), findsOneWidget);
  expect(find.byType(CartItemTile), findsOneWidget);
});

testWidgets('shows empty state when cart is empty', (tester) async {
  await tester.pumpWidget(
    ProviderScope(
      overrides: [cartNotifierProvider.overrideWith(() => FakeCartNotifier([]))],
      child: const MaterialApp(home: CartPage()),
    ),
  );

  await tester.pump();
  expect(find.text('Your cart is empty'), findsOneWidget);
});
```

## Fakes Over Mocks

Prefer hand-written fakes for complex dependencies:

```dart
class FakeUserRepository implements UserRepository {
  final _users = <String, User>{};
  Object? fetchError;

  @override
  Future<User?> getById(String id) async {
    if (fetchError != null) throw fetchError!;
    return _users[id];
  }

  @override
  Future<List<User>> getAll() async {
    if (fetchError != null) throw fetchError!;
    return _users.values.toList();
  }

  @override
  Stream<List<User>> watchAll() => Stream.value(_users.values.toList());

  @override
  Future<void> save(User user) async {
    _users[user.id] = user;
  }

  @override
  Future<void> delete(String id) async {
    _users.remove(id);
  }

  void addUser(User user) => _users[user.id] = user;
}
```

## Async Testing

```dart
// Use fake_async for controlling timers and Futures
test('debounce triggers after 300ms', () {
  fakeAsync((async) {
    final debouncer = Debouncer(delay: const Duration(milliseconds: 300));
    var callCount = 0;
    debouncer.run(() => callCount++);
    expect(callCount, 0);
    async.elapse(const Duration(milliseconds: 200));
    expect(callCount, 0);
    async.elapse(const Duration(milliseconds: 200));
    expect(callCount, 1);
  });
});
```

## Golden Tests

```dart
testWidgets('UserCard golden test', (tester) async {
  await tester.pumpWidget(
    MaterialApp(home: UserCard(user: testUser)),
  );

  await expectLater(
    find.byType(UserCard),
    matchesGoldenFile('goldens/user_card.png'),
  );
});
```

Run `flutter test --update-goldens` when intentional visual changes are made.

## Test Naming

Use descriptive, behavior-focused names:

```dart
test('returns null when user does not exist', () { ... });
test('throws NotFoundException when id is empty string', () { ... });
testWidgets('disables submit button while form is invalid', (tester) async { ... });
```

## Test Organization

```
test/
├── unit/
│   ├── domain/
│   │   └── usecases/
│   └── data/
│       └── repositories/
├── widget/
│   └── presentation/
│       └── pages/
└── golden/
    └── widgets/

integration_test/
└── flows/
    ├── login_flow_test.dart
    └── checkout_flow_test.dart
```

## Coverage

- Target 80%+ line coverage for business logic (domain + state managers)
- All state transitions must have tests: loading → success, loading → error, retry
- Run `flutter test --coverage` and inspect `lcov.info` with a coverage reporter
- Coverage failures should block CI when below threshold

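One common way to turn `lcov.info` into a browsable report — this assumes the `lcov` package (which provides `genhtml`) is installed; `open` is the macOS opener, use your platform's equivalent:

```shell
flutter test --coverage                      # writes coverage/lcov.info
genhtml coverage/lcov.info -o coverage/html  # requires lcov to be installed
open coverage/html/index.html
```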
> This file extends [common/coding-style.md](../common/coding-style.md) with web-specific frontend content.

# Web Coding Style

## File Organization

Organize by feature or surface area, not by file type:

```text
src/
├── components/
│   ├── hero/
│   │   ├── Hero.tsx
│   │   ├── HeroVisual.tsx
│   │   └── hero.css
│   ├── scrolly-section/
│   │   ├── ScrollySection.tsx
│   │   ├── StickyVisual.tsx
│   │   └── scrolly.css
│   └── ui/
│       ├── Button.tsx
│       ├── SurfaceCard.tsx
│       └── AnimatedText.tsx
├── hooks/
│   ├── useReducedMotion.ts
│   └── useScrollProgress.ts
├── lib/
│   ├── animation.ts
│   └── color.ts
└── styles/
    ├── tokens.css
    ├── typography.css
    └── global.css
```

## CSS Custom Properties

Define design tokens as variables. Do not hardcode palette, typography, or spacing repeatedly:

```css
:root {
  --color-surface: oklch(98% 0 0);
  --color-text: oklch(18% 0 0);
  --color-accent: oklch(68% 0.21 250);

  --text-base: clamp(1rem, 0.92rem + 0.4vw, 1.125rem);
  --text-hero: clamp(3rem, 1rem + 7vw, 8rem);

  --space-section: clamp(4rem, 3rem + 5vw, 10rem);

  --duration-fast: 150ms;
  --duration-normal: 300ms;
  --ease-out-expo: cubic-bezier(0.16, 1, 0.3, 1);
}
```

## Animation-Only Properties

Prefer compositor-friendly motion:
- `transform`
- `opacity`
- `clip-path`
- `filter` (sparingly)

Avoid animating layout-bound properties:
- `width`
- `height`
- `top`
- `left`
- `margin`
- `padding`
- `border`
- `font-size`

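A compositor-friendly reveal using only `transform` and `opacity` — class names are illustrative; the custom properties are assumed from the token block earlier in this file:

```css
/* Animate transform + opacity only; layout never reflows. */
.card-enter {
  opacity: 0;
  transform: translateY(16px);
  transition:
    opacity var(--duration-normal) var(--ease-out-expo),
    transform var(--duration-normal) var(--ease-out-expo);
}

.card-enter.is-visible {
  opacity: 1;
  transform: translateY(0);
}
```
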
## Semantic HTML First

```html
<header>
  <nav aria-label="Main navigation">...</nav>
</header>
<main>
  <section aria-labelledby="hero-heading">
    <h1 id="hero-heading">...</h1>
  </section>
</main>
<footer>...</footer>
```

Do not reach for generic wrapper `div` stacks when a semantic element exists.

## Naming

- Components: PascalCase (`ScrollySection`, `SurfaceCard`)
- Hooks: `use` prefix (`useReducedMotion`)
- CSS classes: kebab-case or utility classes
- Animation timelines: camelCase with intent (`heroRevealTl`)

> This file extends [common/patterns.md](../common/patterns.md) with web-specific design-quality guidance.

# Web Design Quality Standards

## Anti-Template Policy

Do not ship generic template-looking UI. Frontend output should look intentional, opinionated, and specific to the product.

### Banned Patterns

- Default card grids with uniform spacing and no hierarchy
- Stock hero section with centered headline, gradient blob, and generic CTA
- Unmodified library defaults passed off as finished design
- Flat layouts with no layering, depth, or motion
- Uniform radius, spacing, and shadows across every component
- Safe gray-on-white styling with one decorative accent color
- Dashboard-by-numbers layouts with sidebar + cards + charts and no point of view
- Default font stacks used without a deliberate reason

### Required Qualities

Every meaningful frontend surface should demonstrate at least four of these:

1. Clear hierarchy through scale contrast
2. Intentional rhythm in spacing, not uniform padding everywhere
3. Depth or layering through overlap, shadows, surfaces, or motion
4. Typography with character and a real pairing strategy
5. Color used semantically, not just decoratively
6. Hover, focus, and active states that feel designed
7. Grid-breaking editorial or bento composition where appropriate
8. Texture, grain, or atmosphere when it fits the visual direction
9. Motion that clarifies flow instead of distracting from it
10. Data visualization treated as part of the design system, not an afterthought

## Before Writing Frontend Code

1. Pick a specific style direction. Avoid vague defaults like "clean minimal".
2. Define a palette intentionally.
3. Choose typography deliberately.
4. Gather at least a small set of real references.
5. Use ECC design/frontend skills where relevant.

## Worthwhile Style Directions

- Editorial / magazine
- Neo-brutalism
- Glassmorphism with real depth
- Dark luxury or light luxury with disciplined contrast
- Bento layouts
- Scrollytelling
- 3D integration
- Swiss / International
- Retro-futurism

Do not default to dark mode automatically. Choose the visual direction the product actually wants.

## Component Checklist

- [ ] Does it avoid looking like a default Tailwind or shadcn template?
- [ ] Does it have intentional hover/focus/active states?
- [ ] Does it use hierarchy rather than uniform emphasis?
- [ ] Would this look believable in a real product screenshot?
- [ ] If it supports both themes, do both light and dark feel intentional?

> This file extends [common/hooks.md](../common/hooks.md) with web-specific hook recommendations.

# Web Hooks

## Recommended PostToolUse Hooks

Prefer project-local tooling. Do not wire hooks to remote one-off package execution.

### Format on Save

Use the project's existing formatter entrypoint after edits:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm prettier --write \"$FILE_PATH\"",
        "description": "Format edited frontend files"
      }
    ]
  }
}
```

Equivalent local commands via `yarn prettier` or `npm exec prettier --` are fine when they use repo-owned dependencies.

### Lint Check

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm eslint --fix \"$FILE_PATH\"",
        "description": "Run ESLint on edited frontend files"
      }
    ]
  }
}
```

### Type Check

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm tsc --noEmit --pretty false",
        "description": "Type-check after frontend edits"
      }
    ]
  }
}
```

### CSS Lint

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "pnpm stylelint --fix \"$FILE_PATH\"",
        "description": "Lint edited stylesheets"
      }
    ]
  }
}
```

## PreToolUse Hooks

### Guard File Size

Block oversized writes from tool input content, not from a file that may not exist yet:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write",
        "command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{const i=JSON.parse(d);const c=i.tool_input?.content||'';const lines=c.split('\\n').length;if(lines>800){console.error('[Hook] BLOCKED: File exceeds 800 lines ('+lines+' lines)');console.error('[Hook] Split into smaller modules');process.exit(2)}console.log(d)})\"",
        "description": "Block writes that exceed 800 lines"
      }
    ]
  }
}
```

## Stop Hooks
|
||||
|
||||
### Final Build Verification
|
||||
|
||||
```json
|
||||
{
|
||||
"hooks": {
|
||||
"Stop": [
|
||||
{
|
||||
"command": "pnpm build",
|
||||
"description": "Verify the production build at session end"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Ordering
|
||||
|
||||
Recommended order:
|
||||
1. format
|
||||
2. lint
|
||||
3. type check
|
||||
4. build verification
|
||||
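Taken together, the four stages above can be sketched as a single settings file (an illustration, not a drop-in config: the matchers and `pnpm` commands assume the same project setup as the examples above, and entry ordering semantics should be verified against your hook runner):

```json
{
  "hooks": {
    "PostToolUse": [
      { "matcher": "Write|Edit", "command": "pnpm prettier --write \"$FILE_PATH\"", "description": "1. format" },
      { "matcher": "Write|Edit", "command": "pnpm eslint --fix \"$FILE_PATH\"", "description": "2. lint" },
      { "matcher": "Write|Edit", "command": "pnpm tsc --noEmit --pretty false", "description": "3. type check" }
    ],
    "Stop": [
      { "command": "pnpm build", "description": "4. build verification" }
    ]
  }
}
```

This assumes hook entries run in the order they are listed, so formatting lands before lint and type checks.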
@@ -1,79 +0,0 @@

> This file extends [common/patterns.md](../common/patterns.md) with web-specific patterns.

# Web Patterns

## Component Composition

### Compound Components

Use compound components when related UI shares state and interaction semantics:

```tsx
<Tabs defaultValue="overview">
  <Tabs.List>
    <Tabs.Trigger value="overview">Overview</Tabs.Trigger>
    <Tabs.Trigger value="settings">Settings</Tabs.Trigger>
  </Tabs.List>
  <Tabs.Content value="overview">...</Tabs.Content>
  <Tabs.Content value="settings">...</Tabs.Content>
</Tabs>
```

- Parent owns state
- Children consume via context
- Prefer this over prop drilling for complex widgets

### Render Props / Slots

- Use render props or slot patterns when behavior is shared but markup must vary
- Keep keyboard handling, ARIA, and focus logic in the headless layer

### Container / Presentational Split

- Container components own data loading and side effects
- Presentational components receive props and render UI
- Presentational components should stay pure

## State Management

Treat these separately:

| Concern | Tooling |
|---------|---------|
| Server state | TanStack Query, SWR, tRPC |
| Client state | Zustand, Jotai, signals |
| URL state | search params, route segments |
| Form state | React Hook Form or equivalent |

- Do not duplicate server state into client stores
- Derive values instead of storing redundant computed state

## URL As State

Persist shareable state in the URL:
- filters
- sort order
- pagination
- active tab
- search query

## Data Fetching

### Stale-While-Revalidate

- Return cached data immediately
- Revalidate in the background
- Prefer existing libraries instead of rolling this by hand

### Optimistic Updates

- Snapshot current state
- Apply optimistic update
- Roll back on failure
- Emit visible error feedback when rolling back
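The optimistic-update steps above can be sketched framework-free (a minimal illustration under assumptions: `store` is any object with `get`/`set`, and `mutate` stands in for whatever request function the app actually uses):

```javascript
// Minimal optimistic-update helper: snapshot, apply, roll back on failure.
// `store` is a hypothetical get/set container; `mutate` performs the server request.
async function optimisticUpdate(store, optimisticValue, mutate, onError) {
  const snapshot = store.get();   // 1. snapshot current state
  store.set(optimisticValue);     // 2. apply optimistic update
  try {
    await mutate(optimisticValue);
  } catch (err) {
    store.set(snapshot);          // 3. roll back on failure
    onError(err);                 // 4. surface visible error feedback
  }
}
```

In practice a library such as TanStack Query provides this shape (snapshot in `onMutate`, restore in `onError`); the helper only shows the mechanics.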

### Parallel Loading

- Fetch independent data in parallel
- Avoid parent-child request waterfalls
- Prefetch likely next routes or states when justified
@@ -1,64 +0,0 @@

> This file extends [common/performance.md](../common/performance.md) with web-specific performance content.

# Web Performance Rules

## Core Web Vitals Targets

| Metric | Target |
|--------|--------|
| LCP | < 2.5s |
| INP | < 200ms |
| CLS | < 0.1 |
| FCP | < 1.5s |
| TBT | < 200ms |

## Bundle Budget

| Page Type | JS Budget (gzipped) | CSS Budget |
|-----------|---------------------|------------|
| Landing page | < 150kb | < 30kb |
| App page | < 300kb | < 50kb |
| Microsite | < 80kb | < 15kb |

## Loading Strategy

1. Inline critical above-the-fold CSS where justified
2. Preload the hero image and primary font only
3. Defer non-critical CSS or JS
4. Dynamically import heavy libraries

```js
const gsapModule = await import('gsap');
const { ScrollTrigger } = await import('gsap/ScrollTrigger');
```

## Image Optimization

- Explicit `width` and `height`
- `loading="eager"` plus `fetchpriority="high"` for hero media only
- `loading="lazy"` for below-the-fold assets
- Prefer AVIF or WebP with fallbacks
- Never ship source images far beyond rendered size

## Font Loading

- Max two font families unless there is a clear exception
- `font-display: swap`
- Subset where possible
- Preload only the truly critical weight/style

## Animation Performance

- Animate compositor-friendly properties only
- Use `will-change` narrowly and remove it when done
- Prefer CSS for simple transitions
- Use `requestAnimationFrame` or established animation libraries for JS motion
- Avoid scroll handler churn; use IntersectionObserver or well-behaved libraries

## Performance Checklist

- [ ] All images have explicit dimensions
- [ ] No accidental render-blocking resources
- [ ] No layout shifts from dynamic content
- [ ] Motion stays on compositor-friendly properties
- [ ] Third-party scripts load async/defer and only when needed
@@ -1,57 +0,0 @@

> This file extends [common/security.md](../common/security.md) with web-specific security content.

# Web Security Rules

## Content Security Policy

Always configure a production CSP.

### Nonce-Based CSP

Use a per-request nonce for scripts instead of `'unsafe-inline'`.

```text
Content-Security-Policy:
  default-src 'self';
  script-src 'self' 'nonce-{RANDOM}' https://cdn.jsdelivr.net;
  style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
  img-src 'self' data: https:;
  font-src 'self' https://fonts.gstatic.com;
  connect-src 'self' https://*.example.com;
  frame-src 'none';
  object-src 'none';
  base-uri 'self';
```

Adjust origins to the project. Do not cargo-cult this block unchanged.

## XSS Prevention

- Never inject unsanitized HTML
- Avoid `innerHTML` / `dangerouslySetInnerHTML` unless sanitized first
- Escape dynamic template values
- Sanitize user HTML with a vetted local sanitizer when absolutely necessary

## Third-Party Scripts

- Load asynchronously
- Use SRI when serving from a CDN
- Audit quarterly
- Prefer self-hosting for critical dependencies when practical

## HTTPS and Headers

```text
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: camera=(), microphone=(), geolocation=()
```

## Forms

- CSRF protection on state-changing forms
- Rate limiting on submission endpoints
- Validate client and server side
- Prefer honeypots or light anti-abuse controls over heavy-handed CAPTCHA defaults
@@ -1,55 +0,0 @@

> This file extends [common/testing.md](../common/testing.md) with web-specific testing content.

# Web Testing Rules

## Priority Order

### 1. Visual Regression

- Screenshot key breakpoints: 320, 768, 1024, 1440
- Test hero sections, scrollytelling sections, and meaningful states
- Use Playwright screenshots for visual-heavy work
- If both themes exist, test both

### 2. Accessibility

- Run automated accessibility checks
- Test keyboard navigation
- Verify reduced-motion behavior
- Verify color contrast

### 3. Performance

- Run Lighthouse or equivalent against meaningful pages
- Keep CWV targets from [performance.md](performance.md)

### 4. Cross-Browser

- Minimum: Chrome, Firefox, Safari
- Test scrolling, motion, and fallback behavior

### 5. Responsive

- Test 320, 375, 768, 1024, 1440, 1920
- Verify no overflow
- Verify touch interactions

## E2E Shape

```ts
import { test, expect } from '@playwright/test';

test('landing hero loads', async ({ page }) => {
  await page.goto('/');
  await expect(page.locator('h1')).toBeVisible();
});
```

- Avoid flaky timeout-based assertions
- Prefer deterministic waits

## Unit Tests

- Test utilities, data transforms, and custom hooks
- For highly visual components, visual regression often carries more signal than brittle markup assertions
- Visual regression supplements coverage targets; it does not replace them
@@ -1,317 +0,0 @@

#!/usr/bin/env node
'use strict';

/**
 * Merge the non-MCP Codex baseline from `.codex/config.toml` into a target
 * `config.toml` without overwriting existing user choices.
 *
 * Strategy: add-only.
 * - Missing root keys are inserted before the first TOML table.
 * - Missing table keys are appended to existing tables.
 * - Missing tables are appended to the end of the file.
 */

const fs = require('fs');
const path = require('path');

let TOML;
try {
  TOML = require('@iarna/toml');
} catch {
  console.error('[ecc-codex] Missing dependency: @iarna/toml');
  console.error('[ecc-codex] Run: npm install (from the ECC repo root)');
  process.exit(1);
}

const ROOT_KEYS = ['approval_policy', 'sandbox_mode', 'web_search', 'notify', 'persistent_instructions'];
const TABLE_PATHS = [
  'features',
  'profiles.strict',
  'profiles.yolo',
  'agents',
  'agents.explorer',
  'agents.reviewer',
  'agents.docs_researcher',
];
const TOML_HEADER_RE = /^[ \t]*(?:\[[^[\]\n][^\]\n]*\]|\[\[[^[\]\n][^\]\n]*\]\])[ \t]*(?:#.*)?$/m;

function log(message) {
  console.log(`[ecc-codex] ${message}`);
}

function warn(message) {
  console.warn(`[ecc-codex] WARNING: ${message}`);
}

function getNested(obj, pathParts) {
  let current = obj;
  for (const part of pathParts) {
    if (!current || typeof current !== 'object' || !(part in current)) {
      return undefined;
    }
    current = current[part];
  }
  return current;
}

function setNested(obj, pathParts, value) {
  let current = obj;
  for (let i = 0; i < pathParts.length - 1; i += 1) {
    const part = pathParts[i];
    if (!current[part] || typeof current[part] !== 'object' || Array.isArray(current[part])) {
      current[part] = {};
    }
    current = current[part];
  }
  current[pathParts[pathParts.length - 1]] = value;
}

function findFirstTableIndex(raw) {
  const match = TOML_HEADER_RE.exec(raw);
  return match ? match.index : -1;
}

function findTableRange(raw, tablePath) {
  const escaped = tablePath.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const headerPattern = new RegExp(`^[ \\t]*\\[${escaped}\\][ \\t]*(?:#.*)?$`, 'm');
  const match = headerPattern.exec(raw);
  if (!match) {
    return null;
  }

  const headerEnd = raw.indexOf('\n', match.index);
  const bodyStart = headerEnd === -1 ? raw.length : headerEnd + 1;
  const nextHeaderRel = raw.slice(bodyStart).search(TOML_HEADER_RE);
  const bodyEnd = nextHeaderRel === -1 ? raw.length : bodyStart + nextHeaderRel;
  return { bodyStart, bodyEnd };
}

function ensureTrailingNewline(text) {
  return text.endsWith('\n') ? text : `${text}\n`;
}

function insertBeforeFirstTable(raw, block) {
  const normalizedBlock = ensureTrailingNewline(block.trimEnd());
  const firstTableIndex = findFirstTableIndex(raw);
  if (firstTableIndex === -1) {
    const prefix = raw.trimEnd();
    return prefix ? `${prefix}\n${normalizedBlock}` : normalizedBlock;
  }

  const before = raw.slice(0, firstTableIndex).trimEnd();
  const after = raw.slice(firstTableIndex).replace(/^\n+/, '');
  return `${before}\n\n${normalizedBlock}\n${after}`;
}

function appendBlock(raw, block) {
  const prefix = raw.trimEnd();
  const normalizedBlock = block.trimEnd();
  return prefix ? `${prefix}\n\n${normalizedBlock}\n` : `${normalizedBlock}\n`;
}

function stringifyValue(value) {
  return TOML.stringify({ value }).trim().replace(/^value = /, '');
}

function updateInlineTableKeys(raw, tablePath, missingKeys) {
  const pathParts = tablePath.split('.');
  if (pathParts.length < 2) {
    return null;
  }

  const parentPath = pathParts.slice(0, -1).join('.');
  const parentRange = findTableRange(raw, parentPath);
  if (!parentRange) {
    return null;
  }

  const tableKey = pathParts[pathParts.length - 1];
  const escapedKey = tableKey.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const body = raw.slice(parentRange.bodyStart, parentRange.bodyEnd);
  const lines = body.split('\n');
  for (let index = 0; index < lines.length; index += 1) {
    const inlinePattern = new RegExp(`^(\\s*${escapedKey}\\s*=\\s*\\{)(.*?)(\\}\\s*(?:#.*)?)$`);
    const match = inlinePattern.exec(lines[index]);
    if (!match) {
      continue;
    }

    const additions = Object.entries(missingKeys)
      .map(([key, value]) => `${key} = ${stringifyValue(value)}`)
      .join(', ');
    const existingEntries = match[2].trim();
    const nextEntries = existingEntries ? `${existingEntries}, ${additions}` : additions;
    lines[index] = `${match[1]}${nextEntries}${match[3]}`;
    return `${raw.slice(0, parentRange.bodyStart)}${lines.join('\n')}${raw.slice(parentRange.bodyEnd)}`;
  }
  return null;
}

function appendImplicitTable(raw, tablePath, missingKeys) {
  const candidate = appendBlock(raw, stringifyTable(tablePath, missingKeys));
  try {
    TOML.parse(candidate);
    return candidate;
  } catch {
    return null;
  }
}

function appendToTable(raw, tablePath, block, missingKeys = null) {
  const range = findTableRange(raw, tablePath);
  if (!range) {
    if (missingKeys) {
      const inlineUpdated = updateInlineTableKeys(raw, tablePath, missingKeys);
      if (inlineUpdated) {
        return inlineUpdated;
      }

      const appendedTable = appendImplicitTable(raw, tablePath, missingKeys);
      if (appendedTable) {
        return appendedTable;
      }
    }
    warn(`Skipping missing keys for [${tablePath}] because it has no standalone header and could not be safely updated`);
    return raw;
  }

  const before = raw.slice(0, range.bodyEnd).trimEnd();
  const after = raw.slice(range.bodyEnd).replace(/^\n*/, '\n');
  return `${before}\n${block.trimEnd()}\n${after}`;
}

function stringifyRootKeys(keys) {
  return TOML.stringify(keys).trim();
}

function stringifyTable(tablePath, value) {
  const scalarOnly = {};
  for (const [key, entryValue] of Object.entries(value)) {
    if (entryValue && typeof entryValue === 'object' && !Array.isArray(entryValue)) {
      continue;
    }
    scalarOnly[key] = entryValue;
  }

  const snippet = {};
  setNested(snippet, tablePath.split('.'), scalarOnly);
  return TOML.stringify(snippet).trim();
}

function stringifyTableKeys(tableValue) {
  const lines = [];
  for (const [key, value] of Object.entries(tableValue)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      continue;
    }
    lines.push(TOML.stringify({ [key]: value }).trim());
  }
  return lines.join('\n');
}

function main() {
  const args = process.argv.slice(2);
  const configPath = args.find(arg => !arg.startsWith('-'));
  const dryRun = args.includes('--dry-run');

  if (!configPath) {
    console.error('Usage: merge-codex-config.js <config.toml> [--dry-run]');
    process.exit(1);
  }

  const referencePath = path.join(__dirname, '..', '..', '.codex', 'config.toml');
  if (!fs.existsSync(referencePath)) {
    console.error(`[ecc-codex] Reference config not found: ${referencePath}`);
    process.exit(1);
  }

  if (!fs.existsSync(configPath)) {
    console.error(`[ecc-codex] Config file not found: ${configPath}`);
    process.exit(1);
  }

  const raw = fs.readFileSync(configPath, 'utf8');
  const referenceRaw = fs.readFileSync(referencePath, 'utf8');

  let targetConfig;
  let referenceConfig;
  try {
    targetConfig = TOML.parse(raw);
    referenceConfig = TOML.parse(referenceRaw);
  } catch (error) {
    console.error(`[ecc-codex] Failed to parse TOML: ${error.message}`);
    process.exit(1);
  }

  const missingRootKeys = {};
  for (const key of ROOT_KEYS) {
    if (referenceConfig[key] !== undefined && targetConfig[key] === undefined) {
      missingRootKeys[key] = referenceConfig[key];
    }
  }

  const missingTables = [];
  const missingTableKeys = [];
  for (const tablePath of TABLE_PATHS) {
    const pathParts = tablePath.split('.');
    const referenceValue = getNested(referenceConfig, pathParts);
    if (referenceValue === undefined) {
      continue;
    }

    const targetValue = getNested(targetConfig, pathParts);
    if (targetValue === undefined) {
      missingTables.push(tablePath);
      continue;
    }

    const missingKeys = {};
    for (const [key, value] of Object.entries(referenceValue)) {
      if (value && typeof value === 'object' && !Array.isArray(value)) {
        continue;
      }
      if (targetValue[key] === undefined) {
        missingKeys[key] = value;
      }
    }

    if (Object.keys(missingKeys).length > 0) {
      missingTableKeys.push({ tablePath, missingKeys });
    }
  }

  if (
    Object.keys(missingRootKeys).length === 0 &&
    missingTables.length === 0 &&
    missingTableKeys.length === 0
  ) {
    log('All baseline Codex settings already present. Nothing to do.');
    return;
  }

  let nextRaw = raw;
  if (Object.keys(missingRootKeys).length > 0) {
    log(` [add-root] ${Object.keys(missingRootKeys).join(', ')}`);
    nextRaw = insertBeforeFirstTable(nextRaw, stringifyRootKeys(missingRootKeys));
  }

  for (const { tablePath, missingKeys } of missingTableKeys) {
    log(` [add-keys] [${tablePath}] -> ${Object.keys(missingKeys).join(', ')}`);
    nextRaw = appendToTable(nextRaw, tablePath, stringifyTableKeys(missingKeys), missingKeys);
  }

  for (const tablePath of missingTables) {
    log(` [add-table] [${tablePath}]`);
    nextRaw = appendBlock(nextRaw, stringifyTable(tablePath, getNested(referenceConfig, tablePath.split('.'))));
  }

  if (dryRun) {
    log('Dry run — would write the merged Codex baseline.');
    return;
  }

  fs.writeFileSync(configPath, nextRaw, 'utf8');
  log('Done. Baseline Codex settings merged.');
}

main();
@@ -1,131 +0,0 @@

#!/usr/bin/env node
/**
 * PostToolUse hook: lightweight frontend design-quality reminder.
 *
 * This stays self-contained inside ECC. It does not call remote models or
 * install packages. The goal is to catch obviously generic UI drift and keep
 * frontend edits aligned with ECC's stronger design standards.
 */

'use strict';

const fs = require('fs');
const path = require('path');

const FRONTEND_EXTENSIONS = /\.(astro|css|html|jsx|scss|svelte|tsx|vue)$/i;
const MAX_STDIN = 1024 * 1024;

const GENERIC_SIGNALS = [
  { pattern: /\bget started\b/i, label: '"Get Started" CTA copy' },
  { pattern: /\blearn more\b/i, label: '"Learn more" CTA copy' },
  { pattern: /\bgrid-cols-(3|4)\b/, label: 'uniform multi-card grid' },
  { pattern: /\bbg-gradient-to-[trbl]/, label: 'stock gradient utility usage' },
  { pattern: /\btext-center\b/, label: 'centered default layout cues' },
  { pattern: /\bfont-(sans|inter)\b/i, label: 'default font utility' },
];

const CHECKLIST = [
  'visual hierarchy with real contrast',
  'intentional spacing rhythm',
  'depth, layering, or overlap',
  'purposeful hover and focus states',
  'color and typography that feel specific',
];

function getFilePaths(input) {
  const toolInput = input?.tool_input || {};
  if (toolInput.file_path) {
    return [String(toolInput.file_path)];
  }

  if (Array.isArray(toolInput.edits)) {
    return toolInput.edits
      .map(edit => String(edit?.file_path || ''))
      .filter(Boolean);
  }

  return [];
}

function readContent(filePath) {
  try {
    return fs.readFileSync(path.resolve(filePath), 'utf8');
  } catch {
    return '';
  }
}

function detectSignals(content) {
  return GENERIC_SIGNALS.filter(signal => signal.pattern.test(content)).map(signal => signal.label);
}

function buildWarning(frontendPaths, findings) {
  const pathLines = frontendPaths.map(fp => ` - ${fp}`).join('\n');
  const signalLines = findings.length > 0
    ? findings.map(item => ` - ${item}`).join('\n')
    : ' - no obvious canned-template strings detected';

  return [
    '[Hook] DESIGN CHECK: frontend file(s) modified:',
    pathLines,
    '[Hook] Review for generic/template drift. Frontend should have:',
    CHECKLIST.map(item => ` - ${item}`).join('\n'),
    '[Hook] Heuristic signals:',
    signalLines,
  ].join('\n');
}

function run(inputOrRaw) {
  let input;
  let rawInput = inputOrRaw;

  try {
    if (typeof inputOrRaw === 'string') {
      rawInput = inputOrRaw;
      input = inputOrRaw.trim() ? JSON.parse(inputOrRaw) : {};
    } else {
      input = inputOrRaw || {};
      rawInput = JSON.stringify(inputOrRaw ?? {});
    }
  } catch {
    return { exitCode: 0, stdout: typeof rawInput === 'string' ? rawInput : '' };
  }

  const filePaths = getFilePaths(input);
  const frontendPaths = filePaths.filter(filePath => FRONTEND_EXTENSIONS.test(filePath));

  if (frontendPaths.length === 0) {
    return { exitCode: 0, stdout: typeof rawInput === 'string' ? rawInput : '' };
  }

  const findings = [];
  for (const filePath of frontendPaths) {
    const content = readContent(filePath);
    findings.push(...detectSignals(content));
  }

  return {
    exitCode: 0,
    stdout: typeof rawInput === 'string' ? rawInput : '',
    stderr: buildWarning(frontendPaths, findings),
  };
}

module.exports = { run };

if (require.main === module) {
  let raw = '';
  process.stdin.setEncoding('utf8');
  process.stdin.on('data', chunk => {
    if (raw.length < MAX_STDIN) {
      const remaining = MAX_STDIN - raw.length;
      raw += chunk.substring(0, remaining);
    }
  });
  process.stdin.on('end', () => {
    const result = run(raw);
    if (result.stderr) process.stderr.write(`${result.stderr}\n`);
    process.stdout.write(typeof result.stdout === 'string' ? result.stdout : raw);
    process.exit(Number.isInteger(result.exitCode) ? result.exitCode : 0);
  });
}
@@ -67,7 +67,7 @@ function findFileIssues(filePath) {

  try {
    const content = getStagedFileContent(filePath);
    if (content === null || content === undefined) {
    if (content == null) {
      return issues;
    }
    const lines = content.split('\n');
@@ -2,50 +2,12 @@
'use strict';

/**
 * Session end marker hook - performs lightweight observer cleanup and
 * outputs stdin to stdout unchanged. Exports run() for in-process execution.
 * Session end marker hook - outputs stdin to stdout unchanged.
 * Exports run() for in-process execution (avoids spawnSync issues on Windows).
 */

const {
  resolveProjectContext,
  removeSessionLease,
  listSessionLeases,
  stopObserverForContext,
  resolveSessionId
} = require('../lib/observer-sessions');

function log(message) {
  process.stderr.write(`[SessionEnd] ${message}\n`);
}

function run(rawInput) {
  const output = rawInput || '';
  const sessionId = resolveSessionId();

  if (!sessionId) {
    log('No CLAUDE_SESSION_ID available; skipping observer cleanup');
    return output;
  }

  try {
    const observerContext = resolveProjectContext();
    removeSessionLease(observerContext, sessionId);
    const remainingLeases = listSessionLeases(observerContext);

    if (remainingLeases.length === 0) {
      if (stopObserverForContext(observerContext)) {
        log(`Stopped observer for project ${observerContext.projectId} after final session lease ended`);
      } else {
        log(`No running observer to stop for project ${observerContext.projectId}`);
      }
    } else {
      log(`Retained observer for project ${observerContext.projectId}; ${remainingLeases.length} session lease(s) remain`);
    }
  } catch (err) {
    log(`Observer cleanup skipped: ${err.message}`);
  }

  return output;
  return rawInput || '';
}

// Legacy CLI execution (when run directly)

@@ -60,7 +22,7 @@ if (require.main === module) {
    }
  });
  process.stdin.on('end', () => {
    process.stdout.write(run(raw));
    process.stdout.write(raw);
  });
}
@@ -20,7 +20,6 @@ const {
  stripAnsi,
  log
} = require('../lib/utils');
const { resolveProjectContext, writeSessionLease, resolveSessionId } = require('../lib/observer-sessions');
const { getPackageManager, getSelectionPrompt } = require('../lib/package-manager');
const { listAliases } = require('../lib/session-aliases');
const { detectProjectType } = require('../lib/project-detect');

@@ -164,18 +163,6 @@ async function main() {
  ensureDir(sessionsDir);
  ensureDir(learnedDir);

  const observerSessionId = resolveSessionId();
  if (observerSessionId) {
    const observerContext = resolveProjectContext();
    writeSessionLease(observerContext, observerSessionId, {
      hook: 'SessionStart',
      projectRoot: observerContext.projectRoot
    });
    log(`[SessionStart] Registered observer lease for ${observerSessionId}`);
  } else {
    log('[SessionStart] No CLAUDE_SESSION_ID available; skipping observer lease registration');
  }

  // Check for recent session files (last 7 days)
  const recentSessions = dedupeRecentSessions(getSessionSearchDirs());
@@ -20,43 +20,16 @@ function readJsonObject(filePath, label) {
  return parsed;
}

function replacePluginRootPlaceholders(value, pluginRoot) {
  if (!pluginRoot) {
    return value;
  }

  if (typeof value === 'string') {
    return value.split('${CLAUDE_PLUGIN_ROOT}').join(pluginRoot);
  }

  if (Array.isArray(value)) {
    return value.map(item => replacePluginRootPlaceholders(item, pluginRoot));
  }

  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([key, nestedValue]) => [
        key,
        replacePluginRootPlaceholders(nestedValue, pluginRoot),
      ])
    );
  }

  return value;
}

function buildLegacyHookSignature(entry, pluginRoot) {
function buildLegacyHookSignature(entry) {
  if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
    return null;
  }

  const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);

  if (typeof normalizedEntry.matcher !== 'string' || !Array.isArray(normalizedEntry.hooks)) {
  if (typeof entry.matcher !== 'string' || !Array.isArray(entry.hooks)) {
    return null;
  }

  const hookSignature = normalizedEntry.hooks.map(hook => JSON.stringify({
  const hookSignature = entry.hooks.map(hook => JSON.stringify({
    type: hook && typeof hook === 'object' ? hook.type : undefined,
    command: hook && typeof hook === 'object' ? hook.command : undefined,
    timeout: hook && typeof hook === 'object' ? hook.timeout : undefined,
@@ -64,35 +37,33 @@ function buildLegacyHookSignature(entry, pluginRoot) {
  }));

  return JSON.stringify({
    matcher: normalizedEntry.matcher,
    matcher: entry.matcher,
    hooks: hookSignature,
  });
}

function getHookEntryAliases(entry, pluginRoot) {
function getHookEntryAliases(entry) {
  const aliases = [];

  if (!entry || typeof entry !== 'object' || Array.isArray(entry)) {
    return aliases;
  }

  const normalizedEntry = replacePluginRootPlaceholders(entry, pluginRoot);

  if (typeof normalizedEntry.id === 'string' && normalizedEntry.id.trim().length > 0) {
    aliases.push(`id:${normalizedEntry.id.trim()}`);
  if (typeof entry.id === 'string' && entry.id.trim().length > 0) {
    aliases.push(`id:${entry.id.trim()}`);
  }

  const legacySignature = buildLegacyHookSignature(normalizedEntry, pluginRoot);
  const legacySignature = buildLegacyHookSignature(entry);
  if (legacySignature) {
    aliases.push(`legacy:${legacySignature}`);
  }

  aliases.push(`json:${JSON.stringify(normalizedEntry)}`);
  aliases.push(`json:${JSON.stringify(entry)}`);

  return aliases;
}

function mergeHookEntries(existingEntries, incomingEntries, pluginRoot) {
function mergeHookEntries(existingEntries, incomingEntries) {
  const mergedEntries = [];
  const seenEntries = new Set();

@@ -105,7 +76,7 @@ function mergeHookEntries(existingEntries, incomingEntries, pluginRoot) {
      continue;
    }

    const aliases = getHookEntryAliases(entry, pluginRoot);
    const aliases = getHookEntryAliases(entry);
    if (aliases.some(alias => seenEntries.has(alias))) {
      continue;
    }
@@ -113,7 +84,7 @@ function mergeHookEntries(existingEntries, incomingEntries, pluginRoot) {
    for (const alias of aliases) {
      seenEntries.add(alias);
    }
    mergedEntries.push(replacePluginRootPlaceholders(entry, pluginRoot));
    mergedEntries.push(entry);
  }

  return mergedEntries;
@@ -129,7 +100,6 @@ function buildMergedSettings(plan) {
    return null;
  }

  const pluginRoot = plan.targetRoot;
  const hooksDestinationPath = path.join(plan.targetRoot, 'hooks', 'hooks.json');
  const hooksSourcePath = findHooksSourcePath(plan, hooksDestinationPath) || hooksDestinationPath;
  if (!fs.existsSync(hooksSourcePath)) {
@@ -137,7 +107,7 @@ function buildMergedSettings(plan) {
  }

  const hooksConfig = readJsonObject(hooksSourcePath, 'hooks config');
  const incomingHooks = replacePluginRootPlaceholders(hooksConfig.hooks, pluginRoot);
  const incomingHooks = hooksConfig.hooks;
  if (!incomingHooks || typeof incomingHooks !== 'object' || Array.isArray(incomingHooks)) {
    throw new Error(`Invalid hooks config at ${hooksSourcePath}: expected "hooks" to be a JSON object`);
  }
@@ -156,7 +126,7 @@ function buildMergedSettings(plan) {
  for (const [eventName, incomingEntries] of Object.entries(incomingHooks)) {
    const currentEntries = Array.isArray(existingHooks[eventName]) ? existingHooks[eventName] : [];
    const nextEntries = Array.isArray(incomingEntries) ? incomingEntries : [];
    mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries, pluginRoot);
    mergedHooks[eventName] = mergeHookEntries(currentEntries, nextEntries);
  }

  const mergedSettings = {
@@ -167,11 +137,6 @@ function buildMergedSettings(plan) {
  return {
    settingsPath,
    mergedSettings,
    hooksDestinationPath,
    resolvedHooksConfig: {
      ...hooksConfig,
      hooks: incomingHooks,
    },
  };
}

@@ -184,12 +149,6 @@ function applyInstallPlan(plan) {
  }

  if (mergedSettingsPlan) {
    fs.mkdirSync(path.dirname(mergedSettingsPlan.hooksDestinationPath), { recursive: true });
    fs.writeFileSync(
      mergedSettingsPlan.hooksDestinationPath,
      JSON.stringify(mergedSettingsPlan.resolvedHooksConfig, null, 2) + '\n',
      'utf8'
    );
    fs.mkdirSync(path.dirname(mergedSettingsPlan.settingsPath), { recursive: true });
    fs.writeFileSync(
      mergedSettingsPlan.settingsPath,
@@ -1,175 +0,0 @@
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const { spawnSync } = require('child_process');
const { getClaudeDir, ensureDir, sanitizeSessionId } = require('./utils');

function getHomunculusDir() {
  return path.join(getClaudeDir(), 'homunculus');
}

function getProjectsDir() {
  return path.join(getHomunculusDir(), 'projects');
}

function getProjectRegistryPath() {
  return path.join(getHomunculusDir(), 'projects.json');
}

function readProjectRegistry() {
  try {
    return JSON.parse(fs.readFileSync(getProjectRegistryPath(), 'utf8'));
  } catch {
    return {};
  }
}

function runGit(args, cwd) {
  const result = spawnSync('git', args, {
    cwd,
    encoding: 'utf8',
    stdio: ['ignore', 'pipe', 'ignore']
  });
  if (result.status !== 0) return '';
  return (result.stdout || '').trim();
}

function stripRemoteCredentials(remoteUrl) {
  if (!remoteUrl) return '';
  return String(remoteUrl).replace(/:\/\/[^@]+@/, '://');
}

function resolveProjectRoot(cwd = process.cwd()) {
  const envRoot = process.env.CLAUDE_PROJECT_DIR;
  if (envRoot && fs.existsSync(envRoot)) {
    return path.resolve(envRoot);
  }

  const gitRoot = runGit(['rev-parse', '--show-toplevel'], cwd);
  if (gitRoot) return path.resolve(gitRoot);

  return '';
}

function computeProjectId(projectRoot) {
  const remoteUrl = stripRemoteCredentials(runGit(['remote', 'get-url', 'origin'], projectRoot));
  return crypto.createHash('sha256').update(remoteUrl || projectRoot).digest('hex').slice(0, 12);
}

function resolveProjectContext(cwd = process.cwd()) {
  const projectRoot = resolveProjectRoot(cwd);
  if (!projectRoot) {
    const projectDir = getHomunculusDir();
    ensureDir(projectDir);
    return { projectId: 'global', projectRoot: '', projectDir, isGlobal: true };
  }

  const registry = readProjectRegistry();
  const registryEntry = Object.values(registry).find(entry => entry && path.resolve(entry.root || '') === projectRoot);
  const projectId = registryEntry?.id || computeProjectId(projectRoot);
  const projectDir = path.join(getProjectsDir(), projectId);
  ensureDir(projectDir);

  return { projectId, projectRoot, projectDir, isGlobal: false };
}

function getObserverPidFile(context) {
  return path.join(context.projectDir, '.observer.pid');
}

function getObserverSignalCounterFile(context) {
  return path.join(context.projectDir, '.observer-signal-counter');
}

function getObserverActivityFile(context) {
  return path.join(context.projectDir, '.observer-last-activity');
}

function getSessionLeaseDir(context) {
  return path.join(context.projectDir, '.observer-sessions');
}

function resolveSessionId(rawSessionId = process.env.CLAUDE_SESSION_ID) {
  return sanitizeSessionId(rawSessionId || '') || '';
}

function getSessionLeaseFile(context, rawSessionId = process.env.CLAUDE_SESSION_ID) {
  const sessionId = resolveSessionId(rawSessionId);
  if (!sessionId) return '';
  return path.join(getSessionLeaseDir(context), `${sessionId}.json`);
}

function writeSessionLease(context, rawSessionId = process.env.CLAUDE_SESSION_ID, extra = {}) {
  const leaseFile = getSessionLeaseFile(context, rawSessionId);
  if (!leaseFile) return '';

  ensureDir(getSessionLeaseDir(context));
  const payload = {
    sessionId: resolveSessionId(rawSessionId),
    cwd: process.cwd(),
    pid: process.pid,
    updatedAt: new Date().toISOString(),
    ...extra
  };
  fs.writeFileSync(leaseFile, JSON.stringify(payload, null, 2) + '\n');
  return leaseFile;
}

function removeSessionLease(context, rawSessionId = process.env.CLAUDE_SESSION_ID) {
  const leaseFile = getSessionLeaseFile(context, rawSessionId);
  if (!leaseFile) return false;
  try {
    fs.rmSync(leaseFile, { force: true });
    return true;
  } catch {
    return false;
  }
}

function listSessionLeases(context) {
  const leaseDir = getSessionLeaseDir(context);
  if (!fs.existsSync(leaseDir)) return [];
  return fs.readdirSync(leaseDir)
    .filter(name => name.endsWith('.json'))
    .map(name => path.join(leaseDir, name));
}

function stopObserverForContext(context) {
  const pidFile = getObserverPidFile(context);
  if (!fs.existsSync(pidFile)) return false;

  const pid = (fs.readFileSync(pidFile, 'utf8') || '').trim();
  if (!/^[0-9]+$/.test(pid) || pid === '0' || pid === '1') {
    fs.rmSync(pidFile, { force: true });
    return false;
  }

  try {
    process.kill(Number(pid), 0);
  } catch {
    fs.rmSync(pidFile, { force: true });
    return false;
  }

  try {
    process.kill(Number(pid), 'SIGTERM');
  } catch {
    return false;
  }

  fs.rmSync(pidFile, { force: true });
  fs.rmSync(getObserverSignalCounterFile(context), { force: true });
  return true;
}

module.exports = {
  resolveProjectContext,
  getObserverActivityFile,
  getObserverPidFile,
  getSessionLeaseDir,
  writeSessionLease,
  removeSessionLease,
  listSessionLeases,
  stopObserverForContext,
  resolveSessionId
};
@@ -37,8 +37,8 @@ function createSkillObservation(input) {
    ? input.skill.path.trim()
    : null;
  const success = Boolean(input.success);
  const error = input.error === null || input.error === undefined ? null : String(input.error);
  const feedback = input.feedback === null || input.feedback === undefined ? null : String(input.feedback);
  const error = input.error == null ? null : String(input.error);
  const feedback = input.feedback == null ? null : String(input.feedback);
  const variant = typeof input.variant === 'string' && input.variant.trim().length > 0
    ? input.variant.trim()
    : 'baseline';
@@ -25,10 +25,6 @@ const WINDOWS_RESERVED_SESSION_IDS = new Set([
 * Get the user's home directory (cross-platform)
 */
function getHomeDir() {
  const explicitHome = process.env.HOME || process.env.USERPROFILE;
  if (explicitHome && explicitHome.trim().length > 0) {
    return path.resolve(explicitHome);
  }
  return os.homedir();
}

@@ -27,11 +27,8 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
AGENTS_FILE="$CODEX_HOME/AGENTS.md"
AGENTS_ROOT_SRC="$REPO_ROOT/AGENTS.md"
AGENTS_CODEX_SUPP_SRC="$REPO_ROOT/.codex/AGENTS.md"
CODEX_AGENTS_SRC="$REPO_ROOT/.codex/agents"
CODEX_AGENTS_DEST="$CODEX_HOME/agents"
PROMPTS_SRC="$REPO_ROOT/commands"
PROMPTS_DEST="$CODEX_HOME/prompts"
BASELINE_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-codex-config.js"
HOOKS_INSTALLER="$REPO_ROOT/scripts/codex/install-global-git-hooks.sh"
SANITY_CHECKER="$REPO_ROOT/scripts/codex/check-codex-global-state.sh"
CURSOR_RULES_DIR="$REPO_ROOT/.cursor/rules"
@@ -109,23 +106,7 @@ extract_toml_value() {

extract_context7_key() {
  local file="$1"
  node - "$file" <<'EOF'
const fs = require('fs');

const filePath = process.argv[2];
let source = '';

try {
  source = fs.readFileSync(filePath, 'utf8');
} catch {
  process.exit(0);
}

const match = source.match(/--key",\s*"([^"]+)"/);
if (match && match[1]) {
  process.stdout.write(`${match[1]}\n`);
}
EOF
  grep -oP -- '--key",[[:space:]]*"\K[^"]+' "$file" | head -n 1 || true
}

generate_prompt_file() {
@@ -149,9 +130,7 @@ MCP_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-mcp-config.js"

require_path "$REPO_ROOT/AGENTS.md" "ECC AGENTS.md"
require_path "$AGENTS_CODEX_SUPP_SRC" "ECC Codex AGENTS supplement"
require_path "$CODEX_AGENTS_SRC" "ECC Codex agent roles"
require_path "$PROMPTS_SRC" "ECC commands directory"
require_path "$BASELINE_MERGE_SCRIPT" "ECC Codex baseline merge script"
require_path "$HOOKS_INSTALLER" "ECC global git hooks installer"
require_path "$SANITY_CHECKER" "ECC global sanity checker"
require_path "$CURSOR_RULES_DIR" "ECC Cursor rules directory"
@@ -252,26 +231,6 @@ else
  fi
fi

log "Merging ECC Codex baseline into $CONFIG_FILE (add-only, preserving user config)"
if [[ "$MODE" == "dry-run" ]]; then
  node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE" --dry-run
else
  node "$BASELINE_MERGE_SCRIPT" "$CONFIG_FILE"
fi

log "Syncing sample Codex agent role files"
run_or_echo mkdir -p "$CODEX_AGENTS_DEST"
for agent_file in "$CODEX_AGENTS_SRC"/*.toml; do
  [[ -f "$agent_file" ]] || continue
  agent_name="$(basename "$agent_file")"
  dest="$CODEX_AGENTS_DEST/$agent_name"
  if [[ -e "$dest" ]]; then
    log "Keeping existing Codex agent role file: $dest"
  else
    run_or_echo cp "$agent_file" "$dest"
  fi
done

# Skills are NOT synced here — Codex CLI reads directly from
# ~/.agents/skills/ (installed by ECC installer / npx skills).
# Copying into ~/.codex/skills/ was unnecessary.

@@ -23,23 +23,50 @@ Write long-form content that sounds like an actual person with a point of view,
4. Use proof instead of adjectives.
5. Never invent facts, credibility, or customer evidence.

## Voice Handling
## Voice Capture Workflow

If the user wants a specific voice, run `brand-voice` first and reuse its `VOICE PROFILE`.
Do not duplicate a second style-analysis pass here unless the user explicitly asks for one.
If the user wants a specific voice, collect one or more of:
- published articles
- newsletters
- X posts or threads
- docs or memos
- launch notes
- a style guide

Then extract:
- sentence length and rhythm
- whether the writing is compressed, explanatory, sharp, or formal
- how parentheses are used
- how often the writer asks questions
- whether the writer uses fragments, lists, or hard pivots
- formatting habits such as headers, bullets, code blocks, pull quotes
- what the writer clearly avoids

If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.

## Affaan / ECC Voice Reference

When matching Affaan / ECC voice, bias toward:
- direct claims over scene-setting
- high specificity
- parentheticals used for qualification or over-clarification, not comedy
- capitalization chosen situationally, not as a gimmick
- very low tolerance for fake thought-leadership cadence
- almost no bait questions

## Banned Patterns

Delete and rewrite any of these:
- "In today's rapidly evolving landscape"
- "game-changer", "cutting-edge", "revolutionary"
- "no fluff"
- "not X, just Y"
- "here's why this matters" as a standalone bridge
- fake vulnerability arcs
- a closing question added only to juice engagement
- forced lowercase
- corny parenthetical asides
- biography padding that does not move the argument
- generic AI throat-clearing that delays the point

## Writing Process

@@ -74,6 +101,6 @@ Delete and rewrite any of these:
Before delivering:
- factual claims are backed by provided sources
- generic AI transitions are gone
- the voice matches the supplied examples or the agreed `VOICE PROFILE`
- the voice matches the supplied examples
- every section adds something new
- formatting matches the intended medium

@@ -1,97 +0,0 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---

# Brand Voice

Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.

## When to Activate

- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry

## Source Priority

Use the strongest real source set available, in this order:

1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy

Do not use generic platform exemplars as source material.

## Collection Workflow

1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.

## What to Extract

- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does

## Output Contract

Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).

Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.

## Affaan / ECC Defaults

If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:

- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over

## Hard Bans

Delete and rewrite any of these:

- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals

## Persistence Rules

- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.

## Downstream Use

Use this skill before or inside:

- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email

If another skill already has a partial voice capture section, this skill is the canonical source of truth.
@@ -1,55 +0,0 @@
# Voice Profile Schema

Use this exact structure when building a reusable voice profile:

```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:

Source Set
- source 1
- source 2
- source 3

Rhythm
- short note on sentence length, pacing, and fragmentation

Compression
- how dense or explanatory the writing is

Capitalization
- conventional, mixed, or situational

Parentheticals
- how they are used and how they are not used

Question Use
- rare, frequent, rhetorical, direct, or mostly absent

Claim Style
- how claims are framed, supported, and sharpened

Preferred Moves
- concrete moves the author does use

Banned Moves
- specific patterns the author does not use

CTA Rules
- how, when, or whether to close with asks

Channel Notes
- X:
- LinkedIn:
- Email:
```

Guidelines:

- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.
@@ -8,7 +8,8 @@
 * exit 0: success exit 1: no projects
 */

import { readProjects, loadContext, today, renderListTable } from './shared.mjs';
import { readProjects, loadContext, today, CONTEXTS_DIR } from './shared.mjs';
import { renderListTable } from './shared.mjs';

const cwd = process.env.PWD || process.cwd();
const projects = readProjects();

@@ -11,7 +11,7 @@
 * exit 0: success exit 1: error
 */

import { readFileSync, existsSync, renameSync } from 'fs';
import { readFileSync, writeFileSync, existsSync, renameSync } from 'fs';
import { resolve } from 'path';
import { readProjects, writeProjects, saveContext, today, shortId, CONTEXTS_DIR } from './shared.mjs';

@@ -112,11 +112,7 @@ for (const [projectPath, info] of Object.entries(projects)) {
  const contextMd = existsSync(contextMdPath) ? readFileSync(contextMdPath, 'utf8') : '';
  let meta = {};
  if (existsSync(metaPath)) {
    try {
      meta = JSON.parse(readFileSync(metaPath, 'utf8'));
    } catch (e) {
      console.warn(` ! ${contextDir}: invalid meta.json, continuing with defaults (${e.message})`);
    }
    try { meta = JSON.parse(readFileSync(metaPath, 'utf8')); } catch {}
  }

  // Extract fields from CONTEXT.md

@@ -20,8 +20,8 @@ import { readFileSync, mkdirSync, writeFileSync } from 'fs';
import { resolve } from 'path';
import {
  readProjects, writeProjects, loadContext, saveContext,
  today, shortId, gitSummary, nativeMemoryDir,
  CURRENT_SESSION,
  today, shortId, gitSummary, nativeMemoryDir, encodeProjectPath,
  CONTEXTS_DIR, CURRENT_SESSION,
} from './shared.mjs';

const isInit = process.argv.includes('--init');

@@ -5,8 +5,8 @@
 * No external dependencies. Node.js stdlib only.
 */

import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
import { resolve } from 'path';
import { readFileSync, writeFileSync, existsSync, mkdirSync, readdirSync } from 'fs';
import { resolve, basename } from 'path';
import { homedir } from 'os';
import { spawnSync } from 'child_process';
import { randomBytes } from 'crypto';
@@ -270,7 +270,7 @@ export function renderContextMd(ctx) {
}

/** Render the bordered briefing box used by /ck:resume */
export function renderBriefingBox(ctx, _meta = {}) {
export function renderBriefingBox(ctx, meta = {}) {
  const latest = ctx.sessions?.[ctx.sessions.length - 1] || {};
  const W = 57;
  const pad = (str, w) => {
@@ -344,7 +344,7 @@ export function renderInfoBlock(ctx) {
}

/** Render ASCII list table used by /ck:list */
export function renderListTable(entries, cwd, _todayStr) {
export function renderListTable(entries, cwd, todayStr) {
  // entries: [{name, contextDir, path, context, lastUpdated}]
  // Sorted alphabetically by contextDir before calling
  const rows = entries.map((e, i) => {

@@ -1,189 +0,0 @@
---
name: connections-optimizer
description: Reorganize the user's X and LinkedIn network with review-first pruning, add/follow recommendations, and channel-specific warm outreach drafted in the user's real voice. Use when the user wants to clean up following lists, grow toward current priorities, or rebalance a social graph around higher-signal relationships.
origin: ECC
---

# Connections Optimizer

Reorganize the user's network instead of treating outbound as a one-way prospecting list.

This skill handles:

- X following cleanup and expansion
- LinkedIn follow and connection analysis
- review-first prune queues
- add and follow recommendations
- warm-path identification
- Apple Mail, X DM, and LinkedIn draft generation in the user's real voice

## When to Activate

- the user wants to prune their X following
- the user wants to rebalance who they follow or stay connected to
- the user says "clean up my network", "who should I unfollow", "who should I follow", "who should I reconnect with"
- outreach quality depends on network structure, not just cold list generation

## Required Inputs

Collect or infer:

- current priorities and active work
- target roles, industries, geos, or ecosystems
- platform selection: X, LinkedIn, or both
- do-not-touch list
- mode: `light-pass`, `default`, or `aggressive`

If the user does not specify a mode, use `default`.

## Tool Requirements

### Preferred

- `x-api` for X graph inspection and recent activity
- `lead-intelligence` for target discovery and warm-path ranking
- `social-graph-ranker` when the user wants bridge value scored independently of the broader lead workflow
- Exa / deep research for person and company enrichment
- `brand-voice` before drafting outbound

### Fallbacks

- browser control for LinkedIn analysis and drafting
- browser control for X if API coverage is constrained
- Apple Mail or Mail.app drafting via desktop automation when email is the right channel

## Safety Defaults

- default is review-first, never blind auto-pruning
- X: prune only accounts the user follows, never followers
- LinkedIn: treat 1st-degree connection removal as manual-review-first
- do not auto-send DMs, invites, or emails
- emit a ranked action plan and drafts before any apply step

## Platform Rules

### X

- mutuals are stickier than one-way follows
- non-follow-backs can be pruned more aggressively
- heavily inactive or disappeared accounts should surface quickly
- engagement, signal quality, and bridge value matter more than raw follower count

### LinkedIn

- API-first if the user actually has LinkedIn API access
- browser workflow must work when API access is missing
- distinguish outbound follows from accepted 1st-degree connections
- outbound follows can be pruned more freely
- accepted 1st-degree connections should default to review, not auto-remove

## Modes

### `light-pass`

- prune only high-confidence low-value one-way follows
- surface the rest for review
- generate a small add/follow list

### `default`

- balanced prune queue
- balanced keep list
- ranked add/follow queue
- draft warm intros or direct outreach where useful

### `aggressive`

- larger prune queue
- lower tolerance for stale non-follow-backs
- still review-gated before apply

## Scoring Model

Use these positive signals:

- reciprocity
- recent activity
- alignment to current priorities
- network bridge value
- role relevance
- real engagement history
- recent presence and responsiveness

Use these negative signals:

- disappeared or abandoned account
- stale one-way follow
- off-priority topic cluster
- low-value noise
- repeated non-response
- no follow-back when many better replacements exist

Mutuals and real warm-path bridges should be penalized less aggressively than one-way follows.

## Workflow

1. Capture priorities, do-not-touch constraints, and selected platforms.
2. Pull the current following / connection inventory.
3. Score prune candidates with explicit reasons.
4. Score keep candidates with explicit reasons.
5. Use `lead-intelligence` plus research surfaces to rank expansion candidates.
6. Match the right channel:
   - X DM for warm, fast social touch points
   - LinkedIn message for professional graph adjacency
   - Apple Mail draft for higher-context intros or outreach
7. Run `brand-voice` before drafting messages.
8. Return a review pack before any apply step.

## Review Pack Format

```text
CONNECTIONS OPTIMIZER REPORT
============================

Mode:
Platforms:
Priority Set:

Prune Queue
- handle / profile
  reason:
  confidence:
  action:

Review Queue
- handle / profile
  reason:
  risk:

Keep / Protect
- handle / profile
  bridge value:

Add / Follow Targets
- person
  why now:
  warm path:
  preferred channel:

Drafts
- X DM:
- LinkedIn:
- Apple Mail:
```

## Outbound Rules

- Default email path is Apple Mail / Mail.app draft creation.
- Do not send automatically.
- Choose the channel based on warmth, relevance, and context depth.
- Do not force a DM when an email or no outreach is the right move.
- Drafts should sound like the user, not like automated sales copy.

## Related Skills

- `brand-voice` for the reusable voice profile
- `social-graph-ranker` for the standalone bridge-scoring and warm-path math
- `lead-intelligence` for weighted target and warm-path discovery
- `x-api` for X graph access, drafting, and optional apply flows
- `content-engine` when the user also wants public launch content around network moves
@@ -36,30 +36,55 @@ Before drafting, identify the source set:
|
||||
- prior posts from the same author
|
||||
|
||||
If the user wants a specific voice, build a voice profile from real examples before writing.
|
||||
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
|
||||
|
||||
## Voice Handling

## Voice Capture Workflow

`brand-voice` is the canonical voice layer.
Collect 5 to 20 examples when available. Good sources:

- articles or essays
- X posts or threads
- docs or release notes
- newsletters
- previous launch posts

If live X access is available, use `x-api` to pull recent original posts before drafting. If not, use the examples already provided or present in the repo.

Run it first when:

- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive

Extract and write down:

- sentence length and rhythm
- how compressed or explanatory the writing is
- whether capitalization is conventional, mixed, or situational
- how parentheses are used
- whether the writer uses fragments, lists, or abrupt pivots
- how often the writer asks questions
- how sharp, formal, opinionated, or dry the voice is
- what the writer never does

Reuse the resulting `VOICE PROFILE` here instead of rebuilding a second voice model.
If the user wants Affaan / ECC voice specifically, still treat `brand-voice` as the source of truth and feed it the best live or source-derived material available.
Do not start drafting until the voice profile is clear enough to enforce.

## Affaan / ECC Voice Reference

When the user wants Affaan / ECC voice specifically, default to this unless newer examples clearly override it:

- direct, compressed, concrete
- strong preference for specific claims, numbers, mechanisms, and receipts
- parentheticals used to qualify, narrow, or over-clarify, not to do corny bits
- lowercase is optional, not mandatory
- questions are rare and should not be added as bait
- transitions should feel earned, not polished
- tone can be sharp or blunt, but should not sound like a content marketer

## Hard Bans

Delete and rewrite any of these:

- "In today's rapidly evolving landscape"
- "game-changer", "revolutionary", "cutting-edge"
- "no fluff"
- "not X, just Y"
- "here's why this matters" unless it is followed immediately by something concrete
- "Excited to share"
- fake curiosity gaps
- ending with a LinkedIn-style question just to farm replies
- forced lowercase when the source voice does not call for it
- forced casualness on LinkedIn
- fake engagement padding that was not present in the source material
- parenthetical jokes that were not present in the source voice

## Platform Adaptation Rules

@@ -123,9 +148,3 @@ Before delivering:
- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved

## Related Skills

- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output

@@ -14,9 +14,6 @@ ANALYZING=0
LAST_ANALYSIS_EPOCH=0
# Minimum seconds between analyses (prevents rapid re-triggering)
ANALYSIS_COOLDOWN="${ECC_OBSERVER_ANALYSIS_COOLDOWN:-60}"
IDLE_TIMEOUT_SECONDS="${ECC_OBSERVER_IDLE_TIMEOUT_SECONDS:-1800}"
SESSION_LEASE_DIR="${PROJECT_DIR}/.observer-sessions"
ACTIVITY_FILE="${PROJECT_DIR}/.observer-last-activity"

cleanup() {
  [ -n "$SLEEP_PID" ] && kill "$SLEEP_PID" 2>/dev/null
@@ -27,62 +24,6 @@ cleanup() {
}
trap cleanup TERM INT

file_mtime_epoch() {
  local file="$1"
  if [ ! -f "$file" ]; then
    printf '0\n'
    return
  fi

  if stat -c %Y "$file" >/dev/null 2>&1; then
    stat -c %Y "$file" 2>/dev/null || printf '0\n'
    return
  fi

  if stat -f %m "$file" >/dev/null 2>&1; then
    stat -f %m "$file" 2>/dev/null || printf '0\n'
    return
  fi

  printf '0\n'
}

has_active_session_leases() {
  if [ ! -d "$SESSION_LEASE_DIR" ]; then
    return 1
  fi

  find "$SESSION_LEASE_DIR" -type f -name '*.json' -print -quit 2>/dev/null | grep -q .
}

latest_activity_epoch() {
  local observations_epoch activity_epoch
  observations_epoch="$(file_mtime_epoch "$OBSERVATIONS_FILE")"
  activity_epoch="$(file_mtime_epoch "$ACTIVITY_FILE")"

  if [ "$activity_epoch" -gt "$observations_epoch" ] 2>/dev/null; then
    printf '%s\n' "$activity_epoch"
  else
    printf '%s\n' "$observations_epoch"
  fi
}

exit_if_idle_without_sessions() {
  if has_active_session_leases; then
    return
  fi

  local last_activity now_epoch idle_for
  last_activity="$(latest_activity_epoch)"
  now_epoch="$(date +%s)"
  idle_for=$(( now_epoch - last_activity ))

  if [ "$last_activity" -eq 0 ] || [ "$idle_for" -ge "$IDLE_TIMEOUT_SECONDS" ]; then
    echo "[$(date)] Observer idle without active session leases for ${idle_for}s; exiting" >> "$LOG_FILE"
    cleanup
  fi
}

analyze_observations() {
  if [ ! -f "$OBSERVATIONS_FILE" ]; then
    return
@@ -256,13 +197,11 @@ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
"${CLV2_PYTHON_CMD:-python3}" "${SCRIPT_DIR}/../scripts/instinct-cli.py" prune --quiet >> "$LOG_FILE" 2>&1 || echo "[$(date)] Warning: instinct prune failed (non-fatal)" >> "$LOG_FILE"

while true; do
  exit_if_idle_without_sessions
  sleep "$OBSERVER_INTERVAL_SECONDS" &
  SLEEP_PID=$!
  wait "$SLEEP_PID" 2>/dev/null
  SLEEP_PID=""

  exit_if_idle_without_sessions
  if [ "$USR1_FIRED" -eq 1 ]; then
    USR1_FIRED=0
  else

@@ -352,7 +352,7 @@ if [ "$OBSERVER_ENABLED" = "true" ]; then
    fi
  ) 9>"$LAZY_START_LOCK"
else
  # macOS fallback: use lockfile if available, otherwise mkdir-based lock
  # macOS fallback: use lockfile if available, otherwise skip
  if command -v lockfile >/dev/null 2>&1; then
    # Use subshell to isolate exit and add trap for cleanup
    (
@@ -365,17 +365,6 @@ if [ "$OBSERVER_ENABLED" = "true" ]; then
      fi
      rm -f "$LAZY_START_LOCK" 2>/dev/null || true
    )
  else
    # POSIX fallback: mkdir is atomic -- fails if dir already exists
    (
      trap 'rmdir "${LAZY_START_LOCK}.d" 2>/dev/null || true' EXIT
      mkdir "${LAZY_START_LOCK}.d" 2>/dev/null || exit 0
      _CHECK_OBSERVER_RUNNING "${PROJECT_DIR}/.observer.pid" || true
      _CHECK_OBSERVER_RUNNING "${CONFIG_DIR}/.observer.pid" || true
      if [ ! -f "${PROJECT_DIR}/.observer.pid" ] && [ ! -f "${CONFIG_DIR}/.observer.pid" ]; then
        nohup "${SKILL_ROOT}/agents/start-observer.sh" start >/dev/null 2>&1 &
      fi
    )
  fi
fi
fi
@@ -386,9 +375,6 @@ fi
# which caused runaway parallel Claude analysis processes.
SIGNAL_EVERY_N="${ECC_OBSERVER_SIGNAL_EVERY_N:-20}"
SIGNAL_COUNTER_FILE="${PROJECT_DIR}/.observer-signal-counter"
ACTIVITY_FILE="${PROJECT_DIR}/.observer-last-activity"

touch "$ACTIVITY_FILE" 2>/dev/null || true

should_signal=0
if [ -f "$SIGNAL_COUNTER_FILE" ]; then

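The deleted fallback branch above leans on `mkdir` being atomic: it fails if the directory already exists, which makes it a portable test-and-set where `flock` or `lockfile` is unavailable. A minimal standalone sketch of that pattern (the lock path and helper name here are illustrative, not part of the script):

```shell
# mkdir-based lock: mkdir fails when the directory already exists,
# so exactly one caller at a time can "acquire" it.
LOCK_DIR="${TMPDIR:-/tmp}/ecc-demo-lock.d"

with_lock() {
  mkdir "$LOCK_DIR" 2>/dev/null || return 1  # another process holds the lock
  "$@"                                        # run the guarded command
  local status=$?
  rmdir "$LOCK_DIR" 2>/dev/null               # release
  return $status
}
```

Unlike `flock`, a crash between `mkdir` and `rmdir` leaves a stale lock behind, which is why the real script pairs the pattern with an EXIT trap that removes the directory.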
@@ -37,10 +37,13 @@ Use `content-engine` first if the source still needs voice shaping.

### Step 2: Capture the Voice Fingerprint

Run `brand-voice` first if the source voice is not already captured in the current session.
Before adapting, note:

- how blunt or explanatory the source is
- whether the source uses fragments, lists, or longer transitions
- whether the source uses parentheses
- whether the source avoids questions, hashtags, or CTA language

Reuse the resulting `VOICE PROFILE` directly.
Do not build a second ad hoc voice checklist here unless the user explicitly wants a fresh override for this campaign.
The adaptation should preserve that fingerprint.

### Step 3: Adapt by Platform Constraint

@@ -86,6 +89,7 @@ Delete and rewrite any of these:
- "Here's what I learned"
- "What do you think?"
- "link in bio" unless that is literally true
- "not X, just Y"
- generic "professional takeaway" paragraphs that were not in the source

## Output Format
@@ -106,6 +110,5 @@ Before delivering:

## Related Skills

- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows

@@ -1,321 +0,0 @@
---
name: csharp-testing
description: C# and .NET testing patterns with xUnit, FluentAssertions, mocking, integration tests, and test organization best practices.
origin: ECC
---

# C# Testing Patterns

Comprehensive testing patterns for .NET applications using xUnit, FluentAssertions, and modern testing practices.

## When to Activate

- Writing new tests for C# code
- Reviewing test quality and coverage
- Setting up test infrastructure for .NET projects
- Debugging flaky or slow tests

## Test Framework Stack

| Tool | Purpose |
|---|---|
| **xUnit** | Test framework (preferred for .NET) |
| **FluentAssertions** | Readable assertion syntax |
| **NSubstitute** or **Moq** | Mocking dependencies |
| **Testcontainers** | Real infrastructure in integration tests |
| **WebApplicationFactory** | ASP.NET Core integration tests |
| **Bogus** | Realistic test data generation |

## Unit Test Structure

### Arrange-Act-Assert

```csharp
public sealed class OrderServiceTests
{
    private readonly IOrderRepository _repository = Substitute.For<IOrderRepository>();
    private readonly ILogger<OrderService> _logger = Substitute.For<ILogger<OrderService>>();
    private readonly OrderService _sut;

    public OrderServiceTests()
    {
        _sut = new OrderService(_repository, _logger);
    }

    [Fact]
    public async Task PlaceOrderAsync_ReturnsSuccess_WhenRequestIsValid()
    {
        // Arrange
        var request = new CreateOrderRequest
        {
            CustomerId = "cust-123",
            Items = [new OrderItem("SKU-001", 2, 29.99m)]
        };

        // Act
        var result = await _sut.PlaceOrderAsync(request, CancellationToken.None);

        // Assert
        result.IsSuccess.Should().BeTrue();
        result.Value.Should().NotBeNull();
        result.Value!.CustomerId.Should().Be("cust-123");
    }

    [Fact]
    public async Task PlaceOrderAsync_ReturnsFailure_WhenNoItems()
    {
        // Arrange
        var request = new CreateOrderRequest
        {
            CustomerId = "cust-123",
            Items = []
        };

        // Act
        var result = await _sut.PlaceOrderAsync(request, CancellationToken.None);

        // Assert
        result.IsSuccess.Should().BeFalse();
        result.Error.Should().Contain("at least one item");
    }
}
```

### Parameterized Tests with Theory

```csharp
[Theory]
[InlineData("", false)]
[InlineData("a", false)]
[InlineData("ab@c.d", false)]
[InlineData("user@example.com", true)]
[InlineData("user+tag@example.co.uk", true)]
public void IsValidEmail_ReturnsExpected(string email, bool expected)
{
    EmailValidator.IsValid(email).Should().Be(expected);
}

[Theory]
[MemberData(nameof(InvalidOrderCases))]
public async Task PlaceOrderAsync_RejectsInvalidOrders(CreateOrderRequest request, string expectedError)
{
    var result = await _sut.PlaceOrderAsync(request, CancellationToken.None);

    result.IsSuccess.Should().BeFalse();
    result.Error.Should().Contain(expectedError);
}

public static TheoryData<CreateOrderRequest, string> InvalidOrderCases => new()
{
    { new() { CustomerId = "", Items = [ValidItem()] }, "CustomerId" },
    { new() { CustomerId = "c1", Items = [] }, "at least one item" },
    { new() { CustomerId = "c1", Items = [new("", 1, 10m)] }, "SKU" },
};
```

## Mocking with NSubstitute

```csharp
[Fact]
public async Task GetOrderAsync_ReturnsNull_WhenNotFound()
{
    // Arrange
    var orderId = Guid.NewGuid();
    _repository.FindByIdAsync(orderId, Arg.Any<CancellationToken>())
        .Returns((Order?)null);

    // Act
    var result = await _sut.GetOrderAsync(orderId, CancellationToken.None);

    // Assert
    result.Should().BeNull();
}

[Fact]
public async Task PlaceOrderAsync_PersistsOrder()
{
    // Arrange
    var request = ValidOrderRequest();

    // Act
    await _sut.PlaceOrderAsync(request, CancellationToken.None);

    // Assert — verify the repository was called
    await _repository.Received(1).AddAsync(
        Arg.Is<Order>(o => o.CustomerId == request.CustomerId),
        Arg.Any<CancellationToken>());
}
```

## ASP.NET Core Integration Tests

### WebApplicationFactory Setup

```csharp
public sealed class OrderApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public OrderApiTests(WebApplicationFactory<Program> factory)
    {
        _client = factory.WithWebHostBuilder(builder =>
        {
            builder.ConfigureServices(services =>
            {
                // Replace real DB with in-memory for tests
                services.RemoveAll<DbContextOptions<AppDbContext>>();
                services.AddDbContext<AppDbContext>(options =>
                    options.UseInMemoryDatabase("TestDb"));
            });
        }).CreateClient();
    }

    [Fact]
    public async Task GetOrder_Returns404_WhenNotFound()
    {
        var response = await _client.GetAsync($"/api/orders/{Guid.NewGuid()}");

        response.StatusCode.Should().Be(HttpStatusCode.NotFound);
    }

    [Fact]
    public async Task CreateOrder_Returns201_WithValidRequest()
    {
        var request = new CreateOrderRequest
        {
            CustomerId = "cust-1",
            Items = [new("SKU-001", 1, 19.99m)]
        };

        var response = await _client.PostAsJsonAsync("/api/orders", request);

        response.StatusCode.Should().Be(HttpStatusCode.Created);
        response.Headers.Location.Should().NotBeNull();
    }
}
```

### Testing with Testcontainers

```csharp
public sealed class PostgresOrderRepositoryTests : IAsyncLifetime
{
    private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    private AppDbContext _db = null!;

    public async Task InitializeAsync()
    {
        await _postgres.StartAsync();
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseNpgsql(_postgres.GetConnectionString())
            .Options;
        _db = new AppDbContext(options);
        await _db.Database.MigrateAsync();
    }

    public async Task DisposeAsync()
    {
        await _db.DisposeAsync();
        await _postgres.DisposeAsync();
    }

    [Fact]
    public async Task AddAsync_PersistsOrder()
    {
        var repo = new SqlOrderRepository(_db);
        var order = Order.Create("cust-1", [new OrderItem("SKU-001", 2, 10m)]);

        await repo.AddAsync(order, CancellationToken.None);

        var found = await repo.FindByIdAsync(order.Id, CancellationToken.None);
        found.Should().NotBeNull();
        found!.Items.Should().HaveCount(1);
    }
}
```

## Test Organization

```
tests/
  MyApp.UnitTests/
    Services/
      OrderServiceTests.cs
      PaymentServiceTests.cs
    Validators/
      EmailValidatorTests.cs
  MyApp.IntegrationTests/
    Api/
      OrderApiTests.cs
    Repositories/
      OrderRepositoryTests.cs
  MyApp.TestHelpers/
    Builders/
      OrderBuilder.cs
    Fixtures/
      DatabaseFixture.cs
```

## Test Data Builders

```csharp
public sealed class OrderBuilder
{
    private string _customerId = "cust-default";
    private readonly List<OrderItem> _items = [new("SKU-001", 1, 10m)];

    public OrderBuilder WithCustomer(string customerId)
    {
        _customerId = customerId;
        return this;
    }

    public OrderBuilder WithItem(string sku, int quantity, decimal price)
    {
        _items.Add(new OrderItem(sku, quantity, price));
        return this;
    }

    public Order Build() => Order.Create(_customerId, _items);
}

// Usage in tests
var order = new OrderBuilder()
    .WithCustomer("cust-vip")
    .WithItem("SKU-PREMIUM", 3, 99.99m)
    .Build();
```

## Common Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Testing implementation details | Test behavior and outcomes |
| Shared mutable test state | Fresh instance per test (xUnit does this via constructors) |
| `Thread.Sleep` in async tests | Use `Task.Delay` with timeout, or polling helpers |
| Asserting on `ToString()` output | Assert on typed properties |
| One giant assertion per test | One logical assertion per test |
| Test names describing implementation | Name by behavior: `Method_ExpectedResult_WhenCondition` |
| Ignoring `CancellationToken` | Always pass and verify cancellation |

## Running Tests

```bash
# Run all tests
dotnet test

# Run with coverage
dotnet test --collect:"XPlat Code Coverage"

# Run specific project
dotnet test tests/MyApp.UnitTests/

# Filter by test name
dotnet test --filter "FullyQualifiedName~OrderService"

# Watch mode during development
dotnet watch test --project tests/MyApp.UnitTests/
```
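The `--filter` expression above also composes: `dotnet test` accepts `|` as OR between conditions, so several name fragments can be joined into one run. A small helper can assemble the expression (the function name and fragments are illustrative, not part of the original doc):

```shell
# Build a `dotnet test` --filter expression that ORs several
# FullyQualifiedName fragments together.
build_test_filter() {
  local expr="" frag
  for frag in "$@"; do
    if [ -z "$expr" ]; then
      expr="FullyQualifiedName~${frag}"
    else
      expr="${expr}|FullyQualifiedName~${frag}"
    fi
  done
  printf '%s\n' "$expr"
}

# Usage:
# dotnet test --filter "$(build_test_filter OrderService EmailValidator)"
```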
@@ -1,563 +0,0 @@
|
||||
---
|
||||
name: dart-flutter-patterns
|
||||
description: Production-ready Dart and Flutter patterns covering null safety, immutable state, async composition, widget architecture, popular state management frameworks (BLoC, Riverpod, Provider), GoRouter navigation, Dio networking, Freezed code generation, and clean architecture.
|
||||
origin: ECC
|
||||
---
|
||||
|
||||
# Dart/Flutter Patterns
|
||||
|
||||
## When to Use
|
||||
|
||||
Use this skill when:
|
||||
- Starting a new Flutter feature and need idiomatic patterns for state management, navigation, or data access
|
||||
- Reviewing or writing Dart code and need guidance on null safety, sealed types, or async composition
|
||||
- Setting up a new Flutter project and choosing between BLoC, Riverpod, or Provider
|
||||
- Implementing secure HTTP clients, WebView integration, or local storage
|
||||
- Writing tests for Flutter widgets, Cubits, or Riverpod providers
|
||||
- Wiring up GoRouter with authentication guards
|
||||
|
||||
## How It Works
|
||||
|
||||
This skill provides copy-paste-ready Dart/Flutter code patterns organized by concern:
|
||||
1. **Null safety** — avoid `!`, prefer `?.`/`??`/pattern matching
|
||||
2. **Immutable state** — sealed classes, `freezed`, `copyWith`
|
||||
3. **Async composition** — concurrent `Future.wait`, safe `BuildContext` after `await`
|
||||
4. **Widget architecture** — extract to classes (not methods), `const` propagation, scoped rebuilds
|
||||
5. **State management** — BLoC/Cubit events, Riverpod notifiers and derived providers
|
||||
6. **Navigation** — GoRouter with reactive auth guards via `refreshListenable`
|
||||
7. **Networking** — Dio with interceptors, token refresh with one-time retry guard
|
||||
8. **Error handling** — global capture, `ErrorWidget.builder`, crashlytics wiring
|
||||
9. **Testing** — unit (BLoC test), widget (ProviderScope overrides), fakes over mocks
|
||||
|
||||
## Examples
|
||||
|
||||
```dart
|
||||
// Sealed state — prevents impossible states
|
||||
sealed class AsyncState<T> {}
|
||||
final class Loading<T> extends AsyncState<T> {}
|
||||
final class Success<T> extends AsyncState<T> { final T data; const Success(this.data); }
|
||||
final class Failure<T> extends AsyncState<T> { final Object error; const Failure(this.error); }
|
||||
|
||||
// GoRouter with reactive auth redirect
|
||||
final router = GoRouter(
|
||||
refreshListenable: GoRouterRefreshStream(authCubit.stream),
|
||||
redirect: (context, state) {
|
||||
final authed = context.read<AuthCubit>().state is AuthAuthenticated;
|
||||
if (!authed && !state.matchedLocation.startsWith('/login')) return '/login';
|
||||
return null;
|
||||
},
|
||||
routes: [...],
|
||||
);
|
||||
|
||||
// Riverpod derived provider with safe firstWhereOrNull
|
||||
@riverpod
|
||||
double cartTotal(Ref ref) {
|
||||
final cart = ref.watch(cartNotifierProvider);
|
||||
final products = ref.watch(productsProvider).valueOrNull ?? [];
|
||||
return cart.fold(0.0, (total, item) {
|
||||
final product = products.firstWhereOrNull((p) => p.id == item.productId);
|
||||
return total + (product?.price ?? 0) * item.quantity;
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
Practical, production-ready patterns for Dart and Flutter applications. Library-agnostic where possible, with explicit coverage of the most common ecosystem packages.
|
||||
|
||||
---
|
||||
|
||||
## 1. Null Safety Fundamentals
|
||||
|
||||
### Prefer Patterns Over Bang Operator
|
||||
|
||||
```dart
|
||||
// BAD — crashes at runtime if null
|
||||
final name = user!.name;
|
||||
|
||||
// GOOD — provide fallback
|
||||
final name = user?.name ?? 'Unknown';
|
||||
|
||||
// GOOD — Dart 3 pattern matching (preferred for complex cases)
|
||||
final display = switch (user) {
|
||||
User(:final name, :final email) => '$name <$email>',
|
||||
null => 'Guest',
|
||||
};
|
||||
|
||||
// GOOD — guard early return
|
||||
String getUserName(User? user) {
|
||||
if (user == null) return 'Unknown';
|
||||
return user.name; // promoted to non-null after check
|
||||
}
|
||||
```
|
||||
|
||||
### Avoid `late` Overuse
|
||||
|
||||
```dart
|
||||
// BAD — defers null error to runtime
|
||||
late String userId;
|
||||
|
||||
// GOOD — nullable with explicit initialization
|
||||
String? userId;
|
||||
|
||||
// OK — use late only when initialization is guaranteed before first access
|
||||
// (e.g., in initState() before any widget interaction)
|
||||
late final AnimationController _controller;
|
||||
|
||||
@override
|
||||
void initState() {
|
||||
super.initState();
|
||||
_controller = AnimationController(vsync: this, duration: const Duration(milliseconds: 300));
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 2. Immutable State
|
||||
|
||||
### Sealed Classes for State Hierarchies
|
||||
|
||||
```dart
|
||||
sealed class UserState {}
|
||||
|
||||
final class UserInitial extends UserState {}
|
||||
|
||||
final class UserLoading extends UserState {}
|
||||
|
||||
final class UserLoaded extends UserState {
|
||||
const UserLoaded(this.user);
|
||||
final User user;
|
||||
}
|
||||
|
||||
final class UserError extends UserState {
|
||||
const UserError(this.message);
|
||||
final String message;
|
||||
}
|
||||
|
||||
// Exhaustive switch — compiler enforces all branches
|
||||
Widget buildFrom(UserState state) => switch (state) {
|
||||
UserInitial() => const SizedBox.shrink(),
|
||||
UserLoading() => const CircularProgressIndicator(),
|
||||
UserLoaded(:final user) => UserCard(user: user),
|
||||
UserError(:final message) => ErrorText(message),
|
||||
};
|
||||
```
|
||||
|
||||
### Freezed for Boilerplate-Free Immutability
|
||||
|
||||
```dart
|
||||
import 'package:freezed_annotation/freezed_annotation.dart';
|
||||
|
||||
part 'user.freezed.dart';
|
||||
part 'user.g.dart';
|
||||
|
||||
@freezed
|
||||
class User with _$User {
|
||||
const factory User({
|
||||
required String id,
|
||||
required String name,
|
||||
required String email,
|
||||
@Default(false) bool isAdmin,
|
||||
}) = _User;
|
||||
|
||||
factory User.fromJson(Map<String, dynamic> json) => _$UserFromJson(json);
|
||||
}
|
||||
|
||||
// Usage
|
||||
final user = User(id: '1', name: 'Alice', email: 'alice@example.com');
|
||||
final updated = user.copyWith(name: 'Alice Smith'); // immutable update
|
||||
final json = user.toJson();
|
||||
final fromJson = User.fromJson(json);
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. Async Composition
|
||||
|
||||
### Structured Concurrency with Future.wait
|
||||
|
||||
```dart
|
||||
Future<DashboardData> loadDashboard(UserRepository users, OrderRepository orders) async {
|
||||
// Run concurrently — don't await sequentially
|
||||
final (userList, orderList) = await (
|
||||
users.getAll(),
|
||||
orders.getRecent(),
|
||||
).wait; // Dart 3 record destructuring + Future.wait extension
|
||||
|
||||
return DashboardData(users: userList, orders: orderList);
|
||||
}
|
||||
```
|
||||
|
||||
### Stream Patterns
|
||||
|
||||
```dart
|
||||
// Repository exposes reactive streams for live data
|
||||
Stream<List<Item>> watchCartItems() => _db
|
||||
.watchTable('cart_items')
|
||||
.map((rows) => rows.map(Item.fromRow).toList());
|
||||
|
||||
// In widget layer — declarative, no manual subscription
|
||||
StreamBuilder<List<Item>>(
|
||||
stream: cartRepository.watchCartItems(),
|
||||
builder: (context, snapshot) => switch (snapshot) {
|
||||
AsyncSnapshot(connectionState: ConnectionState.waiting) =>
|
||||
const CircularProgressIndicator(),
|
||||
AsyncSnapshot(:final error?) => ErrorWidget(error.toString()),
|
||||
AsyncSnapshot(:final data?) => CartList(items: data),
|
||||
_ => const SizedBox.shrink(),
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
### BuildContext After Await
|
||||
|
||||
```dart
|
||||
// CRITICAL — always check mounted after any await in StatefulWidget
|
||||
Future<void> _handleSubmit() async {
|
||||
setState(() => _isLoading = true);
|
||||
try {
|
||||
await authService.login(_email, _password);
|
||||
if (!mounted) return; // ← guard before using context
|
||||
context.go('/home');
|
||||
} on AuthException catch (e) {
|
||||
if (!mounted) return;
|
||||
ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text(e.message)));
|
||||
} finally {
|
||||
if (mounted) setState(() => _isLoading = false);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Widget Architecture
|
||||
|
||||
### Extract to Classes, Not Methods
|
||||
|
||||
```dart
|
||||
// BAD — private method returning widget, prevents optimization
|
||||
Widget _buildHeader() {
|
||||
return Container(
|
||||
padding: const EdgeInsets.all(16),
|
||||
child: Text(title, style: Theme.of(context).textTheme.headlineMedium),
|
||||
);
|
||||
}
|
||||
|
||||
// GOOD — separate widget class, enables const, element reuse
|
||||
class _PageHeader extends StatelessWidget {
|
||||
const _PageHeader(this.title);
|
||||
final String title;
|
||||
|
||||
@override
|
||||
Widget build(BuildContext context) {
|
||||
return Container(
|
||||
padding: const EdgeInsets.all(16),
|
||||
child: Text(title, style: Theme.of(context).textTheme.headlineMedium),
|
||||
);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### const Propagation
|
||||
|
||||
```dart
|
||||
// BAD — new instances every rebuild
|
||||
child: Padding(
|
||||
padding: EdgeInsets.all(16.0), // not const
|
||||
child: Icon(Icons.home, size: 24.0), // not const
|
||||
)
|
||||
|
||||
// GOOD — const stops rebuild propagation
|
||||
child: const Padding(
|
||||
padding: EdgeInsets.all(16.0),
|
||||
child: Icon(Icons.home, size: 24.0),
|
||||
)
|
||||
```
|
||||
|
||||
### Scoped Rebuilds
|
||||
|
||||
```dart
|
||||
// BAD — entire page rebuilds on every counter change
|
||||
class CounterPage extends ConsumerWidget {
|
||||
@override
|
||||
Widget build(BuildContext context, WidgetRef ref) {
|
||||
final count = ref.watch(counterProvider); // rebuilds everything
|
||||
return Scaffold(
|
||||
body: Column(children: [
|
||||
const ExpensiveHeader(), // unnecessarily rebuilt
|
||||
Text('$count'),
|
||||
const ExpensiveFooter(), // unnecessarily rebuilt
|
||||
]),
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// GOOD — isolate the rebuilding part
|
||||
class CounterPage extends StatelessWidget {
|
||||
const CounterPage({super.key});
|
||||
|
||||
@override
|
||||
Widget build(BuildContext context) {
|
||||
return const Scaffold(
|
||||
body: Column(children: [
|
||||
ExpensiveHeader(), // never rebuilt (const)
|
||||
_CounterDisplay(), // only this rebuilds
|
||||
ExpensiveFooter(), // never rebuilt (const)
|
||||
]),
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
class _CounterDisplay extends ConsumerWidget {
|
||||
const _CounterDisplay();
|
||||
|
||||
@override
|
||||
Widget build(BuildContext context, WidgetRef ref) {
|
||||
final count = ref.watch(counterProvider);
|
||||
return Text('$count');
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. State Management: BLoC/Cubit
|
||||
|
||||
```dart
|
||||
// Cubit — synchronous or simple async state
|
||||
class AuthCubit extends Cubit<AuthState> {
|
||||
AuthCubit(this._authService) : super(const AuthState.initial());
|
||||
final AuthService _authService;
|
||||
|
||||
Future<void> login(String email, String password) async {
|
||||
emit(const AuthState.loading());
|
||||
try {
|
||||
final user = await _authService.login(email, password);
|
||||
emit(AuthState.authenticated(user));
|
||||
} on AuthException catch (e) {
|
||||
emit(AuthState.error(e.message));
|
||||
}
|
||||
}
|
||||
|
||||
void logout() {
|
||||
_authService.logout();
|
||||
emit(const AuthState.initial());
|
||||
}
|
||||
}
|
||||
|
||||
// In widget
|
||||
BlocBuilder<AuthCubit, AuthState>(
|
||||
builder: (context, state) => switch (state) {
|
||||
AuthInitial() => const LoginForm(),
|
||||
AuthLoading() => const CircularProgressIndicator(),
|
||||
AuthAuthenticated(:final user) => HomePage(user: user),
|
||||
AuthError(:final message) => ErrorView(message: message),
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. State Management: Riverpod
|
||||
|
||||
```dart
|
||||
// Auto-dispose async provider
|
||||
@riverpod
|
||||
Future<List<Product>> products(Ref ref) async {
|
||||
final repo = ref.watch(productRepositoryProvider);
|
||||
return repo.getAll();
|
||||
}
|
||||
|
||||
// Notifier with complex mutations
|
||||
@riverpod
|
||||
class CartNotifier extends _$CartNotifier {
|
||||
@override
|
||||
List<CartItem> build() => [];
|
||||
|
||||
void add(Product product) {
|
||||
final existing = state.where((i) => i.productId == product.id).firstOrNull;
|
||||
if (existing != null) {
|
||||
state = [
|
||||
for (final item in state)
|
||||
if (item.productId == product.id) item.copyWith(quantity: item.quantity + 1)
|
||||
else item,
|
||||
];
|
||||
} else {
|
||||
state = [...state, CartItem(productId: product.id, quantity: 1)];
|
||||
}
|
||||
}
|
||||
|
||||
void remove(String productId) =>
|
||||
state = state.where((i) => i.productId != productId).toList();
|
||||
|
||||
void clear() => state = [];
|
||||
}
|
||||
|
||||
// Derived provider (selector pattern)
|
||||
@riverpod
|
||||
int cartCount(Ref ref) => ref.watch(cartNotifierProvider).length;
|
||||
|
||||
@riverpod
|
||||
double cartTotal(Ref ref) {
|
||||
final cart = ref.watch(cartNotifierProvider);
|
||||
final products = ref.watch(productsProvider).valueOrNull ?? [];
|
||||
return cart.fold(0.0, (total, item) {
|
||||
// firstWhereOrNull (from collection package) avoids StateError when product is missing
|
||||
final product = products.firstWhereOrNull((p) => p.id == item.productId);
|
||||
return total + (product?.price ?? 0) * item.quantity;
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. Navigation with GoRouter

```dart
final router = GoRouter(
  initialLocation: '/',
  // refreshListenable re-evaluates redirect whenever auth state changes
  refreshListenable: GoRouterRefreshStream(authCubit.stream),
  redirect: (context, state) {
    final isLoggedIn = context.read<AuthCubit>().state is AuthAuthenticated;
    final isGoingToLogin = state.matchedLocation == '/login';
    if (!isLoggedIn && !isGoingToLogin) return '/login';
    if (isLoggedIn && isGoingToLogin) return '/';
    return null;
  },
  routes: [
    GoRoute(path: '/login', builder: (_, __) => const LoginPage()),
    ShellRoute(
      builder: (context, state, child) => AppShell(child: child),
      routes: [
        GoRoute(path: '/', builder: (_, __) => const HomePage()),
        GoRoute(
          path: '/products/:id',
          builder: (context, state) =>
              ProductDetailPage(id: state.pathParameters['id']!),
        ),
      ],
    ),
  ],
);
```

---

## 8. HTTP with Dio

```dart
final dio = Dio(BaseOptions(
  baseUrl: const String.fromEnvironment('API_URL'),
  connectTimeout: const Duration(seconds: 10),
  receiveTimeout: const Duration(seconds: 30),
  headers: {'Content-Type': 'application/json'},
));

// Add auth interceptor
dio.interceptors.add(InterceptorsWrapper(
  onRequest: (options, handler) async {
    final token = await secureStorage.read(key: 'auth_token');
    if (token != null) options.headers['Authorization'] = 'Bearer $token';
    handler.next(options);
  },
  onError: (error, handler) async {
    // Guard against infinite retry loops: only attempt refresh once per request
    final isRetry = error.requestOptions.extra['_isRetry'] == true;
    if (!isRetry && error.response?.statusCode == 401) {
      final refreshed = await attemptTokenRefresh();
      if (refreshed) {
        error.requestOptions.extra['_isRetry'] = true;
        return handler.resolve(await dio.fetch(error.requestOptions));
      }
    }
    handler.next(error);
  },
));

// Repository using Dio
class UserApiDataSource {
  const UserApiDataSource(this._dio);
  final Dio _dio;

  Future<User> getById(String id) async {
    final response = await _dio.get<Map<String, dynamic>>('/users/$id');
    return User.fromJson(response.data!);
  }
}
```

---

## 9. Error Handling Architecture

```dart
// Global error capture — set up in main()
void main() {
  FlutterError.onError = (details) {
    FlutterError.presentError(details);
    crashlytics.recordFlutterFatalError(details);
  };

  PlatformDispatcher.instance.onError = (error, stack) {
    crashlytics.recordError(error, stack, fatal: true);
    return true;
  };

  runApp(const App());
}

// Custom ErrorWidget for production
class App extends StatelessWidget {
  const App({super.key}); // const constructor required for `runApp(const App())`

  @override
  Widget build(BuildContext context) {
    ErrorWidget.builder = (details) => ProductionErrorWidget(details);
    return MaterialApp.router(routerConfig: router);
  }
}
```

---

## 10. Testing Quick Reference

```dart
// Unit test — use case
test('GetUserUseCase returns null for missing user', () async {
  final repo = FakeUserRepository();
  final useCase = GetUserUseCase(repo);
  expect(await useCase('missing-id'), isNull);
});

// BLoC test
blocTest<AuthCubit, AuthState>(
  'emits loading then error on failed login',
  build: () => AuthCubit(FakeAuthService(throwsOn: 'login')),
  act: (cubit) => cubit.login('user@test.com', 'wrong'),
  expect: () => [const AuthState.loading(), isA<AuthError>()],
);

// Widget test
testWidgets('CartBadge shows item count', (tester) async {
  await tester.pumpWidget(
    ProviderScope(
      overrides: [cartNotifierProvider.overrideWith(() => FakeCartNotifier(count: 3))],
      child: const MaterialApp(home: CartBadge()),
    ),
  );
  expect(find.text('3'), findsOneWidget);
});
```

---

## References

- [Effective Dart: Design](https://dart.dev/effective-dart/design)
- [Flutter Performance Best Practices](https://docs.flutter.dev/perf/best-practices)
- [Riverpod Documentation](https://riverpod.dev/)
- [BLoC Library](https://bloclibrary.dev/)
- [GoRouter](https://pub.dev/packages/go_router)
- [Freezed](https://pub.dev/packages/freezed)
- Skill: `flutter-dart-code-review` — comprehensive review checklist
- Rules: `rules/dart/` — coding style, patterns, security, testing, hooks
@@ -1,321 +0,0 @@
---
name: dotnet-patterns
description: Idiomatic C# and .NET patterns, conventions, dependency injection, async/await, and best practices for building robust, maintainable .NET applications.
origin: ECC
---

# .NET Development Patterns

Idiomatic C# and .NET patterns for building robust, performant, and maintainable applications.

## When to Activate

- Writing new C# code
- Reviewing C# code
- Refactoring existing .NET applications
- Designing service architectures with ASP.NET Core

## Core Principles

### 1. Prefer Immutability

Use records and init-only properties for data models. Mutability should be an explicit, justified choice.

```csharp
// Good: Immutable value object
public sealed record Money(decimal Amount, string Currency);

// Good: Immutable DTO with init setters
public sealed class CreateOrderRequest
{
    public required string CustomerId { get; init; }
    public required IReadOnlyList<OrderItem> Items { get; init; }
}

// Bad: Mutable model with public setters
public class Order
{
    public string CustomerId { get; set; }
    public List<OrderItem> Items { get; set; }
}
```

### 2. Explicit Over Implicit

Be clear about nullability, access modifiers, and intent.

```csharp
// Good: Explicit access modifiers and nullability
public sealed class UserService
{
    private readonly IUserRepository _repository;
    private readonly ILogger<UserService> _logger;

    public UserService(IUserRepository repository, ILogger<UserService> logger)
    {
        _repository = repository ?? throw new ArgumentNullException(nameof(repository));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public async Task<User?> FindByIdAsync(Guid id, CancellationToken cancellationToken)
    {
        return await _repository.FindByIdAsync(id, cancellationToken);
    }
}
```

### 3. Depend on Abstractions

Use interfaces for service boundaries. Register via DI container.

```csharp
// Good: Interface-based dependency
public interface IOrderRepository
{
    Task<Order?> FindByIdAsync(Guid id, CancellationToken cancellationToken);
    Task<IReadOnlyList<Order>> FindByCustomerAsync(string customerId, CancellationToken cancellationToken);
    Task AddAsync(Order order, CancellationToken cancellationToken);
}

// Registration
builder.Services.AddScoped<IOrderRepository, SqlOrderRepository>();
```

## Async/Await Patterns

### Proper Async Usage

```csharp
// Good: Async all the way, with CancellationToken
public async Task<OrderSummary> GetOrderSummaryAsync(
    Guid orderId,
    CancellationToken cancellationToken)
{
    var order = await _repository.FindByIdAsync(orderId, cancellationToken)
        ?? throw new NotFoundException($"Order {orderId} not found");

    var customer = await _customerService.GetAsync(order.CustomerId, cancellationToken);

    return new OrderSummary(order, customer);
}

// Bad: Blocking on async
public OrderSummary GetOrderSummary(Guid orderId)
{
    var order = _repository.FindByIdAsync(orderId, CancellationToken.None).Result; // Deadlock risk
    return new OrderSummary(order);
}
```

### Parallel Async Operations

```csharp
// Good: Concurrent independent operations
public async Task<DashboardData> LoadDashboardAsync(CancellationToken cancellationToken)
{
    var ordersTask = _orderService.GetRecentAsync(cancellationToken);
    var metricsTask = _metricsService.GetCurrentAsync(cancellationToken);
    var alertsTask = _alertService.GetActiveAsync(cancellationToken);

    await Task.WhenAll(ordersTask, metricsTask, alertsTask);

    return new DashboardData(
        Orders: await ordersTask,
        Metrics: await metricsTask,
        Alerts: await alertsTask);
}
```

## Options Pattern

Bind configuration sections to strongly-typed objects.

```csharp
public sealed class SmtpOptions
{
    public const string SectionName = "Smtp";

    public required string Host { get; init; }
    public required int Port { get; init; }
    public required string Username { get; init; }
    public bool UseSsl { get; init; } = true;
}

// Registration
builder.Services.Configure<SmtpOptions>(
    builder.Configuration.GetSection(SmtpOptions.SectionName));

// Usage via injection
public class EmailService(IOptions<SmtpOptions> options)
{
    private readonly SmtpOptions _smtp = options.Value;
}
```

## Result Pattern

Return explicit success/failure instead of throwing for expected failures.

```csharp
public sealed record Result<T>
{
    public bool IsSuccess { get; }
    public T? Value { get; }
    public string? Error { get; }

    private Result(T value) { IsSuccess = true; Value = value; }
    private Result(string error) { IsSuccess = false; Error = error; }

    public static Result<T> Success(T value) => new(value);
    public static Result<T> Failure(string error) => new(error);
}

// Usage
public async Task<Result<Order>> PlaceOrderAsync(CreateOrderRequest request)
{
    if (request.Items.Count == 0)
        return Result<Order>.Failure("Order must contain at least one item");

    var order = Order.Create(request);
    await _repository.AddAsync(order, CancellationToken.None);
    return Result<Order>.Success(order);
}
```

## Repository Pattern with EF Core

```csharp
public sealed class SqlOrderRepository : IOrderRepository
{
    private readonly AppDbContext _db;

    public SqlOrderRepository(AppDbContext db) => _db = db;

    public async Task<Order?> FindByIdAsync(Guid id, CancellationToken cancellationToken)
    {
        return await _db.Orders
            .Include(o => o.Items)
            .AsNoTracking()
            .FirstOrDefaultAsync(o => o.Id == id, cancellationToken);
    }

    public async Task<IReadOnlyList<Order>> FindByCustomerAsync(
        string customerId,
        CancellationToken cancellationToken)
    {
        return await _db.Orders
            .Where(o => o.CustomerId == customerId)
            .OrderByDescending(o => o.CreatedAt)
            .AsNoTracking()
            .ToListAsync(cancellationToken);
    }

    public async Task AddAsync(Order order, CancellationToken cancellationToken)
    {
        _db.Orders.Add(order);
        await _db.SaveChangesAsync(cancellationToken);
    }
}
```

## Middleware and Pipeline

```csharp
// Custom middleware
public sealed class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestTimingMiddleware> _logger;

    public RequestTimingMiddleware(RequestDelegate next, ILogger<RequestTimingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            await _next(context);
        }
        finally
        {
            stopwatch.Stop();
            _logger.LogInformation(
                "Request {Method} {Path} completed in {ElapsedMs}ms with status {StatusCode}",
                context.Request.Method,
                context.Request.Path,
                stopwatch.ElapsedMilliseconds,
                context.Response.StatusCode);
        }
    }
}
```

## Minimal API Patterns

```csharp
// Organized with route groups
var orders = app.MapGroup("/api/orders")
    .RequireAuthorization()
    .WithTags("Orders");

// Declared return types let the compiler unify the TypedResults branches
orders.MapGet("/{id:guid}", async Task<Results<Ok<Order>, NotFound>> (
    Guid id,
    IOrderRepository repository,
    CancellationToken cancellationToken) =>
{
    var order = await repository.FindByIdAsync(id, cancellationToken);
    return order is not null
        ? TypedResults.Ok(order)
        : TypedResults.NotFound();
});

orders.MapPost("/", async Task<Results<Created<Order>, BadRequest<string>>> (
    CreateOrderRequest request,
    IOrderService service,
    CancellationToken cancellationToken) =>
{
    var result = await service.PlaceOrderAsync(request, cancellationToken);
    return result.IsSuccess
        ? TypedResults.Created($"/api/orders/{result.Value!.Id}", result.Value)
        : TypedResults.BadRequest(result.Error);
});
```

## Guard Clauses

```csharp
// Good: Early returns with clear validation
public async Task<ProcessResult> ProcessPaymentAsync(
    PaymentRequest request,
    CancellationToken cancellationToken)
{
    ArgumentNullException.ThrowIfNull(request);

    if (request.Amount <= 0)
        throw new ArgumentOutOfRangeException(nameof(request.Amount), "Amount must be positive");

    if (string.IsNullOrWhiteSpace(request.Currency))
        throw new ArgumentException("Currency is required", nameof(request.Currency));

    // Happy path continues here without nesting
    var gateway = _gatewayFactory.Create(request.Currency);
    return await gateway.ChargeAsync(request, cancellationToken);
}
```

## Anti-Patterns to Avoid

| Anti-Pattern | Fix |
|---|---|
| `async void` methods | Return `Task` (except event handlers) |
| `.Result` or `.Wait()` | Use `await` |
| `catch (Exception) { }` | Handle or rethrow with context |
| `new Service()` in constructors | Use constructor injection |
| `public` fields | Use properties with appropriate accessors |
| `dynamic` in business logic | Use generics or explicit types |
| Mutable `static` state | Use DI scoping or `ConcurrentDictionary` |
| `string.Format` in loops | Use `StringBuilder` or interpolated string handlers |
@@ -206,25 +206,25 @@ export const buildCreateOrderUseCase = (deps: { db: SqlClient; stripe: StripeCli
Use the same boundary rules across ecosystems; only syntax and wiring style change.

- **TypeScript/JavaScript**
  - Ports: `application/ports/*` as interfaces/types.
  - Use cases: classes/functions with constructor/argument injection.
  - Adapters: `adapters/inbound/*`, `adapters/outbound/*`.
  - Composition: explicit factory/container module (no hidden globals).
- **Java**
  - Packages: `domain`, `application.port.in`, `application.port.out`, `application.usecase`, `adapter.in`, `adapter.out`.
  - Ports: interfaces in `application.port.*`.
  - Use cases: plain classes (Spring `@Service` is optional, not required).
  - Composition: Spring config or manual wiring class; keep wiring out of domain/use-case classes.
- **Kotlin**
  - Modules/packages mirror the Java split (`domain`, `application.port`, `application.usecase`, `adapter`).
  - Ports: Kotlin interfaces.
  - Use cases: classes with constructor injection (Koin/Dagger/Spring/manual).
  - Composition: module definitions or dedicated composition functions; avoid service locator patterns.
- **Go**
  - Packages: `internal/<feature>/domain`, `application`, `ports`, `adapters/inbound`, `adapters/outbound`.
  - Ports: small interfaces owned by the consuming application package.
  - Use cases: structs with interface fields plus explicit `New...` constructors.
  - Composition: wire in `cmd/<app>/main.go` (or dedicated wiring package), keep constructors explicit.

## Anti-Patterns to Avoid

@@ -24,11 +24,6 @@ Write investor communication that is short, concrete, and easy to act on.
4. Stay concise.
5. Never send copy generic enough that it could go to any investor.

## Voice Handling

If the user's voice matters, run `brand-voice` first and reuse its `VOICE PROFILE`.
This skill should keep the investor-specific structure and ask discipline, not recreate its own parallel voice system.

## Hard Bans

Delete and rewrite any of these:
@@ -36,6 +31,7 @@ Delete and rewrite any of these:
- "excited to share"
- generic thesis praise without a real tie-in
- vague founder adjectives
- "no fluff"
- begging language
- soft closing questions when a direct ask is clearer

@@ -1,293 +0,0 @@
---
name: jira-integration
description: Use this skill when retrieving Jira tickets, analyzing requirements, updating ticket status, adding comments, or transitioning issues. Provides Jira API patterns via MCP or direct REST calls.
origin: ECC
---

# Jira Integration Skill

Retrieve, analyze, and update Jira tickets directly from your AI coding workflow. Supports both **MCP-based** (recommended) and **direct REST API** approaches.

## When to Activate

- Fetching a Jira ticket to understand requirements
- Extracting testable acceptance criteria from a ticket
- Adding progress comments to a Jira issue
- Transitioning a ticket status (To Do → In Progress → Done)
- Linking merge requests or branches to a Jira issue
- Searching for issues by JQL query

## Prerequisites

### Option A: MCP Server (Recommended)

Install the `mcp-atlassian` MCP server. This exposes Jira tools directly to your AI agent.

**Requirements:**
- Python 3.10+
- `uvx` (from `uv`), installed via your package manager or the official `uv` installation documentation

**Add to your MCP config** (e.g., `~/.claude.json` → `mcpServers`):

```json
{
  "jira": {
    "command": "uvx",
    "args": ["mcp-atlassian==0.21.0"],
    "env": {
      "JIRA_URL": "https://YOUR_ORG.atlassian.net",
      "JIRA_EMAIL": "your.email@example.com",
      "JIRA_API_TOKEN": "your-api-token"
    },
    "description": "Jira issue tracking — search, create, update, comment, transition"
  }
}
```

> **Security:** Never hardcode secrets. Prefer setting `JIRA_URL`, `JIRA_EMAIL`, and `JIRA_API_TOKEN` in your system environment (or a secrets manager). Only use the MCP `env` block for local, uncommitted config files.

**To get a Jira API token:**
1. Go to <https://id.atlassian.com/manage-profile/security/api-tokens>
2. Click **Create API token**
3. Copy the token — store it in your environment, never in source code

### Option B: Direct REST API

If MCP is not available, use the Jira REST API v3 directly via `curl` or a helper script.

**Required environment variables:**

| Variable | Description |
|----------|-------------|
| `JIRA_URL` | Your Jira instance URL (e.g., `https://yourorg.atlassian.net`) |
| `JIRA_EMAIL` | Your Atlassian account email |
| `JIRA_API_TOKEN` | API token from id.atlassian.com |

Store these in your shell environment, secrets manager, or an untracked local env file. Do not commit them to the repo.

## MCP Tools Reference

When the `mcp-atlassian` MCP server is configured, these tools are available:

| Tool | Purpose | Example |
|------|---------|---------|
| `jira_search` | JQL queries | `project = PROJ AND status = "In Progress"` |
| `jira_get_issue` | Fetch full issue details by key | `PROJ-1234` |
| `jira_create_issue` | Create issues (Task, Bug, Story, Epic) | New bug report |
| `jira_update_issue` | Update fields (summary, description, assignee) | Change assignee |
| `jira_transition_issue` | Change status | Move to "In Review" |
| `jira_add_comment` | Add comments | Progress update |
| `jira_get_sprint_issues` | List issues in a sprint | Active sprint review |
| `jira_create_issue_link` | Link issues (Blocks, Relates to) | Dependency tracking |
| `jira_get_issue_development_info` | See linked PRs, branches, commits | Dev context |

> **Tip:** Always call `jira_get_transitions` before transitioning — transition IDs vary per project workflow.

## Direct REST API Reference

### Fetch a Ticket

```bash
curl -s -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  "$JIRA_URL/rest/api/3/issue/PROJ-1234" | jq '{
    key: .key,
    summary: .fields.summary,
    status: .fields.status.name,
    priority: .fields.priority.name,
    type: .fields.issuetype.name,
    assignee: .fields.assignee.displayName,
    labels: .fields.labels,
    description: .fields.description
  }'
```

### Fetch Comments

```bash
curl -s -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  "$JIRA_URL/rest/api/3/issue/PROJ-1234?fields=comment" | jq '.fields.comment.comments[] | {
    author: .author.displayName,
    created: .created[:10],
    body: .body
  }'
```

### Add a Comment

```bash
curl -s -X POST -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "body": {
      "version": 1,
      "type": "doc",
      "content": [{
        "type": "paragraph",
        "content": [{"type": "text", "text": "Your comment here"}]
      }]
    }
  }' \
  "$JIRA_URL/rest/api/3/issue/PROJ-1234/comment"
```

### Transition a Ticket

```bash
# 1. Get available transitions
curl -s -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  "$JIRA_URL/rest/api/3/issue/PROJ-1234/transitions" | jq '.transitions[] | {id, name: .name}'

# 2. Execute transition (replace TRANSITION_ID)
curl -s -X POST -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"transition": {"id": "TRANSITION_ID"}}' \
  "$JIRA_URL/rest/api/3/issue/PROJ-1234/transitions"
```

### Search with JQL

```bash
curl -s -G -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  --data-urlencode "jql=project = PROJ AND status = 'In Progress'" \
  "$JIRA_URL/rest/api/3/search"
```

## Analyzing a Ticket

When retrieving a ticket for development or test automation, extract:

### 1. Testable Requirements
- **Functional requirements** — What the feature does
- **Acceptance criteria** — Conditions that must be met
- **Testable behaviors** — Specific actions and expected outcomes
- **User roles** — Who uses this feature and their permissions
- **Data requirements** — What data is needed
- **Integration points** — APIs, services, or systems involved

### 2. Test Types Needed
- **Unit tests** — Individual functions and utilities
- **Integration tests** — API endpoints and service interactions
- **E2E tests** — User-facing UI flows
- **API tests** — Endpoint contracts and error handling

### 3. Edge Cases & Error Scenarios
- Invalid inputs (empty, too long, special characters)
- Unauthorized access
- Network failures or timeouts
- Concurrent users or race conditions
- Boundary conditions
- Missing or null data
- State transitions (back navigation, refresh, etc.)

### 4. Structured Analysis Output

```
Ticket: PROJ-1234
Summary: [ticket title]
Status: [current status]
Priority: [High/Medium/Low]
Test Types: Unit, Integration, E2E

Requirements:
1. [requirement 1]
2. [requirement 2]

Acceptance Criteria:
- [ ] [criterion 1]
- [ ] [criterion 2]

Test Scenarios:
- Happy Path: [description]
- Error Case: [description]
- Edge Case: [description]

Test Data Needed:
- [data item 1]
- [data item 2]

Dependencies:
- [dependency 1]
- [dependency 2]
```

## Updating Tickets

### When to Update

| Workflow Step | Jira Update |
|---|---|
| Start work | Transition to "In Progress" |
| Tests written | Comment with test coverage summary |
| Branch created | Comment with branch name |
| PR/MR created | Comment with link, link issue |
| Tests passing | Comment with results summary |
| PR/MR merged | Transition to "Done" or "In Review" |

### Comment Templates

**Starting Work:**
```
Starting implementation for this ticket.
Branch: feat/PROJ-1234-feature-name
```

**Tests Implemented:**
```
Automated tests implemented:

Unit Tests:
- [test file 1] — [what it covers]
- [test file 2] — [what it covers]

Integration Tests:
- [test file] — [endpoints/flows covered]

All tests passing locally. Coverage: XX%
```

**PR Created:**
```
Pull request created:
[PR Title](https://github.com/org/repo/pull/XXX)

Ready for review.
```

**Work Complete:**
```
Implementation complete.

PR merged: [link]
Test results: All passing (X/Y)
Coverage: XX%
```

## Security Guidelines

- **Never hardcode** Jira API tokens in source code or skill files
- **Always use** environment variables or a secrets manager
- **Add `.env`** to `.gitignore` in every project
- **Rotate tokens** immediately if exposed in git history
- **Use least-privilege** API tokens scoped to required projects
- **Validate** that credentials are set before making API calls — fail fast with a clear message

## Troubleshooting

| Error | Cause | Fix |
|---|---|---|
| `401 Unauthorized` | Invalid or expired API token | Regenerate at id.atlassian.com |
| `403 Forbidden` | Token lacks project permissions | Check token scopes and project access |
| `404 Not Found` | Wrong ticket key or base URL | Verify `JIRA_URL` and ticket key |
| `spawn uvx ENOENT` | IDE cannot find `uvx` on PATH | Use full path (e.g., `~/.local/bin/uvx`) or set PATH in `~/.zprofile` |
| Connection timeout | Network/VPN issue | Check VPN connection and firewall rules |

## Best Practices

- Update Jira as you go, not all at once at the end
- Keep comments concise but informative
- Link rather than copy — point to PRs, test reports, and dashboards
- Use @mentions if you need input from others
- Check linked issues to understand full feature scope before starting
- If acceptance criteria are vague, ask for clarification before writing code
@@ -21,7 +21,7 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va

### Required
- **Exa MCP** — Deep web search for people, companies, and signals (`web_search_exa`)
- **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, plus write-context credentials such as `X_CONSUMER_KEY`, `X_CONSUMER_SECRET`, `X_ACCESS_TOKEN`, `X_ACCESS_TOKEN_SECRET`)
- **X API** — Follower/following graph, mutual analysis, recent activity (`X_BEARER_TOKEN`, `X_ACCESS_TOKEN`)

### Optional (enhance results)
- **LinkedIn** — Direct API if available, otherwise browser control for search, profile inspection, and drafting
@@ -43,10 +43,36 @@ Agent-powered lead intelligence pipeline that finds, scores, and reaches high-va

Do not draft outbound from generic sales copy.

Run `brand-voice` first whenever the user's voice matters. Reuse its `VOICE PROFILE` instead of re-deriving style ad hoc inside this skill.
Before writing a message, build a voice profile from real source material. Prefer:

- recent X posts and threads
- published articles, memos, or launch notes
- prior outreach emails that actually worked
- docs, changelogs, or product writing if those are the strongest signals

If live X access is available, pull recent original posts before drafting. If not, use supplied examples or the best repo/site material available.

Extract:

- sentence length and rhythm
- how compressed or explanatory the writing is
- how parentheses are used
- whether capitalization is conventional or situational
- how often questions are used
- phrases or transitions the author never uses

For Affaan / ECC style specifically:

- direct, compressed, concrete
- strong preference for specifics, mechanisms, and receipts
- parentheticals are for qualification or over-clarification, not jokes
- lowercase is optional, not mandatory
- no fake curiosity hooks
- no "not X, just Y"
- no "no fluff"
- no LinkedIn thought-leader cadence
- no bait question at the end

## Stage 1: Signal Scoring

Search for high-signal people in target verticals. Assign a weight to each based on:
@@ -89,12 +115,11 @@ x_search = search_recent_tweets(

For each scored target, analyze the user's social graph to find the warmest path.

### Ranking Model
### Algorithm

1. Pull user's X following list and LinkedIn connections
2. For each high-signal target, check for shared connections
3. Apply the `social-graph-ranker` model to score bridge value
4. Rank mutuals by:
3. Rank mutuals by:

| Factor | Weight |
|--------|--------|
@@ -104,20 +129,47 @@ For each scored target, analyze the user's social graph to find the warmest path
| Industry alignment | 15% — same vertical = natural intro |
| Mutual's X handle / LinkedIn | 10% — identifiability for outreach |

Canonical rule:
|
||||
### Weighted Bridge Ranking
|
||||
|
||||
```text
|
||||
Use social-graph-ranker when the user wants the graph math itself,
|
||||
the bridge ranking as a standalone report, or explicit decay-model tuning.
|
||||
```
|
||||
Treat this as the canonical network-ranking stage for lead intelligence. Do not run a separate graph skill when this stage is enough.
|
||||
|
||||
Inside this skill, use the same weighted bridge model:
|
||||
Given:
|
||||
- `T` = target leads
|
||||
- `M` = your mutuals / existing connections
|
||||
- `d(m, t)` = shortest hop distance from mutual `m` to target `t`
|
||||
- `w(t)` = target weight from signal scoring
|
||||
|
||||
Compute the base bridge score for each mutual:
|
||||
|
||||
```text
|
||||
B(m) = Σ_{t ∈ T} w(t) · λ^(d(m,t) - 1)
|
||||
```
|
||||
|
||||
Where:
|
||||
- `λ` is the decay factor, usually `0.5`
|
||||
- a direct connection contributes full value
|
||||
- each extra hop halves the contribution
|
||||
|
||||
For second-order reach, expand one level into the mutual's own network:
|
||||
|
||||
```text
|
||||
B_ext(m) = B(m) + α · Σ_{m' ∈ N(m) \\ M} Σ_{t ∈ T} w(t) · λ^(d(m',t))
|
||||
```
|
||||
|
||||
Where:
|
||||
- `N(m) \\ M` is the set of people the mutual knows that you do not
|
||||
- `α` is the second-order discount, usually `0.3`
|
||||
|
||||
Then rank by response-adjusted bridge value:
|
||||
|
||||
```text
|
||||
R(m) = B_ext(m) · (1 + β · engagement(m))
|
||||
```
|
||||
|
||||
Where:
|
||||
- `engagement(m)` is a normalized responsiveness score
|
||||
- `β` is the engagement bonus, usually `0.2`
|
||||
|
||||
Interpretation:
|
||||
- Tier 1: high `R(m)` and direct bridge paths -> warm intro asks
|
||||
- Tier 2: medium `R(m)` and one-hop bridge paths -> conditional intro asks
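The three bridge formulas above can be sketched end-to-end in Python. This is a minimal illustration, not part of the skill itself: the graph-access helpers (`dist`, `neighbors`) and the engagement map are assumptions standing in for real X/LinkedIn graph pulls.

```python
from math import inf

LAM, ALPHA, BETA = 0.5, 0.3, 0.2  # decay, second-order discount, engagement bonus

def rank_mutuals(mutuals, targets, dist, neighbors, engagement):
    """Rank mutuals by R(m) = B_ext(m) * (1 + beta * engagement(m)).

    dist(a, b) -> hop distance, inf if unreachable
    neighbors(m) -> people the mutual knows that you do not (N(m) \\ M)
    """
    ranked = {}
    for m in mutuals:
        # B(m): weighted reach into the target set, decayed per extra hop
        base = sum(
            w * LAM ** (dist(m, t) - 1)
            for t, w in targets.items()
            if dist(m, t) != inf
        )
        # B_ext(m): expand one level into the mutual's own network
        ext = base + ALPHA * sum(
            w * LAM ** dist(n, t)
            for n in neighbors(m)
            for t, w in targets.items()
            if dist(n, t) != inf
        )
        # R(m): responsiveness bonus on top of the bridge value
        ranked[m] = ext * (1 + BETA * engagement.get(m, 0.0))
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)
```

In practice the `dist` values would come from a BFS over the pulled follow graph; `inf` marks pairs with no path.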
@@ -126,8 +178,6 @@ Interpretation:
### Output Format

```

If the user explicitly wants the ranking engine broken out, the math visualized, or the network scored outside the full lead workflow, run `social-graph-ranker` as a standalone pass first and feed the result back into this pipeline.
MUTUAL RANKING REPORT
=====================

@@ -283,8 +333,8 @@ Users should set these environment variables:
export X_BEARER_TOKEN="..."
export X_ACCESS_TOKEN="..."
export X_ACCESS_TOKEN_SECRET="..."
export X_CONSUMER_KEY="..."
export X_CONSUMER_SECRET="..."
export X_API_KEY="..."
export X_API_SECRET="..."
export EXA_API_KEY="..."

# Optional
@@ -314,8 +364,3 @@ Agent workflow:

Output: Ranked list with warm paths, voice profile summary, and channel-specific outreach drafts or drafts-in-app
```

## Related Skills

- `brand-voice` for canonical voice capture
- `connections-optimizer` for review-first network pruning and expansion before outreach

@@ -1,89 +0,0 @@
---
name: manim-video
description: Build reusable Manim explainers for technical concepts, graphs, system diagrams, and product walkthroughs, then hand off to the wider ECC video stack if needed. Use when the user wants a clean animated explainer rather than a generic talking-head script.
origin: ECC
---

# Manim Video

Use Manim for technical explainers where motion, structure, and clarity matter more than photorealism.

## When to Activate

- the user wants a technical explainer animation
- the concept is a graph, workflow, architecture, metric progression, or system diagram
- the user wants a short product or launch explainer for X or a landing page
- the visual should feel precise instead of generically cinematic

## Tool Requirements

- `manim` CLI for scene rendering
- `ffmpeg` for post-processing if needed
- `video-editing` for final assembly or polish
- `remotion-video-creation` when the final package needs composited UI, captions, or additional motion layers

## Default Output

- short 16:9 MP4
- one thumbnail or poster frame
- storyboard plus scene plan

## Workflow

1. Define the core visual thesis in one sentence.
2. Break the concept into 3 to 6 scenes.
3. Decide what each scene proves.
4. Write the scene outline before writing Manim code.
5. Render the smallest working version first.
6. Tighten typography, spacing, color, and pacing after the render works.
7. Hand off to the wider video stack only if it adds value.

## Scene Planning Rules

- each scene should prove one thing
- avoid overstuffed diagrams
- prefer progressive reveal over full-screen clutter
- use motion to explain state change, not just to keep the screen busy
- title cards should be short and loaded with meaning

## Network Graph Default

For social-graph and network-optimization explainers:

- show the current graph before showing the optimized graph
- distinguish low-signal follow clutter from high-signal bridges
- highlight warm-path nodes and target clusters
- if useful, add a final scene showing the self-improvement lineage that informed the skill

## Render Conventions

- default to 16:9 landscape unless the user asks for vertical
- start with a low-quality smoke test render
- only push to higher quality after composition and timing are stable
- export one clean thumbnail frame that reads at social size

## Reusable Starter

Use [assets/network_graph_scene.py](assets/network_graph_scene.py) as a starting point for network-graph explainers.

Example smoke test:

```bash
manim -ql assets/network_graph_scene.py NetworkGraphExplainer
```

## Output Format

Return:

- core visual thesis
- storyboard
- scene outline
- render plan
- any follow-on polish recommendations

## Related Skills

- `video-editing` for final polish
- `remotion-video-creation` for motion-heavy post-processing or compositing
- `content-engine` when the animation is part of a broader launch
@@ -1,52 +0,0 @@
from manim import DOWN, LEFT, RIGHT, UP, Circle, Create, FadeIn, FadeOut, Scene, Text, VGroup, CurvedArrow


class NetworkGraphExplainer(Scene):
    def construct(self):
        title = Text("Connections Optimizer", font_size=40).to_edge(UP)
        subtitle = Text("Prune low-signal follows. Strengthen warm paths.", font_size=20).next_to(title, DOWN)

        you = Circle(radius=0.45, color="#4F8EF7").shift(LEFT * 4 + DOWN * 0.5)
        you_label = Text("You", font_size=22).move_to(you.get_center())

        stale_a = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.6 + UP * 1.2)
        stale_b = Circle(radius=0.32, color="#7A7A7A").shift(LEFT * 1.2 + DOWN * 1.4)
        bridge = Circle(radius=0.38, color="#21A179").shift(RIGHT * 0.2 + UP * 0.2)
        target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.2 + UP * 0.7)
        new_target = Circle(radius=0.42, color="#FF9F1C").shift(RIGHT * 3.0 + DOWN * 1.4)

        stale_a_label = Text("stale", font_size=18).move_to(stale_a.get_center())
        stale_b_label = Text("noise", font_size=18).move_to(stale_b.get_center())
        bridge_label = Text("bridge", font_size=18).move_to(bridge.get_center())
        target_label = Text("target", font_size=18).move_to(target.get_center())
        new_target_label = Text("add", font_size=18).move_to(new_target.get_center())

        edge_stale_a = CurvedArrow(you.get_right(), stale_a.get_left(), angle=0.2, color="#7A7A7A")
        edge_stale_b = CurvedArrow(you.get_right(), stale_b.get_left(), angle=-0.2, color="#7A7A7A")
        edge_bridge = CurvedArrow(you.get_right(), bridge.get_left(), angle=0.0, color="#21A179")
        edge_target = CurvedArrow(bridge.get_right(), target.get_left(), angle=0.1, color="#21A179")
        edge_new_target = CurvedArrow(bridge.get_right(), new_target.get_left(), angle=-0.12, color="#21A179")

        self.play(FadeIn(title), FadeIn(subtitle))
        self.play(
            Create(you),
            FadeIn(you_label),
            Create(stale_a),
            Create(stale_b),
            Create(bridge),
            Create(target),
            FadeIn(stale_a_label),
            FadeIn(stale_b_label),
            FadeIn(bridge_label),
            FadeIn(target_label),
        )
        self.play(Create(edge_stale_a), Create(edge_stale_b), Create(edge_bridge), Create(edge_target))

        optimize = Text("Optimize the graph", font_size=24).to_edge(DOWN)
        self.play(FadeIn(optimize))
        self.play(FadeOut(stale_a), FadeOut(stale_b), FadeOut(stale_a_label), FadeOut(stale_b_label), FadeOut(edge_stale_a), FadeOut(edge_stale_b))
        self.play(Create(new_target), FadeIn(new_target_label), Create(edge_new_target))

        final_group = VGroup(you, you_label, bridge, bridge_label, target, target_label, new_target, new_target_label)
        self.play(final_group.animate.shift(UP * 0.1))
        self.wait(1)
@@ -1,230 +0,0 @@
---
name: nestjs-patterns
description: NestJS architecture patterns for modules, controllers, providers, DTO validation, guards, interceptors, config, and production-grade TypeScript backends.
origin: ECC
---

# NestJS Development Patterns

Production-grade NestJS patterns for modular TypeScript backends.

## When to Activate

- Building NestJS APIs or services
- Structuring modules, controllers, and providers
- Adding DTO validation, guards, interceptors, or exception filters
- Configuring environment-aware settings and database integrations
- Testing NestJS units or HTTP endpoints

## Project Structure

```text
src/
├── app.module.ts
├── main.ts
├── common/
│   ├── filters/
│   ├── guards/
│   ├── interceptors/
│   └── pipes/
├── config/
│   ├── configuration.ts
│   └── validation.ts
├── modules/
│   ├── auth/
│   │   ├── auth.controller.ts
│   │   ├── auth.module.ts
│   │   ├── auth.service.ts
│   │   ├── dto/
│   │   ├── guards/
│   │   └── strategies/
│   └── users/
│       ├── dto/
│       ├── entities/
│       ├── users.controller.ts
│       ├── users.module.ts
│       └── users.service.ts
└── prisma/ or database/
```

- Keep domain code inside feature modules.
- Put cross-cutting filters, decorators, guards, and interceptors in `common/`.
- Keep DTOs close to the module that owns them.

## Bootstrap and Global Validation

```ts
async function bootstrap() {
  const app = await NestFactory.create(AppModule, { bufferLogs: true });

  app.useGlobalPipes(
    new ValidationPipe({
      whitelist: true,
      forbidNonWhitelisted: true,
      transform: true,
      transformOptions: { enableImplicitConversion: true },
    }),
  );

  app.useGlobalInterceptors(new ClassSerializerInterceptor(app.get(Reflector)));
  app.useGlobalFilters(new HttpExceptionFilter());

  await app.listen(process.env.PORT ?? 3000);
}
bootstrap();
```

- Always enable `whitelist` and `forbidNonWhitelisted` on public APIs.
- Prefer one global validation pipe instead of repeating validation config per route.

## Modules, Controllers, and Providers

```ts
@Module({
  controllers: [UsersController],
  providers: [UsersService],
  exports: [UsersService],
})
export class UsersModule {}

@Controller('users')
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @Get(':id')
  getById(@Param('id', ParseUUIDPipe) id: string) {
    return this.usersService.getById(id);
  }

  @Post()
  create(@Body() dto: CreateUserDto) {
    return this.usersService.create(dto);
  }
}

@Injectable()
export class UsersService {
  constructor(private readonly usersRepo: UsersRepository) {}

  async create(dto: CreateUserDto) {
    return this.usersRepo.create(dto);
  }
}
```

- Controllers should stay thin: parse HTTP input, call a provider, return response DTOs.
- Put business logic in injectable services, not controllers.
- Export only the providers other modules genuinely need.

## DTOs and Validation

```ts
export class CreateUserDto {
  @IsEmail()
  email!: string;

  @IsString()
  @Length(2, 80)
  name!: string;

  @IsOptional()
  @IsEnum(UserRole)
  role?: UserRole;
}
```

- Validate every request DTO with `class-validator`.
- Use dedicated response DTOs or serializers instead of returning ORM entities directly.
- Avoid leaking internal fields such as password hashes, tokens, or audit columns.

## Auth, Guards, and Request Context

```ts
@UseGuards(JwtAuthGuard, RolesGuard)
@Roles('admin')
@Get('admin/report')
getAdminReport(@Req() req: AuthenticatedRequest) {
  return this.reportService.getForUser(req.user.id);
}
```

- Keep auth strategies and guards module-local unless they are truly shared.
- Encode coarse access rules in guards, then do resource-specific authorization in services.
- Prefer explicit request types for authenticated request objects.

## Exception Filters and Error Shape

```ts
@Catch()
export class HttpExceptionFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse<Response>();
    const request = host.switchToHttp().getRequest<Request>();

    if (exception instanceof HttpException) {
      return response.status(exception.getStatus()).json({
        path: request.url,
        error: exception.getResponse(),
      });
    }

    return response.status(500).json({
      path: request.url,
      error: 'Internal server error',
    });
  }
}
```

- Keep one consistent error envelope across the API.
- Throw framework exceptions for expected client errors; log and wrap unexpected failures centrally.

## Config and Environment Validation

```ts
ConfigModule.forRoot({
  isGlobal: true,
  load: [configuration],
  validate: validateEnv,
});
```

- Validate env at boot, not lazily at first request.
- Keep config access behind typed helpers or config services.
- Split dev/staging/prod concerns in config factories instead of branching throughout feature code.

## Persistence and Transactions

- Keep repository / ORM code behind providers that speak domain language.
- For Prisma or TypeORM, isolate transactional workflows in services that own the unit of work.
- Do not let controllers coordinate multi-step writes directly.

## Testing

```ts
describe('UsersController', () => {
  let app: INestApplication;

  beforeAll(async () => {
    const moduleRef = await Test.createTestingModule({
      imports: [UsersModule],
    }).compile();

    app = moduleRef.createNestApplication();
    app.useGlobalPipes(new ValidationPipe({ whitelist: true, transform: true }));
    await app.init();
  });
});
```

- Unit test providers in isolation with mocked dependencies.
- Add request-level tests for guards, validation pipes, and exception filters.
- Reuse the same global pipes/filters in tests that you use in production.

## Production Defaults

- Enable structured logging and request correlation ids.
- Terminate on invalid env/config instead of booting partially.
- Prefer async provider initialization for DB/cache clients with explicit health checks.
- Keep background jobs and event consumers in their own modules, not inside HTTP controllers.
- Make rate limiting, auth, and audit logging explicit for public endpoints.
@@ -83,4 +83,4 @@ const { width, height } = useVideoConfig();
      </mesh>
    </Sequence>
  </ThreeCanvas>
```
```
@@ -26,4 +26,4 @@ export const FadeIn = () => {
```

CSS transitions or animations are FORBIDDEN - they will not render correctly.
Tailwind animation class names are FORBIDDEN - they will not render correctly.
Tailwind animation class names are FORBIDDEN - they will not render correctly.
@@ -143,4 +143,4 @@ export const RemotionRoot = () => {
};
```

The function can return `props`, `durationInFrames`, `width`, `height`, `fps`, and codec-related defaults. It runs once before rendering begins.
The function can return `props`, `durationInFrames`, `width`, `height`, `fps`, and codec-related defaults. It runs once before rendering begins.
@@ -8,4 +8,4 @@ You can and should use TailwindCSS in Remotion, if TailwindCSS is installed in t

Don't use `transition-*` or `animate-*` classes - always animate using the `useCurrentFrame()` hook.

Tailwind must be installed and enabled first in a Remotion project - fetch <https://www.remotion.dev/docs/tailwind> using WebFetch for instructions.
Tailwind must be installed and enabled first in a Remotion project - fetch <https://www.remotion.dev/docs/tailwind> using WebFetch for instructions.
@@ -1,154 +0,0 @@
---
name: social-graph-ranker
description: Weighted social-graph ranking for warm intro discovery, bridge scoring, and network gap analysis across X and LinkedIn. Use when the user wants the reusable graph-ranking engine itself, not the broader outreach or network-maintenance workflow layered on top of it.
origin: ECC
---

# Social Graph Ranker

Canonical weighted graph-ranking layer for network-aware outreach.

Use this when the user needs to:

- rank existing mutuals or connections by intro value
- map warm paths to a target list
- measure bridge value across first- and second-order connections
- decide which targets deserve warm intros versus direct cold outreach
- understand the graph math independently from `lead-intelligence` or `connections-optimizer`

## When To Use This Standalone

Choose this skill when the user primarily wants the ranking engine:

- "who in my network is best positioned to introduce me?"
- "rank my mutuals by who can get me to these people"
- "map my graph against this ICP"
- "show me the bridge math"

Do not use this by itself when the user really wants:

- full lead generation and outbound sequencing -> use `lead-intelligence`
- pruning, rebalancing, and growing the network -> use `connections-optimizer`

## Inputs

Collect or infer:

- target people, companies, or ICP definition
- the user's current graph on X, LinkedIn, or both
- weighting priorities such as role, industry, geography, and responsiveness
- traversal depth and decay tolerance

## Core Model

Given:

- `T` = weighted target set
- `M` = your current mutuals / direct connections
- `d(m, t)` = shortest hop distance from mutual `m` to target `t`
- `w(t)` = target weight from signal scoring

Base bridge score:

```text
B(m) = Σ_{t ∈ T} w(t) · λ^(d(m,t) - 1)
```

Where:

- `λ` is the decay factor, usually `0.5`
- a direct path contributes full value
- each extra hop halves the contribution

Second-order expansion:

```text
B_ext(m) = B(m) + α · Σ_{m' ∈ N(m) \ M} Σ_{t ∈ T} w(t) · λ^(d(m',t))
```

Where:

- `N(m) \ M` is the set of people the mutual knows that you do not
- `α` discounts second-order reach, usually `0.3`

Response-adjusted final ranking:

```text
R(m) = B_ext(m) · (1 + β · engagement(m))
```

Where:

- `engagement(m)` is normalized responsiveness or relationship strength
- `β` is the engagement bonus, usually `0.2`

Interpretation:

- Tier 1: high `R(m)` and direct bridge paths -> warm intro asks
- Tier 2: medium `R(m)` and one-hop bridge paths -> conditional intro asks
- Tier 3: low `R(m)` or no viable bridge -> direct outreach or follow-gap fill
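As a quick numeric sanity check of the formulas above, take a hypothetical mutual with a direct path (`d = 1`) to a target weighted `1.0`, a two-hop path (`d = 2`) to a target weighted `0.8`, no useful second-order reach, and engagement `0.5`:

```python
lam, alpha, beta = 0.5, 0.3, 0.2  # defaults from the model above

# B(m): the 1.0 target decays by lam**0, the 0.8 target by lam**1
b = 1.0 * lam ** 0 + 0.8 * lam ** 1
# B_ext(m): nothing added when N(m) \ M reaches no targets
b_ext = b + alpha * 0.0
# R(m): the engagement bonus scales the bridge value
r = b_ext * (1 + beta * 0.5)

print(round(b, 2), round(r, 2))  # 1.4 1.54
```

With these numbers the mutual sits around the Tier 1 / Tier 2 boundary: a solid direct bridge, boosted modestly by responsiveness.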

## Scoring Signals

Weight targets before graph traversal with whatever matters for the current priority set:

- role or title alignment
- company or industry fit
- current activity and recency
- geographic relevance
- influence or reach
- likelihood of response

Weight mutuals after traversal with:

- number of weighted paths into the target set
- directness of those paths
- responsiveness or prior interaction history
- contextual fit for making the intro

## Workflow

1. Build the weighted target set.
2. Pull the user's graph from X, LinkedIn, or both.
3. Compute direct bridge scores.
4. Expand second-order candidates for the highest-value mutuals.
5. Rank by `R(m)`.
6. Return:
   - best warm intro asks
   - conditional bridge paths
   - graph gaps where no warm path exists

## Output Shape

```text
SOCIAL GRAPH RANKING
====================

Priority Set:
Platforms:
Decay Model:

Top Bridges
- mutual / connection
  base_score:
  extended_score:
  best_targets:
  path_summary:
  recommended_action:

Conditional Paths
- mutual / connection
  reason:
  extra hop cost:

No Warm Path
- target
  recommendation: direct outreach / fill graph gap
```

## Related Skills

- `lead-intelligence` uses this ranking model inside the broader target-discovery and outreach pipeline
- `connections-optimizer` uses the same bridge logic when deciding who to keep, prune, or add
- `brand-voice` should run before drafting any intro request or direct outreach
- `x-api` provides X graph access and optional execution paths
@@ -46,27 +46,25 @@ tweets = resp.json()

### OAuth 1.0a (User Context)

Required for: posting tweets, managing account, DMs, and any write flow.
Required for: posting tweets, managing account, DMs.

```bash
# Environment setup — source before use
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_API_KEY="your-api-key"
export X_API_SECRET="your-api-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
export X_ACCESS_SECRET="your-access-secret"
```

Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.

```python
import os
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    os.environ["X_CONSUMER_KEY"],
    client_secret=os.environ["X_CONSUMER_SECRET"],
    os.environ["X_API_KEY"],
    client_secret=os.environ["X_API_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
    resource_owner_secret=os.environ["X_ACCESS_SECRET"],
)
```

@@ -127,21 +125,6 @@ resp = requests.get(
)
```

### Pull Recent Original Posts for Voice Modeling

```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet -is:reply",
        "max_results": 25,
        "tweet.fields": "created_at,public_metrics",
    }
)
voice_samples = resp.json()
```

### Get User by Username

```python
@@ -213,18 +196,13 @@ else:

## Integration with Content Engine

Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
Use `content-engine` skill to generate platform-native content, then post via X API:
1. Generate content with content-engine (X platform format)
2. Validate length (280 chars for single tweet)
3. Post via X API using patterns above
4. Track engagement via public_metrics
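The length-validation step above can be sketched as a pure function that runs before any API call. Treat it as a first-pass guard only: it counts raw characters, while X's own weighted counting treats URLs as 23 characters regardless of length, so the server remains the final validator. The function name is illustrative.

```python
TWEET_LIMIT = 280  # raw-character guard; X's weighted counting may differ

def validate_thread(posts):
    """Return (ok, problems) for a list of draft tweets."""
    problems = []
    for i, text in enumerate(posts, start=1):
        if not text.strip():
            problems.append(f"tweet {i}: empty")
        elif len(text) > TWEET_LIMIT:
            problems.append(f"tweet {i}: {len(text)} chars (limit {TWEET_LIMIT})")
    return (not problems, problems)
```

Run it over the whole thread before posting; any problem means the draft goes back for tightening rather than to the API.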

## Related Skills

- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

@@ -1,82 +0,0 @@
/**
 * Tests for scripts/hooks/design-quality-check.js
 *
 * Run with: node tests/hooks/design-quality-check.test.js
 */

const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');

const hook = require('../../scripts/hooks/design-quality-check');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

let passed = 0;
let failed = 0;

console.log('\nDesign Quality Hook Tests');
console.log('=========================\n');

if (test('passes through non-frontend files silently', () => {
  const input = JSON.stringify({ tool_input: { file_path: '/tmp/file.py' } });
  const result = hook.run(input);
  assert.strictEqual(result.exitCode, 0);
  assert.strictEqual(result.stdout, input);
  assert.ok(!result.stderr);
})) passed++; else failed++;

if (test('warns for frontend file path', () => {
  const tmpFile = path.join(os.tmpdir(), `design-quality-${Date.now()}.tsx`);
  fs.writeFileSync(tmpFile, 'export function Hero(){ return <div className="text-center">Get Started</div>; }\n');
  try {
    const input = JSON.stringify({ tool_input: { file_path: tmpFile } });
    const result = hook.run(input);
    assert.strictEqual(result.exitCode, 0);
    assert.strictEqual(result.stdout, input);
    assert.match(result.stderr, /DESIGN CHECK/);
    assert.match(result.stderr, /Get Started/);
  } finally {
    fs.unlinkSync(tmpFile);
  }
})) passed++; else failed++;

if (test('handles MultiEdit edits[] payloads', () => {
  const tmpFile = path.join(os.tmpdir(), `design-quality-${Date.now()}.css`);
  fs.writeFileSync(tmpFile, '.hero{background:linear-gradient(to right,#000,#333)}\n');
  try {
    const input = JSON.stringify({
      tool_input: {
        edits: [{ file_path: tmpFile }, { file_path: '/tmp/notes.md' }]
      }
    });
    const result = hook.run(input);
    assert.strictEqual(result.exitCode, 0);
    assert.strictEqual(result.stdout, input);
    assert.match(result.stderr, /frontend file\(s\) modified/);
    assert.match(result.stderr, /\.css/);
  } finally {
    fs.unlinkSync(tmpFile);
  }
})) passed++; else failed++;

if (test('returns original stdout on invalid JSON', () => {
  const input = '{not valid json';
  const result = hook.run(input);
  assert.strictEqual(result.exitCode, 0);
  assert.strictEqual(result.stdout, input);
})) passed++; else failed++;

console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
@@ -76,12 +76,6 @@ test('observe.sh default throttle is 20 observations per signal', () => {
|
||||
assert.ok(content.includes('ECC_OBSERVER_SIGNAL_EVERY_N:-20'), 'Default signal frequency should be every 20 observations');
|
||||
});
|
||||
|
||||
test('observe.sh touches observer activity marker on each observation', () => {
|
||||
const content = fs.readFileSync(observeShPath, 'utf8');
|
||||
assert.ok(content.includes('ACTIVITY_FILE="${PROJECT_DIR}/.observer-last-activity"'), 'observe.sh should define a project-scoped activity marker');
|
||||
assert.ok(content.includes('touch "$ACTIVITY_FILE"'), 'observe.sh should update activity marker during observation capture');
|
||||
});
|
||||
|
||||
// ──────────────────────────────────────────────────────
|
||||
// Test group 2: observer-loop.sh re-entrancy guard
|
||||
// ──────────────────────────────────────────────────────
|
||||
@@ -132,19 +126,6 @@ test('default cooldown is 60 seconds', () => {
  assert.ok(content.includes('ECC_OBSERVER_ANALYSIS_COOLDOWN:-60'), 'Default cooldown should be 60 seconds');
});

test('observer-loop.sh defines idle timeout fallback', () => {
  const content = fs.readFileSync(observerLoopPath, 'utf8');
  assert.ok(content.includes('IDLE_TIMEOUT_SECONDS'), 'observer-loop.sh should define an idle timeout');
  assert.ok(content.includes('ECC_OBSERVER_IDLE_TIMEOUT_SECONDS:-1800'), 'Default idle timeout should be 30 minutes');
});

test('observer-loop.sh checks session lease directory before self-termination', () => {
  const content = fs.readFileSync(observerLoopPath, 'utf8');
  assert.ok(content.includes('SESSION_LEASE_DIR="${PROJECT_DIR}/.observer-sessions"'), 'observer-loop.sh should track active observer session leases');
  assert.ok(content.includes('has_active_session_leases'), 'observer-loop.sh should define active session lease checks');
  assert.ok(content.includes('exit_if_idle_without_sessions'), 'observer-loop.sh should define idle self-termination helper');
});

// ──────────────────────────────────────────────────────
// Test group 4: Tail-based sampling (no full file load)
// ──────────────────────────────────────────────────────

@@ -303,101 +303,6 @@ async function runTests() {
    assert.strictEqual(result.code, 0, 'Non-blocking hook should exit 0');
  })) passed++; else failed++;

  if (await asyncTest('session-start registers an observer lease for the active session', async () => {
    const testDir = createTestDir();
    const projectDir = path.join(testDir, 'project');
    fs.mkdirSync(projectDir, { recursive: true });

    try {
      const sessionId = `session-${Date.now()}`;
      const result = await runHookWithInput(
        path.join(scriptsDir, 'session-start.js'),
        {},
        {
          HOME: testDir,
          CLAUDE_PROJECT_DIR: projectDir,
          CLAUDE_SESSION_ID: sessionId
        }
      );

      assert.strictEqual(result.code, 0, 'SessionStart should exit 0');
      const homunculusDir = path.join(testDir, '.claude', 'homunculus');
      const projectsDir = path.join(homunculusDir, 'projects');
      const projectEntries = fs.existsSync(projectsDir) ? fs.readdirSync(projectsDir) : [];
      assert.ok(projectEntries.length > 0, 'SessionStart should create a homunculus project directory');
      const leaseDir = path.join(projectsDir, projectEntries[0], '.observer-sessions');
      const leaseFiles = fs.existsSync(leaseDir) ? fs.readdirSync(leaseDir).filter(name => name.endsWith('.json')) : [];
      assert.ok(leaseFiles.length === 1, `Expected one observer lease file, found ${leaseFiles.length}`);
    } finally {
      cleanupTestDir(testDir);
    }
  })) passed++; else failed++;

  if (await asyncTest('session-end-marker removes the last lease and stops the observer process', async () => {
    const testDir = createTestDir();
    const projectDir = path.join(testDir, 'project');
    fs.mkdirSync(projectDir, { recursive: true });

    const sessionId = `session-${Date.now()}`;
    const sleeper = spawn(process.execPath, ['-e', "process.on('SIGTERM', () => process.exit(0)); setInterval(() => {}, 1000)"], {
      stdio: 'ignore'
    });

    try {
      await runHookWithInput(
        path.join(scriptsDir, 'session-start.js'),
        {},
        {
          HOME: testDir,
          CLAUDE_PROJECT_DIR: projectDir,
          CLAUDE_SESSION_ID: sessionId
        }
      );

      const homunculusDir = path.join(testDir, '.claude', 'homunculus');
      const projectsDir = path.join(homunculusDir, 'projects');
      const projectEntries = fs.existsSync(projectsDir) ? fs.readdirSync(projectsDir) : [];
      assert.ok(projectEntries.length > 0, 'Expected SessionStart to create a homunculus project directory');
      const projectStorageDir = path.join(projectsDir, projectEntries[0]);
      const pidFile = path.join(projectStorageDir, '.observer.pid');
      fs.writeFileSync(pidFile, `${sleeper.pid}\n`);

      const markerInput = { hook_event_name: 'SessionEnd' };
      const result = await runHookWithInput(
        path.join(scriptsDir, 'session-end-marker.js'),
        markerInput,
        {
          HOME: testDir,
          CLAUDE_PROJECT_DIR: projectDir,
          CLAUDE_SESSION_ID: sessionId
        }
      );

      assert.strictEqual(result.code, 0, 'SessionEnd marker should exit 0');
      assert.strictEqual(result.stdout, JSON.stringify(markerInput), 'SessionEnd marker should pass stdin through unchanged');

      await new Promise(resolve => setTimeout(resolve, 150));
      const exited = sleeper.exitCode !== null || sleeper.signalCode !== null;
      let processAlive = !exited;
      if (processAlive) {
        try {
          process.kill(sleeper.pid, 0);
        } catch {
          processAlive = false;
        }
      }
      assert.strictEqual(processAlive, false, 'SessionEnd marker should stop the observer process when the last lease ends');

      const leaseDir = path.join(projectStorageDir, '.observer-sessions');
      const leaseFiles = fs.existsSync(leaseDir) ? fs.readdirSync(leaseDir).filter(name => name.endsWith('.json')) : [];
      assert.strictEqual(leaseFiles.length, 0, 'SessionEnd marker should remove the finished session lease');
      assert.strictEqual(fs.existsSync(pidFile), false, 'SessionEnd marker should remove the observer pid file after stopping it');
    } finally {
      sleeper.kill();
      cleanupTestDir(testDir);
    }
  })) passed++; else failed++;

  if (await asyncTest('dev server hook transforms yarn dev to tmux session', async () => {
    // The auto-tmux dev hook transforms dev commands (yarn dev, npm run dev, etc.)
    const hookCommand = getHookCommandByDescription(

@@ -24,7 +24,7 @@ async function runTests() {
  let store
  try {
    store = await import(pathToFileURL(storePath).href)
  } catch (_err) {
  } catch (err) {
    console.log('\n[warn] Skipping: build .opencode first (cd .opencode && npm run build)\n')
    process.exit(0)
  }

@@ -60,50 +60,6 @@ function runTests() {
    assert.ok(fs.existsSync(home), 'Home dir should exist');
  })) passed++; else failed++;

  if (test('getHomeDir prefers HOME override when set', () => {
    const originalHome = process.env.HOME;
    const originalUserProfile = process.env.USERPROFILE;
    const fakeHome = path.join(process.cwd(), 'tmp-home-override');
    try {
      process.env.HOME = fakeHome;
      process.env.USERPROFILE = '';
      assert.strictEqual(utils.getHomeDir(), fakeHome);
    } finally {
      if (originalHome === undefined) {
        delete process.env.HOME;
      } else {
        process.env.HOME = originalHome;
      }
      if (originalUserProfile === undefined) {
        delete process.env.USERPROFILE;
      } else {
        process.env.USERPROFILE = originalUserProfile;
      }
    }
  })) passed++; else failed++;

  if (test('getHomeDir falls back to USERPROFILE when HOME is empty', () => {
    const originalHome = process.env.HOME;
    const originalUserProfile = process.env.USERPROFILE;
    const fakeHome = path.join(process.cwd(), 'tmp-userprofile-override');
    try {
      process.env.HOME = '';
      process.env.USERPROFILE = fakeHome;
      assert.strictEqual(utils.getHomeDir(), fakeHome);
    } finally {
      if (originalHome === undefined) {
        delete process.env.HOME;
      } else {
        process.env.HOME = originalHome;
      }
      if (originalUserProfile === undefined) {
        delete process.env.USERPROFILE;
      } else {
        process.env.USERPROFILE = originalUserProfile;
      }
    }
  })) passed++; else failed++;

  if (test('getClaudeDir returns path under home', () => {
    const claudeDir = utils.getClaudeDir();
    const homeDir = utils.getHomeDir();