Compare commits


3 Commits

Affaan Mustafa · 5196647681 · chore: checkpoint hermes-generated ops skills · 2026-04-02 15:14:20 -07:00

Affaan Mustafa · 4813ed753f · feat: consolidate all Anthropic plugins into ECC v2.0.0
Ports functionality from 10+ separate plugins into ECC so users only
need one plugin installed. Consolidates: pr-review-toolkit, feature-dev,
commit-commands, hookify, code-simplifier, security-guidance,
frontend-design, explanatory-output-style, and personal skills.

New agents (8): code-architect, code-explorer, code-simplifier,
comment-analyzer, conversation-analyzer, pr-test-analyzer,
silent-failure-hunter, type-design-analyzer

New commands (9): commit, commit-push-pr, clean-gone, review-pr,
feature-dev, hookify, hookify-list, hookify-configure, hookify-help

New skills (8): frontend-design, hookify-rules, github-ops,
knowledge-ops, lead-intelligence, oura-health, pmx-guidelines, remotion

Enhanced skills (8): article-writing, content-engine, market-research,
investor-materials, investor-outreach, x-api, security-scan,
autonomous-loops — merged with personal skill content

New hook: security-reminder.py (pattern-based OWASP vulnerability
warnings on file edits)

Totals: 36 agents, 69 commands, 128 skills, 29 hook scripts
2026-03-31 21:55:43 -07:00
Affaan Mustafa · 19755f6c52 · feat: add hermes-generated ops skills · 2026-03-25 02:41:08 -07:00
122 changed files with 6997 additions and 4257 deletions
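The commit above adds a `security-reminder.py` hook described as pattern-based OWASP vulnerability warnings on file edits. A minimal sketch of how such a hook could work is below; the pattern table and function names are hypothetical, since the hook's actual code is not part of this diff:

```python
import re

# Hypothetical rule table; the real security-reminder.py's patterns are not shown in this diff.
OWASP_PATTERNS = {
    "possible SQL injection (string-built query)": re.compile(
        r"execute\(\s*f?[\"'].*(SELECT|INSERT|UPDATE|DELETE)", re.IGNORECASE
    ),
    "hardcoded secret": re.compile(
        r"(api_key|password|secret)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE
    ),
    "eval() on dynamic input": re.compile(r"\beval\s*\("),
}

def scan_edit(file_path: str, new_content: str) -> list[str]:
    """Return advisory warnings for a just-edited file (warn, never block)."""
    return [
        f"{file_path}: {message}"
        for message, pattern in OWASP_PATTERNS.items()
        if pattern.search(new_content)
    ]
```

A PostToolUse-style hook would call `scan_edit` with the edited path and contents, then surface the warnings for the agent to see.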


@@ -13,8 +13,8 @@
 {
 "name": "everything-claude-code",
 "source": "./",
-"description": "The most comprehensive Claude Code plugin — 14+ agents, 56+ skills, 33+ commands, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
-"version": "1.9.0",
+"description": "The complete Claude Code plugin — 36 agents, 128 skills, 69 commands. Consolidates all official Anthropic plugins (pr-review-toolkit, feature-dev, commit-commands, hookify, code-simplifier, security-guidance, frontend-design) into one package.",
+"version": "2.0.0",
 "author": {
 "name": "Affaan Mustafa",
 "email": "me@affaanmustafa.com"


@@ -1,7 +1,7 @@
 {
 "name": "everything-claude-code",
-"version": "1.9.0",
-"description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use",
+"version": "2.0.0",
+"description": "The complete Claude Code plugin - 36 agents, 128 skills, 69 commands, and production hooks. Consolidates all official Anthropic plugins (pr-review-toolkit, feature-dev, commit-commands, hookify, code-simplifier, security-guidance, frontend-design, explanatory-output-style) into one battle-tested package.",
 "author": {
 "name": "Affaan Mustafa",
 "url": "https://x.com/affaanmustafa"
@@ -14,12 +14,20 @@
 "agents",
 "skills",
 "hooks",
 "commands",
 "rules",
 "tdd",
 "code-review",
 "security",
 "workflow",
 "automation",
-"best-practices"
-]
+"best-practices",
+"pr-review",
+"feature-dev",
+"hookify",
+"frontend-design"
+],
+"commands": "../commands",
+"skills": "../skills",
+"agents": "../agents"
+}

.gitignore

@@ -83,9 +83,6 @@ temp/
 *.bak
 *.backup
-# Rust build artifacts
-ecc2/target/
-# Bootstrap pipeline outputs
 # Generated lock files in tool subdirectories
 .opencode/package-lock.json


@@ -1,7 +1,7 @@
 {
 "name": "ecc-universal",
-"version": "1.9.0",
-"description": "Everything Claude Code (ECC) plugin for OpenCode - agents, commands, hooks, and skills",
+"version": "2.0.0",
+"description": "Everything Claude Code (ECC) 2.0 preview for OpenCode - cross-harness agents, commands, hooks, skills, and operator workflows",
 "main": "dist/index.js",
 "types": "dist/index.d.ts",
 "type": "module",


@@ -1,8 +1,8 @@
 # Everything Claude Code (ECC) — Agent Instructions
-This is a **production-ready AI coding plugin** providing 28 specialized agents, 125 skills, 60 commands, and automated hook workflows for software development.
+This is a **production-ready AI coding plugin and harness system** providing 36 specialized agents, 128 skills, 69 commands, and automated hook workflows for software development and operator workflows.
-**Version:** 1.9.0
+**Version:** 2.0.0 (preview)
 ## Core Principles
@@ -141,9 +141,9 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementation
 ## Project Structure
 ```
-agents/ — 28 specialized subagents
-skills/ — 117 workflow skills and domain knowledge
-commands/ — 60 slash commands
+agents/ — 36 specialized subagents
+skills/ — 128 workflow skills and domain knowledge
+commands/ — 69 slash commands
 hooks/ — Trigger-based automations
 rules/ — Always-follow guidelines (common + per-language)
 scripts/ — Cross-platform Node.js utilities


@@ -1,5 +1,30 @@
 # Changelog
+## 2.0.0 - 2026-04-01
+### Highlights
+- Public ECC 2.0 preview release surface aligned across root package metadata, docs, and release collateral.
+- Hermes is now part of the public ECC story via a sanitized setup guide, generated-skills surface, and operator workflow documentation.
+- Launch pack added for same-day rollout: release notes, X thread, LinkedIn post, article outline, launch checklist, and short-form video scripts.
+- Repo version drift fixed between `package.json`, `package-lock.json`, `.opencode/package.json`, `AGENTS.md`, and `VERSION`.
+### New Docs
+- `docs/HERMES-SETUP.md` — sanitized public guide for running Hermes with ECC as the operator surface
+- `docs/releases/2.0.0-preview/release-notes.md`
+- `docs/releases/2.0.0-preview/x-thread.md`
+- `docs/releases/2.0.0-preview/linkedin-post.md`
+- `docs/releases/2.0.0-preview/video-shorts.md`
+- `docs/releases/2.0.0-preview/article-outline.md`
+- `docs/releases/2.0.0-preview/launch-checklist.md`
+### Positioning
+- ECC 2.0 is framed as a cross-harness operating system for agentic work, not only a Claude Code plugin.
+- Hermes is positioned as the opinionated operator shell sitting on top of ECC skills, MCPs, hooks, crons, and workflow assets.
+- The public preview intentionally ships the documented setup and launch assets first; additional private/local integrations can land incrementally.
 ## 1.9.0 - 2026-03-20
 ### Highlights


@@ -38,7 +38,9 @@
 Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, hooks, commands, rules, and MCP configurations evolved over 10+ months of intensive daily use building real products.
-Works across **Claude Code**, **Codex**, **Cowork**, and other AI agent harnesses.
+Works across **Claude Code**, **Codex**, **Cowork**, and other AI agent harnesses, with **Hermes x ECC** now documented as a public preview operator stack.
+**New in ECC 2.0 preview:** [Hermes setup guide](docs/HERMES-SETUP.md) · [release notes](docs/releases/2.0.0-preview/release-notes.md) · [launch pack](docs/releases/2.0.0-preview/launch-checklist.md)
 ---
@@ -84,6 +86,14 @@ This repo is the raw code only. The guides explain everything.
 ## What's New
+### v2.0.0 Preview — Hermes x ECC Operator Stack (Apr 2026)
+- **Hermes x ECC public preview** — ECC now documents a sanitized Hermes operator setup built around ECC skills, MCPs, hooks, crons, and generated workflow packs.
+- **Launch-ready release pack** — Added release notes, X thread, LinkedIn draft, article outline, launch checklist, and short-form video scripts for same-day distribution.
+- **Cross-harness positioning cleanup** — Public repo metadata now matches the 2.0 direction already reflected in the Claude plugin manifest and internal architecture docs.
+- **Version drift fixed** — Root `package.json`, lockfile, `VERSION`, `.opencode/package.json`, and `AGENTS.md` are back in sync.
+- **Preview boundary made explicit** — Public docs show the setup surface and workflow map first; private auth, secrets, and local operator state remain intentionally out of repo.
 ### v1.9.0 — Selective Install & Language Expansion (Mar 2026)
 - **Selective install architecture** — Manifest-driven install pipeline with `install-plan.js` and `install-apply.js` for targeted component installation. State store tracks what's installed and enables incremental updates.
@@ -212,7 +222,7 @@ For manual install instructions see the README in the `rules/` folder.
 /plugin list everything-claude-code@everything-claude-code
 ```
-**That's it!** You now have access to 28 agents, 125 skills, and 60 commands.
+**That's it!** You now have access to 36 agents, 128 skills, and 69 commands.
 ---
@@ -1083,9 +1093,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.
 | Feature | Claude Code | OpenCode | Status |
 |---------|-------------|----------|--------|
-| Agents | ✅ 28 agents | ✅ 12 agents | **Claude Code leads** |
-| Commands | ✅ 60 commands | ✅ 31 commands | **Claude Code leads** |
-| Skills | ✅ 125 skills | ✅ 37 skills | **Claude Code leads** |
+| Agents | ✅ 36 agents | ✅ 12 agents | **Claude Code leads** |
+| Commands | ✅ 69 commands | ✅ 31 commands | **Claude Code leads** |
+| Skills | ✅ 128 skills | ✅ 37 skills | **Claude Code leads** |
 | Hooks | ✅ 8 event types | ✅ 11 events | **OpenCode has more!** |
 | Rules | ✅ 29 rules | ✅ 13 instructions | **Claude Code leads** |
 | MCP Servers | ✅ 14 servers | ✅ Full | **Full parity** |
@@ -1204,7 +1214,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
 | **Context File** | CLAUDE.md + AGENTS.md | AGENTS.md | AGENTS.md | AGENTS.md |
 | **Secret Detection** | Hook-based | beforeSubmitPrompt hook | Sandbox-based | Hook-based |
 | **Auto-Format** | PostToolUse hook | afterFileEdit hook | N/A | file.edited hook |
-| **Version** | Plugin | Plugin | Reference config | 1.9.0 |
+| **Version** | Plugin | Plugin | Reference config | 2.0.0 preview |
 **Key architectural decisions:**
 - **AGENTS.md** at root is the universal cross-tool file (read by all 4 tools)


@@ -1 +1 @@
-0.1.0
+2.0.0

agents/code-architect.md

@@ -0,0 +1,69 @@
---
name: code-architect
description: Designs feature architectures by analyzing existing codebase patterns and conventions, then providing comprehensive implementation blueprints with specific files to create/modify, component designs, data flows, and build sequences.
model: sonnet
tools: [Read, Grep, Glob, Bash]
---
# Code Architect Agent
You design feature architectures based on deep understanding of the existing codebase.
## Process
### 1. Pattern Analysis
- Study existing code organization and naming conventions
- Identify architectural patterns in use (MVC, feature-based, layered, etc.)
- Note testing patterns and conventions
- Understand the dependency graph
### 2. Architecture Design
- Design the feature to fit naturally into existing patterns
- Choose the simplest architecture that meets requirements
- Consider scalability, but don't over-engineer
- Make confident decisions with clear rationale
### 3. Implementation Blueprint
Provide for each component:
- **File path**: Where to create or modify
- **Purpose**: What this file does
- **Key interfaces**: Types, function signatures
- **Dependencies**: What it imports and what imports it
- **Data flow**: How data moves through the component
### 4. Build Sequence
Order the implementation steps by dependency:
1. Types and interfaces first
2. Core business logic
3. Integration layer (API, database)
4. UI components
5. Tests (or interleaved with TDD)
6. Documentation
## Output Format
```markdown
## Architecture: [Feature Name]
### Design Decisions
- Decision 1: [Rationale]
- Decision 2: [Rationale]
### Files to Create
| File | Purpose | Priority |
|------|---------|----------|
### Files to Modify
| File | Changes | Priority |
|------|---------|----------|
### Data Flow
[Description or diagram]
### Build Sequence
1. Step 1 (depends on: nothing)
2. Step 2 (depends on: step 1)
...
```

agents/code-explorer.md

@@ -0,0 +1,65 @@
---
name: code-explorer
description: Deeply analyzes existing codebase features by tracing execution paths, mapping architecture layers, understanding patterns and abstractions, and documenting dependencies to inform new development.
model: sonnet
tools: [Read, Grep, Glob, Bash]
---
# Code Explorer Agent
You deeply analyze codebases to understand how existing features work.
## Analysis Process
### 1. Entry Point Discovery
- Find the main entry points for the feature/area being explored
- Trace from user action (UI click, API call) through the stack
### 2. Execution Path Tracing
- Follow the call chain from entry to completion
- Note branching logic and conditional paths
- Identify async boundaries and data transformations
- Map error handling and edge case paths
### 3. Architecture Layer Mapping
- Identify which layers the code touches (UI, API, service, data)
- Understand the boundaries between layers
- Note how layers communicate (props, events, API calls, shared state)
### 4. Pattern Recognition
- Identify design patterns in use (repository, factory, observer, etc.)
- Note abstractions and their purposes
- Understand naming conventions and code organization principles
### 5. Dependency Documentation
- Map external dependencies (libraries, services, APIs)
- Identify internal dependencies between modules
- Note shared utilities and common patterns
## Output Format
```markdown
## Exploration: [Feature/Area Name]
### Entry Points
- [Entry point 1]: [How it's triggered]
### Execution Flow
1. [Step] → [Step] → [Step]
### Architecture Insights
- [Pattern]: [Where and why it's used]
### Key Files
| File | Role | Importance |
|------|------|------------|
### Dependencies
- External: [lib1, lib2]
- Internal: [module1 → module2]
### Recommendations for New Development
- Follow [pattern] for consistency
- Reuse [utility] for [purpose]
- Avoid [anti-pattern] because [reason]
```

agents/code-simplifier.md

@@ -0,0 +1,50 @@
---
name: code-simplifier
description: Simplifies and refines code for clarity, consistency, and maintainability while preserving all functionality. Focuses on recently modified code unless instructed otherwise.
model: sonnet
tools: [Read, Write, Edit, Bash, Grep, Glob]
---
# Code Simplifier Agent
You simplify code while preserving all functionality. Focus on recently modified code unless told otherwise.
## Principles
1. **Clarity over brevity**: Readable code beats clever code
2. **Consistency**: Match the project's existing patterns and style
3. **Preserve behavior**: Every simplification must be functionally equivalent
4. **Respect boundaries**: Follow project standards from CLAUDE.md
## Simplification Targets
### Structure
- Extract deeply nested logic into named functions
- Replace complex conditionals with early returns
- Simplify callback chains with async/await
- Remove dead code and unused imports
### Readability
- Use descriptive variable names (no single letters except loop indices)
- Avoid nested ternary operators — use if/else or extracted functions
- Break long function chains into intermediate named variables
- Use destructuring to clarify object access
### Patterns
- Replace imperative loops with declarative array methods where clearer
- Use const by default, let only when mutation is required
- Prefer function declarations over arrow functions for named functions
- Use optional chaining and nullish coalescing appropriately
### Quality
- Remove console.log statements
- Remove commented-out code
- Consolidate duplicate logic
- Simplify overly abstract code that only has one use site
## Approach
1. Read the changed files (check `git diff` or specified files)
2. Identify simplification opportunities
3. Apply changes preserving exact functionality
4. Verify no behavioral changes were introduced


@@ -0,0 +1,43 @@
---
name: comment-analyzer
description: Use this agent when analyzing code comments for accuracy, completeness, and long-term maintainability. This includes after generating large documentation comments or docstrings, before finalizing a PR that adds or modifies comments, when reviewing existing comments for potential technical debt or comment rot, and when verifying that comments accurately reflect the code they describe.
model: sonnet
tools: [Read, Grep, Glob, Bash]
---
# Comment Analyzer Agent
You are a code comment analysis specialist. Your job is to ensure every comment in the codebase is accurate, helpful, and maintainable.
## Analysis Framework
### 1. Factual Accuracy
- Verify every claim in comments against the actual code
- Check that parameter descriptions match function signatures
- Ensure return value documentation matches implementation
- Flag outdated references to removed/renamed entities
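The signature check above can be partly automated. A sketch using Python's `inspect` module (illustrative only, with a deliberately mismatched example function, not part of the plugin):

```python
import inspect
import re

def find_param_mismatches(func):
    """Compare a function's signature with its ':param name:' docstring entries."""
    sig_params = set(inspect.signature(func).parameters)
    documented = set(re.findall(r":param (\w+):", inspect.getdoc(func) or ""))
    return {
        "stale": sorted(documented - sig_params),        # documented, not in signature
        "undocumented": sorted(sig_params - documented), # in signature, not documented
    }

def example(a, c):
    """Add two operands.

    :param a: first operand
    :param b: second operand
    """
    return a + c
```

Here `find_param_mismatches(example)` reports `b` as stale and `c` as undocumented.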
### 2. Completeness Assessment
- Check that complex logic has adequate explanation
- Verify edge cases and side effects are documented
- Ensure public API functions have complete documentation
- Check that "why" comments explain non-obvious decisions
### 3. Long-term Value
- Flag comments that just restate the code (low value)
- Identify comments that will break when code changes (fragile)
- Check for TODO/FIXME/HACK comments that need resolution
- Evaluate whether comments add genuine insight
### 4. Misleading Elements
- Find comments that contradict the code
- Flag stale comments referencing old behavior
- Identify over-promised or under-promised behavior descriptions
## Output Format
Provide advisory feedback only — do not modify files directly. Group findings by severity:
- **Inaccurate**: Comments that contradict the code (highest priority)
- **Stale**: Comments referencing removed/changed functionality
- **Incomplete**: Missing important context or edge cases
- **Low-value**: Comments that just restate obvious code


@@ -0,0 +1,51 @@
---
name: conversation-analyzer
description: Use this agent when analyzing conversation transcripts to find behaviors worth preventing with hooks. Triggered by /hookify without arguments.
model: sonnet
tools: [Read, Grep]
---
# Conversation Analyzer Agent
You analyze conversation history to identify problematic Claude Code behaviors that should be prevented with hooks.
## What to Look For
### Explicit Corrections
- "No, don't do that"
- "Stop doing X"
- "I said NOT to..."
- "That's wrong, use Y instead"
### Frustrated Reactions
- User reverting changes Claude made
- Repeated "no" or "wrong" responses
- User manually fixing Claude's output
- Escalating frustration in tone
### Repeated Issues
- Same mistake appearing multiple times in the conversation
- Claude repeatedly using a tool in an undesired way
- Patterns of behavior the user keeps correcting
### Reverted Changes
- `git checkout -- file` or `git restore file` after Claude's edit
- User undoing or reverting Claude's work
- Re-editing files Claude just edited
## Output Format
For each identified behavior:
```yaml
behavior: "Description of what Claude did wrong"
frequency: "How often it occurred"
severity: high|medium|low
suggested_rule:
name: "descriptive-rule-name"
event: bash|file|stop|prompt
pattern: "regex pattern to match"
action: block|warn
message: "What to show when triggered"
```
Prioritize high-frequency, high-severity behaviors first.


@@ -0,0 +1,43 @@
---
name: pr-test-analyzer
description: Use this agent when reviewing a pull request for test coverage quality and completeness. Invoked after a PR is created or updated to ensure tests adequately cover new functionality and edge cases.
model: sonnet
tools: [Read, Grep, Glob, Bash]
---
# PR Test Analyzer Agent
You review test coverage quality and completeness for pull requests.
## Analysis Process
### 1. Identify Changed Code
- Parse the PR diff to find new/modified functions, classes, and modules
- Map changed code to existing test files
- Identify untested new code paths
### 2. Behavioral Coverage Analysis
- Check that each new feature has corresponding test cases
- Verify edge cases are covered (null, empty, boundary values, error states)
- Ensure error handling paths are tested
- Check that integration points have integration tests
### 3. Test Quality Assessment
- Verify tests actually assert meaningful behavior (not just "no throw")
- Check for proper test isolation (no shared mutable state)
- Ensure test descriptions accurately describe what's being tested
- Look for flaky test patterns (timing, ordering, external dependencies)
### 4. Coverage Gap Identification
Rate each gap by impact (1-10), focusing on preventing real bugs:
- **Critical gaps (8-10)**: Core business logic, security paths, data integrity
- **Important gaps (5-7)**: Error handling, edge cases, integration boundaries
- **Nice-to-have (1-4)**: UI variations, logging, non-critical paths
## Output Format
Provide a structured report:
1. **Coverage Summary**: What's tested vs what's not
2. **Critical Gaps**: Must-fix before merge
3. **Improvement Suggestions**: Would strengthen confidence
4. **Positive Observations**: Well-tested areas worth noting


@@ -0,0 +1,47 @@
---
name: silent-failure-hunter
description: Use this agent when reviewing code changes to identify silent failures, inadequate error handling, and inappropriate fallback behavior. Invoke after completing work that involves error handling, catch blocks, fallback logic, or any code that could suppress errors.
model: sonnet
tools: [Read, Grep, Glob, Bash]
---
# Silent Failure Hunter Agent
You have zero tolerance for silent failures. Your mission is to find every place where errors could be swallowed, logged-and-forgotten, or hidden behind inappropriate fallbacks.
## Hunt Targets
### 1. Empty Catch Blocks
- `catch (e) {}` — swallowed errors
- `catch (e) { /* ignore */ }` — intentionally swallowed but still dangerous
- `catch (e) { return null }` — error converted to null without logging
### 2. Inadequate Logging
- `catch (e) { console.log(e) }` — logged but not handled
- Logging without context (no function name, no input data)
- Logging at wrong severity level (error logged as info/debug)
### 3. Dangerous Fallbacks
- `catch (e) { return defaultValue }` — masks the error from callers
- `.catch(() => [])` — promise errors silently returning empty data
- Fallback values that make bugs harder to detect downstream
### 4. Error Propagation Issues
- Functions that catch and re-throw without the original stack trace
- Error types being lost (generic Error instead of specific types)
- Async errors without proper await/catch chains
### 5. Missing Error Handling
- Async functions without try/catch or .catch()
- Network requests without timeout or error handling
- File operations without existence checks
- Database operations without transaction rollback
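A before/after sketch of hunt target 3, the dangerous fallback (hypothetical names, not plugin code):

```python
import logging

logger = logging.getLogger(__name__)

# Hunted pattern: the error is converted to a default, hiding the failure from callers.
def get_user_count_silent(fetch):
    try:
        return fetch()
    except Exception:
        return 0  # "zero users" and "fetch failed" are now indistinguishable

# Preferred: log with context at error severity, then re-raise so callers can react.
def get_user_count(fetch):
    try:
        return fetch()
    except Exception:
        logger.exception("get_user_count: fetch failed")
        raise
```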
## Output Format
For each finding:
- **Location**: File and line number
- **Severity**: Critical / Important / Advisory
- **Issue**: What's wrong
- **Impact**: What happens when this fails silently
- **Fix**: Specific recommendation


@@ -0,0 +1,48 @@
---
name: type-design-analyzer
description: Use this agent for expert analysis of type design. Use when introducing new types, during PR review of type changes, or when refactoring types. Evaluates encapsulation, invariant expression, and enforcement.
model: sonnet
tools: [Read, Grep, Glob, Bash]
---
# Type Design Analyzer Agent
You are a type system design expert. Your goal is to make illegal states unrepresentable.
## Evaluation Criteria (each rated 1-10)
### 1. Encapsulation
- Are internal implementation details hidden?
- Can the type's invariants be violated from outside?
- Are mutation points controlled and minimal?
- Score 10: Fully opaque type with controlled API
- Score 1: All fields public, no access control
### 2. Invariant Expression
- Do the types encode business rules?
- Are impossible states prevented at the type level?
- Does the type system catch errors at compile time vs runtime?
- Score 10: Type makes invalid states impossible to construct
- Score 1: Plain strings/numbers with runtime validation only
### 3. Invariant Usefulness
- Do the invariants prevent real bugs?
- Are they too restrictive (preventing valid use cases)?
- Do they align with business domain requirements?
- Score 10: Invariants prevent common, costly bugs
- Score 1: Over-engineered constraints with no practical value
### 4. Enforcement
- Are invariants enforced by the type system (not just conventions)?
- Can invariants be bypassed via casts or escape hatches?
- Are factory functions / constructors the only creation path?
- Score 10: Invariants enforced by compiler, no escape hatches
- Score 1: Invariants are just comments, easily violated
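A generic sketch of these criteria (not from the plugin): a frozen type whose constructor is the only creation path, so the range invariant holds for every live instance:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Percentage:
    """A value guaranteed to lie in [0, 100]; frozen=True blocks later mutation."""
    value: float

    def __post_init__(self):
        # Enforced at construction: no Percentage can exist outside the range.
        if not 0 <= self.value <= 100:
            raise ValueError(f"percentage out of range: {self.value}")
```

Because construction is the only way in, downstream code taking a `Percentage` never needs to re-validate.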
## Output Format
For each type reviewed:
- Type name and location
- Scores (Encapsulation, Invariant Expression, Usefulness, Enforcement)
- Overall rating and qualitative assessment
- Specific improvement suggestions with code examples

commands/clean-gone.md

@@ -0,0 +1,30 @@
---
description: Cleans up all git branches marked as [gone] (branches that have been deleted on the remote but still exist locally), including removing associated worktrees
---
Clean up stale local branches that have been deleted on the remote.
## Steps
1. Fetch and prune remote tracking refs:
```bash
git fetch --prune
```
2. Find branches marked as [gone]:
```bash
git branch -vv | grep ': gone]' | awk '{print $1}'
```
3. For each gone branch, remove associated worktrees if any:
```bash
git worktree list
git worktree remove <path> --force
```
4. Delete the gone branches:
```bash
git branch -D <branch-name>
```
5. Report what was cleaned up


@@ -0,0 +1,30 @@
---
description: Commit, push, and open a PR
---
Commit all changes, push to remote, and create a pull request in one workflow.
## Steps
1. Run in parallel: `git status`, `git diff`, `git log --oneline -5`, check current branch tracking
2. Stage and commit changes following conventional commit format
3. Create a new branch if on main/master: `git checkout -b <descriptive-branch-name>`
4. Push to remote with `-u` flag: `git push -u origin <branch>`
5. Create PR using gh CLI:
```bash
gh pr create --title "short title" --body "$(cat <<'EOF'
## Summary
<1-3 bullet points>
## Test plan
- [ ] Testing checklist items
EOF
)"
```
6. Return the PR URL
**Rules:**
- Keep PR title under 70 characters
- Analyze ALL commits in the branch (not just the latest)
- Never force push
- Never skip hooks

commands/commit.md

@@ -0,0 +1,28 @@
---
description: Create a git commit
---
Create a single git commit from staged and unstaged changes with an appropriate message.
## Steps
1. Run `git status` (never use `-uall` flag) and `git diff` in parallel to see all changes
2. Run `git log --oneline -5` to match commit message style
3. Analyze changes and draft a concise commit message following conventional commits format: `<type>: <description>`
- Types: feat, fix, refactor, docs, test, chore, perf, ci
- Focus on "why" not "what"
4. Stage relevant files (prefer specific files over `git add -A`)
5. Create the commit using a HEREDOC for the message:
```bash
git commit -m "$(cat <<'EOF'
type: description
EOF
)"
```
6. Run `git status` to verify success
**Rules:**
- Never skip hooks (--no-verify) or bypass signing
- Never amend existing commits unless explicitly asked
- Do not commit files that likely contain secrets (.env, credentials.json)
- If pre-commit hook fails, fix the issue and create a NEW commit
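The `<type>: <description>` format from step 3 can be checked mechanically. A sketch (the allowed types mirror the list above; the regex and function are illustrative, not part of the plugin):

```python
import re

# Types from step 3; an optional "(scope)" is also accepted, e.g. "fix(api): ...".
COMMIT_RE = re.compile(r"^(feat|fix|refactor|docs|test|chore|perf|ci)(\([\w-]+\))?: \S")

def is_conventional(message: str) -> bool:
    """Check only the first line of a commit message against the convention."""
    return COMMIT_RE.match(message.split("\n", 1)[0]) is not None
```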

commands/feature-dev.md

@@ -0,0 +1,45 @@
---
description: Guided feature development with codebase understanding and architecture focus
---
A structured 7-phase feature development workflow that emphasizes understanding existing code before writing new code.
## Phases
### Phase 1: Discovery
- Read the user's feature request carefully
- Identify key requirements, constraints, and acceptance criteria
- Ask clarifying questions if the request is ambiguous
### Phase 2: Codebase Exploration
- Use the **code-explorer** agent to deeply analyze relevant existing code
- Trace execution paths and map architecture layers
- Understand existing patterns, abstractions, and conventions
- Document dependencies and integration points
### Phase 3: Clarifying Questions
- Present findings from exploration to the user
- Ask targeted questions about design decisions, edge cases, and preferences
- **Wait for user response before proceeding**
### Phase 4: Architecture Design
- Use the **code-architect** agent to design the feature architecture
- Provide implementation blueprint: files to create/modify, component designs, data flows
- Present the plan to the user for approval
- **Wait for user approval before implementing**
### Phase 5: Implementation
- Implement the feature following the approved architecture
- Use TDD approach: write tests first, then implementation
- Follow existing codebase patterns and conventions
- Make small, focused commits
### Phase 6: Quality Review
- Use the **code-reviewer** agent to review the implementation
- Address Critical and Important issues
- Verify test coverage meets requirements
### Phase 7: Summary
- Summarize what was built and how it integrates
- List any follow-up items or known limitations
- Provide testing instructions


@@ -0,0 +1,14 @@
---
description: Enable or disable hookify rules interactively
---
Interactively enable or disable existing hookify rules.
## Steps
1. Find all `.claude/hookify.*.local.md` files
2. Read current state of each rule
3. Present list with current enabled/disabled status
4. Ask user which rules to toggle
5. Update the `enabled:` field in selected rule files
6. Confirm changes

commands/hookify-help.md

@@ -0,0 +1,42 @@
---
description: Get help with the hookify system
---
Display comprehensive hookify documentation.
## Hook System Overview
Hookify creates rule files that integrate with Claude Code's hook system to prevent unwanted behaviors.
### Event Types
- **bash**: Triggers on Bash tool use — match command patterns
- **file**: Triggers on Write/Edit tool use — match file paths
- **stop**: Triggers when session ends — final checks
- **prompt**: Triggers on user message submission — match input patterns
- **all**: Triggers on all events
### Rule File Format
Files are stored as `.claude/hookify.{name}.local.md`:
```yaml
---
name: descriptive-name
enabled: true
event: bash|file|stop|prompt|all
action: block|warn
pattern: "regex pattern to match"
---
Message to display when rule triggers.
Supports multiple lines.
```
### Commands
- `/hookify [description]` — Create new rules (auto-analyzes conversation if no args)
- `/hookify-list` — View all rules
- `/hookify-configure` — Toggle rules on/off
### Pattern Tips
- Use regex syntax (pipes for OR, brackets for groups)
- For bash: match against the full command string
- For file: match against the file path
- Test patterns before deploying
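To illustrate how a rule's `pattern` is applied (a sketch only; the plugin's actual matching code is not shown in this diff), here is a bash-event rule checked against a command string, with a hypothetical `no-force-push` rule mirroring the frontmatter fields above:

```python
import re

# Hypothetical bash-event rule, mirroring the frontmatter fields above.
rule = {
    "name": "no-force-push",
    "enabled": True,
    "event": "bash",
    "action": "block",
    "pattern": r"git push .*(--force|-f)\b",
}

def check_bash_command(rule: dict, command: str):
    """Return the rule's action if an enabled bash rule matches, else None."""
    if not rule["enabled"] or rule["event"] not in ("bash", "all"):
        return None
    if re.search(rule["pattern"], command):
        return rule["action"]
    return None
```

A disabled rule, or a rule for a different event type, never fires regardless of its pattern.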

commands/hookify-list.md

@@ -0,0 +1,16 @@
---
description: List all configured hookify rules
---
Find and display all hookify rules in a formatted table.
## Steps
1. Find all `.claude/hookify.*.local.md` files using Glob
2. Read each file's frontmatter (name, enabled, event, action, pattern)
3. Display as table:
| Rule | Enabled | Event | Pattern | File |
|------|---------|-------|---------|------|
4. Show count and helpful footer about `/hookify-configure` for modifications

commands/hookify.md

@@ -0,0 +1,45 @@
---
description: Create hooks to prevent unwanted behaviors from conversation analysis or explicit instructions
---
Create hook rules to prevent unwanted Claude Code behaviors by analyzing conversation patterns or explicit user instructions.
## Usage
`/hookify [description of behavior to prevent]`
If no arguments provided, analyze the current conversation to find behaviors worth preventing.
## Workflow
### Step 1: Gather Behavior Info
- **With arguments**: Parse the user's description of the unwanted behavior
- **Without arguments**: Use the conversation-analyzer agent to find:
- Explicit corrections ("no, don't do that", "stop doing X")
- Frustrated reactions to repeated mistakes
- Reverted changes
- Repeated similar issues
### Step 2: Present Findings
Show the user what behaviors were identified and proposed hook rules:
- Behavior description
- Proposed event type (bash/file/stop/prompt/all)
- Proposed pattern/matcher
- Proposed action (block/warn)
### Step 3: Generate Rule Files
For each approved rule, create a file at `.claude/hookify.{name}.local.md`:
```yaml
---
name: rule-name
enabled: true
event: bash|file|stop|prompt|all
action: block|warn
pattern: "regex pattern"
---
Message shown when rule triggers.
```
### Step 4: Confirm
Report created rules and how to manage them (`/hookify-list`, `/hookify-configure`).

commands/review-pr.md

@@ -0,0 +1,36 @@
---
description: Comprehensive PR review using specialized agents
---
Run a comprehensive, multi-perspective review of a pull request.
## Usage
`/review-pr [PR-number-or-URL] [--focus=comments|tests|errors|types|code|simplify]`
If no PR is specified, review the current branch's PR. If no focus is specified, run all reviews.
## Steps
1. **Identify the PR**: Use `gh pr view` to get PR details, changed files, and diff
2. **Find project guidelines**: Look for CLAUDE.md, .eslintrc, tsconfig.json, etc.
3. **Launch specialized review agents** (in parallel where possible):
| Agent | Focus |
|-------|-------|
| code-reviewer | Code quality, bugs, security, style guide adherence |
| comment-analyzer | Comment accuracy, completeness, maintainability |
| pr-test-analyzer | Test coverage quality and completeness |
| silent-failure-hunter | Silent failures, inadequate error handling |
| type-design-analyzer | Type design, invariants, encapsulation |
| code-simplifier | Code clarity, consistency, maintainability |
4. **Aggregate results**: Collect findings, deduplicate, rank by severity
5. **Report**: Present organized findings grouped by severity (Critical > Important > Advisory)
## Confidence Scoring
Only report issues with confidence ≥ 80 by default:
- **Critical (90-100)**: Bugs, security vulnerabilities, data loss risks
- **Important (80-89)**: Style violations, missing tests, code quality
- **Advisory (<80)**: Suggestions, nice-to-haves (only if explicitly requested)
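The severity banding above can be sketched as a small helper. This is an illustration only; the input shape (description, confidence) is assumed, and the actual review pipeline may aggregate findings differently:

```python
def bucket_findings(findings, threshold=80, include_advisory=False):
    """Group review findings into the severity bands described above.

    Only items at or above `threshold` are reported unless advisory
    output is explicitly requested.
    """
    report = {"Critical": [], "Important": [], "Advisory": []}
    for desc, conf in findings:
        if conf >= 90:
            report["Critical"].append(desc)
        elif conf >= threshold:
            report["Important"].append(desc)
        elif include_advisory:
            report["Advisory"].append(desc)
    return report

out = bucket_findings([("SQL injection", 97), ("missing test", 85), ("rename var", 60)])
assert out["Critical"] == ["SQL injection"]
assert out["Important"] == ["missing test"]
assert out["Advisory"] == []
```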

docs/HERMES-SETUP.md

@@ -0,0 +1,110 @@
# Hermes x ECC Setup
Hermes is the operator shell. ECC is the reusable system behind it.
This guide is the public, sanitized version of the Hermes stack used to run content, outreach, research, sales ops, finance checks, and engineering workflows from one terminal-native surface.
## What Ships Publicly
- ECC skills, agents, commands, hooks, and MCP configs from this repo
- Hermes-generated workflow skills that are stable enough to reuse
- a documented operator topology for chat, crons, workspace memory, and distribution flows
- launch collateral for sharing the stack publicly
This guide does **not** include private secrets, live tokens, personal data, or a raw `~/.hermes` export.
## Architecture
Use Hermes as the front door and ECC as the reusable workflow substrate.
```text
Telegram / CLI / TUI
        ↓
Hermes
        ↓
ECC skills + hooks + MCPs + generated workflow packs
        ↓
Google Drive / GitHub / browser automation / research APIs / media tools / finance tools
```
## Public Workspace Map
Use this as the minimal surface to reproduce the setup without leaking private state.
- `~/.hermes/config.yaml`
- model routing
- MCP server registration
- plugin loading
- `~/.hermes/skills/ecc-imports/`
- ECC skills copied in for Hermes-native use
- `skills/hermes-generated/`
- operator patterns distilled from repeated Hermes sessions
- `~/.hermes/plugins/`
- bridge plugins for hooks, reminders, and workflow-specific tool glue
- `~/.hermes/cron/jobs.json`
- scheduled automation runs with explicit prompts and channels
- `~/.hermes/workspace/`
- business, ops, health, content, and memory artifacts
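As a sketch of reading the cron surface, the snippet below loads scheduled runs from a jobs file. The `name`/`schedule`/`prompt`/`channel` schema is assumed for illustration; the real `~/.hermes/cron/jobs.json` layout may differ:

```python
import json

def load_cron_jobs(path: str) -> list[tuple[str, str, str]]:
    """Load scheduled automation runs from a Hermes-style jobs file.

    Returns (name, schedule, channel) tuples for each job entry.
    """
    with open(path, encoding="utf-8") as fh:
        jobs = json.load(fh)
    return [(j["name"], j["schedule"], j["channel"]) for j in jobs]
```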
## Recommended Capability Stack
### Core
- Hermes for chat, cron, orchestration, and workspace state
- ECC for skills, rules, prompts, and cross-harness conventions
- GitHub + Context7 + Exa + Firecrawl + Playwright as the baseline MCP layer
### Content
- FFmpeg for local edit and assembly
- Remotion for programmable clips
- fal.ai for image/video generation
- ElevenLabs for voice, cleanup, and audio packaging
- CapCut or VectCutAPI for final social-native polish
### Business Ops
- Google Drive as the system of record for docs, sheets, decks, and research dumps
- Stripe for revenue and payment operations
- GitHub for engineering execution
- Telegram and iMessage-style channels for urgent nudges and approvals
## What Still Requires Local Auth
These stay local and should be configured per operator:
- Google OAuth token for Drive / Docs / Sheets / Slides
- X / LinkedIn / outbound distribution credentials
- Stripe keys
- browser automation credentials and stealth/proxy settings
- any CRM or project system credentials such as Linear or Apollo
- Apple Health export or ingest path if health automations are enabled
## Suggested Bring-Up Order
1. Install ECC and verify the baseline harness setup.
2. Install Hermes and point it at ECC-imported skills.
3. Register the MCP servers you actually use every day.
4. Authenticate Google Drive first, then GitHub, then distribution channels.
5. Start with a small cron surface: readiness check, content accountability, inbox triage, revenue monitor.
6. Only then add heavier personal workflows like health, relationship graphing, or outbound sequencing.
## Why Hermes x ECC
This stack is useful when you want:
- one terminal-native place to run business and engineering operations
- reusable skills instead of one-off prompts
- automation that can nudge, audit, and escalate
- a public repo that shows the system shape without exposing your private operator state
## Public Preview Scope
ECC 2.0 preview documents the Hermes surface and ships launch collateral now.
The remaining private pieces can be layered later:
- additional sanitized templates
- richer public examples
- more generated workflow packs
- tighter CRM and Google Workspace integrations


@@ -0,0 +1,57 @@
# Article Outline - ECC 2.0 Preview
## Working Title
How I Turned ECC Into an Operator System With Hermes
## Core Argument
Most people treat AI coding tools like isolated chat products.
The leverage comes when you treat the harness, workflow surface, and operator stack as a system:
- reusable skills
- stable hooks
- MCP-backed tools
- cron/accountability loops
- one operator shell tying the pieces together
## Structure
### 1. The Problem
- too many tools
- too much context switching
- too many workflows stuck in personal muscle memory
### 2. What ECC Already Solved
- reusable skills
- cross-harness portability
- hook discipline
- verification and security patterns
### 3. Why Hermes Was the Missing Layer
- chat + TUI + cron + workspace memory
- business and content ops live next to engineering
- terminal-native operator flow instead of app sprawl
### 4. What Ships in the Public Preview
- sanitized Hermes setup guide
- generated workflow skills
- release and distribution collateral
- cross-harness 2.0 positioning
### 5. What Is Still Private or Still Coming
- secrets and auth
- personal datasets
- some operator-specific automation packs
- deeper CRM/finance/Google Workspace integrations
### 6. Closing Point
The goal is not “use my exact stack.”
The goal is to build an operator system that compounds.


@@ -0,0 +1,37 @@
# ECC 2.0 Preview Launch Checklist
## Repo
- update public version surface
- publish Hermes setup guide
- verify release notes and launch assets are committed
- leave private tokens, personal docs, and raw workspace exports out of repo
## Content
- post X thread from `x-thread.md`
- post LinkedIn draft from `linkedin-post.md`
- turn one of the short clips in `video-shorts.md` into a 30-60 second video
- use `article-outline.md` to draft the longer writeup
## Demo Asset Suggestions
- terminal view of Hermes + ECC side by side
- Drive playbook -> brief -> post workflow
- cron or readiness artifact showing operator accountability
- one short proof-of-work clip instead of a polished brand reel
## Call To Action
- repo link
- Hermes setup doc link
- one sentence on why this is preview and what comes next
## Preview Messaging
Use language like:
- "public preview"
- "sanitized operator stack"
- "shipping the reusable surface first"
- "private/local integrations land later"


@@ -0,0 +1,29 @@
# LinkedIn Draft - ECC 2.0 Preview
ECC 2.0 preview is live.
This is the first public version of a setup I have been converging toward for a while: one operator stack for engineering, research, content, outreach, and business operations.
The practical shift is that ECC is no longer framed as only a Claude Code plugin or config bundle.
It is a cross-harness operating system for agentic work:
- reusable skills instead of one-off prompts
- hooks and workflow automation instead of manual checklists
- MCP-backed access to research, docs, browser automation, finance, and distribution surfaces
- a Hermes operator shell on top for chat, cron, accountability, and orchestration
For the public preview I kept the repo honest.
I did not dump a private workspace into GitHub. I shipped:
- a sanitized Hermes setup guide
- launch-ready release collateral
- the reusable parts of the workflow surface
- aligned versioning across the public repo
If you are building with AI coding tools daily, the real leverage is not just prompting better.
It is reducing surface area, consolidating workflows, and making your operator system measurable and repeatable.
I will keep adding the remaining pieces incrementally, but this preview is enough to start from.


@@ -0,0 +1,51 @@
# ECC 2.0 Preview Release Notes
## Positioning
ECC 2.0 preview turns the repo into a more explicit cross-harness operating system.
Claude Code is still a core target. Codex, OpenCode, and Cursor remain first-class. Hermes now joins the public story as the operator shell that can sit on top of ECC.
## What Changed
- Public repo metadata now aligns with the 2.0 direction already visible in the plugin manifest and architecture docs.
- Hermes setup is documented as a sanitized, reusable operator stack.
- Hermes-generated skills are now easier to explain as part of ECC's reusable workflow layer.
- Same-day launch collateral is included in-repo so the release can ship without rebuilding the messaging from scratch.
## Why This Matters
ECC is no longer just “Claude Code tips in a repo.”
It is a reusable system for:
- engineering execution
- research and market intelligence
- outbound and comms operations
- content production and distribution
- operator accountability through hooks, cron jobs, and workflow packs
## Preview Boundaries
This is a public preview, not the final form.
What ships now:
- cross-harness positioning
- Hermes setup documentation
- launch content pack
- version alignment across root package surfaces
What can land later:
- richer sanitized Hermes templates
- more public MCP presets
- deeper CRM and Google Workspace playbooks
- additional operator packs distilled from live usage
## Suggested Upgrade Motion
1. Update to the ECC 2.0 public surface.
2. Read the [Hermes setup guide](../../HERMES-SETUP.md).
3. Pick the workflows you want public first: content, research, outreach, or engineering.
4. Use the launch pack in this folder to announce the release without re-drafting everything from scratch.


@@ -0,0 +1,79 @@
# Short-Form Video Scripts - ECC 2.0 Preview
These are designed for 30-60 second clips. Record directly from the terminal and Hermes UI where possible.
## Clip 1 - "I Replaced Five Apps With One Operator Stack"
### Hook
"I got tired of switching between coding tools, research tabs, notes, and outreach dashboards, so I collapsed it into one operator stack."
### Beat Outline
1. Show ECC repo and Hermes side by side.
2. Explain: ECC holds the reusable skills, hooks, and MCP patterns.
3. Explain: Hermes is the operator shell that runs the workflows.
4. Flash examples:
- coding
- research
- content
- cron nudges
5. Close with: "This is ECC 2.0 preview."
### On-Screen Text
- "one operator stack"
- "skills + MCPs + automations"
- "Hermes x ECC"
## Clip 2 - "How I Turn Drive Playbooks Into Content"
### Hook
"This is how I turn raw operating docs into posts and videos without starting from a blank page."
### Beat Outline
1. Open the Ito playbooks folder.
2. Show Hermes pulling the source material into a working brief.
3. Show ECC release/content docs as the packaging layer.
4. Show output targets:
- X thread
- LinkedIn post
- short clip script
5. Close with: "One source, multiple outputs."
### On-Screen Text
- "Drive -> brief -> posts -> clips"
- "no blank page"
## Clip 3 - "Why Hermes Matters"
### Hook
"The point of Hermes is not another chat app. It is operator control."
### Beat Outline
1. Show a cron or readiness check artifact.
2. Explain that Hermes can audit, remind, and route work.
3. Show ECC skills and MCP-backed workflows behind the scenes.
4. Explain why terminal-native matters:
- fewer tabs
- better repeatability
- faster execution
5. Close with the preview framing.
### On-Screen Text
- "operator control"
- "terminal-native"
- "ECC 2.0 preview"
## Recording Notes
- Prefer live typing plus quick jump cuts over long screen recordings.
- Keep each beat under 5 seconds.
- Use captions aggressively.
- End each clip with the repo URL or doc title on screen.


@@ -0,0 +1,59 @@
# X Thread Draft - ECC 2.0 Preview
1/ ECC 2.0 preview is live.
This is the first public pass at the setup I actually want to run daily: one operator stack for coding, research, content, outreach, and business ops.
2/ The shift is simple:
ECC is no longer just a Claude Code config pack.
It is a cross-harness operating system for agentic work.
3/ Claude Code, Codex, Cursor, and OpenCode are still in the mix.
Now Hermes joins the public story as the operator shell on top of ECC skills, MCPs, hooks, and workflow packs.
4/ I wanted fewer surfaces, not more.
Less “which app do I open?”
More “one place where the system already knows the workflows.”
5/ The preview ships a sanitized Hermes setup guide, not a raw private workspace dump.
That means:
- no secrets
- no personal tokens
- no fake polish
- just the reusable system shape
6/ I also added release collateral directly in the repo:
- release notes
- launch checklist
- LinkedIn draft
- short-form video scripts
7/ Why?
Because half the battle with agent systems is not building them.
It is operationalizing them:
- shipping content
- tracking revenue
- triaging comms
- keeping research and execution in one loop
8/ If you want the repo:
read the Hermes setup doc first, then lift the parts you need.
You do not need my exact stack.
You need a system that compounds.
9/ ECC 2.0 is still preview.
The public docs ship now.
The rest lands incrementally as the operator surface hardens.
10/ Repo + docs:
<repo-link>
Hermes x ECC setup:
<repo-link>/blob/main/docs/HERMES-SETUP.md

ecc2/Cargo.lock (generated): file diff suppressed because it is too large


@@ -1,53 +0,0 @@
[package]
name = "ecc-tui"
version = "0.1.0"
edition = "2021"
description = "ECC 2.0 — Agentic IDE control plane with TUI dashboard"
license = "MIT"
authors = ["Affaan Mustafa <me@affaanmustafa.com>"]
repository = "https://github.com/affaan-m/everything-claude-code"
[dependencies]
# TUI
ratatui = "0.29"
crossterm = "0.28"
# Async runtime
tokio = { version = "1", features = ["full"] }
# State store
rusqlite = { version = "0.32", features = ["bundled"] }
# Git integration
git2 = "0.19"
# Serialization
serde = { version = "1", features = ["derive"] }
serde_json = "1"
toml = "0.8"
# CLI
clap = { version = "4", features = ["derive"] }
# Logging & tracing
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# Error handling
anyhow = "1"
thiserror = "2"
libc = "0.2"
# Time
chrono = { version = "0.4", features = ["serde"] }
# UUID for session IDs
uuid = { version = "1", features = ["v4"] }
# Directory paths
dirs = "6"
[profile.release]
lto = true
codegen-units = 1
strip = true


@@ -1,33 +0,0 @@
use anyhow::Result;
use serde::{Deserialize, Serialize};
use crate::session::store::StateStore;
/// Message types for inter-agent communication.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum MessageType {
/// Task handoff from one agent to another
TaskHandoff { task: String, context: String },
/// Agent requesting information from another
Query { question: String },
/// Response to a query
Response { answer: String },
/// Notification of completion
Completed { summary: String, files_changed: Vec<String> },
/// Conflict detected (e.g., two agents editing the same file)
Conflict { file: String, description: String },
}
/// Send a structured message between sessions.
pub fn send(db: &StateStore, from: &str, to: &str, msg: &MessageType) -> Result<()> {
let content = serde_json::to_string(msg)?;
let msg_type = match msg {
MessageType::TaskHandoff { .. } => "task_handoff",
MessageType::Query { .. } => "query",
MessageType::Response { .. } => "response",
MessageType::Completed { .. } => "completed",
MessageType::Conflict { .. } => "conflict",
};
db.send_message(from, to, &content, msg_type)?;
Ok(())
}


@@ -1,54 +0,0 @@
use anyhow::Result;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub db_path: PathBuf,
pub worktree_root: PathBuf,
pub max_parallel_sessions: usize,
pub max_parallel_worktrees: usize,
pub session_timeout_secs: u64,
pub heartbeat_interval_secs: u64,
pub default_agent: String,
pub theme: Theme,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Theme {
Dark,
Light,
}
impl Default for Config {
fn default() -> Self {
let home = dirs::home_dir().unwrap_or_else(|| PathBuf::from("."));
Self {
db_path: home.join(".claude").join("ecc2.db"),
worktree_root: PathBuf::from("/tmp/ecc-worktrees"),
max_parallel_sessions: 8,
max_parallel_worktrees: 6,
session_timeout_secs: 3600,
heartbeat_interval_secs: 30,
default_agent: "claude".to_string(),
theme: Theme::Dark,
}
}
}
impl Config {
pub fn load() -> Result<Self> {
let config_path = dirs::home_dir()
.unwrap_or_else(|| PathBuf::from("."))
.join(".claude")
.join("ecc2.toml");
if config_path.exists() {
let content = std::fs::read_to_string(&config_path)?;
let config: Config = toml::from_str(&content)?;
Ok(config)
} else {
Ok(Config::default())
}
}
}


@@ -1,94 +0,0 @@
mod config;
mod session;
mod tui;
mod worktree;
mod observability;
mod comms;
use anyhow::Result;
use clap::Parser;
use tracing_subscriber::EnvFilter;
#[derive(Parser, Debug)]
#[command(name = "ecc", version, about = "ECC 2.0 — Agentic IDE control plane")]
struct Cli {
#[command(subcommand)]
command: Option<Commands>,
}
#[derive(clap::Subcommand, Debug)]
enum Commands {
/// Launch the TUI dashboard
Dashboard,
/// Start a new agent session
Start {
/// Task description for the agent
#[arg(short, long)]
task: String,
/// Agent type (claude, codex, custom)
#[arg(short, long, default_value = "claude")]
agent: String,
/// Create a dedicated worktree for this session
#[arg(short, long)]
worktree: bool,
},
/// List active sessions
Sessions,
/// Show session details
Status {
/// Session ID or alias
session_id: Option<String>,
},
/// Stop a running session
Stop {
/// Session ID or alias
session_id: String,
},
/// Run as background daemon
Daemon,
}
#[tokio::main]
async fn main() -> Result<()> {
tracing_subscriber::fmt()
.with_env_filter(EnvFilter::from_default_env())
.init();
let cli = Cli::parse();
let cfg = config::Config::load()?;
let db = session::store::StateStore::open(&cfg.db_path)?;
match cli.command {
Some(Commands::Dashboard) | None => {
tui::app::run(db, cfg).await?;
}
Some(Commands::Start { task, agent, worktree: use_worktree }) => {
let session_id = session::manager::create_session(
&db, &cfg, &task, &agent, use_worktree,
).await?;
println!("Session started: {session_id}");
}
Some(Commands::Sessions) => {
let sessions = session::manager::list_sessions(&db)?;
for s in sessions {
println!("{} [{}] {}", s.id, s.state, s.task);
}
}
Some(Commands::Status { session_id }) => {
let id = session_id.unwrap_or_else(|| "latest".to_string());
let status = session::manager::get_status(&db, &id)?;
println!("{status}");
}
Some(Commands::Stop { session_id }) => {
session::manager::stop_session(&db, &session_id).await?;
println!("Session stopped: {session_id}");
}
Some(Commands::Daemon) => {
println!("Starting ECC daemon...");
session::daemon::run(db, cfg).await?;
}
}
Ok(())
}


@@ -1,54 +0,0 @@
use anyhow::Result;
use serde::{Deserialize, Serialize};
use crate::session::store::StateStore;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCallEvent {
pub session_id: String,
pub tool_name: String,
pub input_summary: String,
pub output_summary: String,
pub duration_ms: u64,
pub risk_score: f64,
}
impl ToolCallEvent {
/// Compute risk score based on tool type and input patterns.
pub fn compute_risk(tool_name: &str, input: &str) -> f64 {
let mut score: f64 = 0.0;
// Destructive tools get higher base risk
match tool_name {
"Bash" => score += 0.3,
"Write" => score += 0.2,
"Edit" => score += 0.1,
_ => score += 0.05,
}
// Dangerous patterns in bash commands
if tool_name == "Bash" {
if input.contains("rm -rf") || input.contains("--force") {
score += 0.4;
}
if input.contains("git push") || input.contains("git reset") {
score += 0.3;
}
if input.contains("sudo") || input.contains("chmod 777") {
score += 0.5;
}
}
score.min(1.0)
}
}
pub fn log_tool_call(db: &StateStore, event: &ToolCallEvent) -> Result<()> {
db.send_message(
&event.session_id,
"observability",
&serde_json::to_string(event)?,
"tool_call",
)?;
Ok(())
}


@@ -1,46 +0,0 @@
use anyhow::Result;
use std::time::Duration;
use tokio::time;
use super::store::StateStore;
use super::SessionState;
use crate::config::Config;
/// Background daemon that monitors sessions, handles heartbeats,
/// and cleans up stale resources.
pub async fn run(db: StateStore, cfg: Config) -> Result<()> {
tracing::info!("ECC daemon started");
let heartbeat_interval = Duration::from_secs(cfg.heartbeat_interval_secs);
let timeout = Duration::from_secs(cfg.session_timeout_secs);
loop {
if let Err(e) = check_sessions(&db, timeout) {
tracing::error!("Session check failed: {e}");
}
time::sleep(heartbeat_interval).await;
}
}
fn check_sessions(db: &StateStore, timeout: Duration) -> Result<()> {
let sessions = db.list_sessions()?;
for session in sessions {
if session.state != SessionState::Running {
continue;
}
let elapsed = chrono::Utc::now()
.signed_duration_since(session.updated_at)
.to_std()
.unwrap_or(Duration::ZERO);
if elapsed > timeout {
tracing::warn!("Session {} timed out after {:?}", session.id, elapsed);
db.update_state(&session.id, &SessionState::Failed)?;
}
}
Ok(())
}


@@ -1,488 +0,0 @@
use anyhow::{Context, Result};
use std::fmt;
use std::path::{Path, PathBuf};
use std::process::Stdio;
use tokio::process::Command;
use super::store::StateStore;
use super::{Session, SessionMetrics, SessionState};
use crate::config::Config;
use crate::worktree;
pub async fn create_session(
db: &StateStore,
cfg: &Config,
task: &str,
agent_type: &str,
use_worktree: bool,
) -> Result<String> {
let repo_root =
std::env::current_dir().context("Failed to resolve current working directory")?;
let agent_program = agent_program(agent_type)?;
create_session_in_dir(
db,
cfg,
task,
agent_type,
use_worktree,
&repo_root,
&agent_program,
)
.await
}
pub fn list_sessions(db: &StateStore) -> Result<Vec<Session>> {
db.list_sessions()
}
pub fn get_status(db: &StateStore, id: &str) -> Result<SessionStatus> {
let session = resolve_session(db, id)?;
Ok(SessionStatus(session))
}
pub async fn stop_session(db: &StateStore, id: &str) -> Result<()> {
stop_session_with_options(db, id, true).await
}
fn agent_program(agent_type: &str) -> Result<PathBuf> {
match agent_type {
"claude" => Ok(PathBuf::from("claude")),
other => anyhow::bail!("Unsupported agent type: {other}"),
}
}
fn resolve_session(db: &StateStore, id: &str) -> Result<Session> {
let session = if id == "latest" {
db.get_latest_session()?
} else {
db.get_session(id)?
};
session.ok_or_else(|| anyhow::anyhow!("Session not found: {id}"))
}
async fn create_session_in_dir(
db: &StateStore,
cfg: &Config,
task: &str,
agent_type: &str,
use_worktree: bool,
repo_root: &Path,
agent_program: &Path,
) -> Result<String> {
let id = uuid::Uuid::new_v4().to_string()[..8].to_string();
let now = chrono::Utc::now();
let wt = if use_worktree {
Some(worktree::create_for_session_in_repo(&id, cfg, repo_root)?)
} else {
None
};
let session = Session {
id: id.clone(),
task: task.to_string(),
agent_type: agent_type.to_string(),
state: SessionState::Pending,
pid: None,
worktree: wt,
created_at: now,
updated_at: now,
metrics: SessionMetrics::default(),
};
db.insert_session(&session)?;
let working_dir = session
.worktree
.as_ref()
.map(|worktree| worktree.path.as_path())
.unwrap_or(repo_root);
match spawn_claude_code(agent_program, task, &session.id, working_dir).await {
Ok(pid) => {
db.update_pid(&session.id, Some(pid))?;
db.update_state(&session.id, &SessionState::Running)?;
Ok(session.id)
}
Err(error) => {
db.update_state(&session.id, &SessionState::Failed)?;
if let Some(worktree) = session.worktree.as_ref() {
let _ = crate::worktree::remove(&worktree.path);
}
Err(error.context(format!("Failed to start session {}", session.id)))
}
}
}
async fn spawn_claude_code(
agent_program: &Path,
task: &str,
session_id: &str,
working_dir: &Path,
) -> Result<u32> {
let child = Command::new(agent_program)
.arg("--print")
.arg("--name")
.arg(format!("ecc-{session_id}"))
.arg(task)
.current_dir(working_dir)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()
.with_context(|| {
format!(
"Failed to spawn Claude Code from {}",
agent_program.display()
)
})?;
child
.id()
.ok_or_else(|| anyhow::anyhow!("Claude Code did not expose a process id"))
}
async fn stop_session_with_options(
db: &StateStore,
id: &str,
cleanup_worktree: bool,
) -> Result<()> {
let session = resolve_session(db, id)?;
if let Some(pid) = session.pid {
kill_process(pid).await?;
}
db.update_pid(&session.id, None)?;
db.update_state(&session.id, &SessionState::Stopped)?;
if cleanup_worktree {
if let Some(worktree) = session.worktree.as_ref() {
crate::worktree::remove(&worktree.path)?;
}
}
Ok(())
}
#[cfg(unix)]
async fn kill_process(pid: u32) -> Result<()> {
send_signal(pid, libc::SIGTERM)?;
tokio::time::sleep(std::time::Duration::from_millis(1200)).await;
send_signal(pid, libc::SIGKILL)?;
Ok(())
}
#[cfg(unix)]
fn send_signal(pid: u32, signal: i32) -> Result<()> {
let outcome = unsafe { libc::kill(pid as i32, signal) };
if outcome == 0 {
return Ok(());
}
let error = std::io::Error::last_os_error();
if error.raw_os_error() == Some(libc::ESRCH) {
return Ok(());
}
Err(error).with_context(|| format!("Failed to kill process {pid}"))
}
#[cfg(not(unix))]
async fn kill_process(pid: u32) -> Result<()> {
let status = Command::new("taskkill")
.args(["/F", "/PID", &pid.to_string()])
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null())
.status()
.await
.with_context(|| format!("Failed to invoke taskkill for process {pid}"))?;
if status.success() {
Ok(())
} else {
anyhow::bail!("taskkill failed for process {pid}");
}
}
pub struct SessionStatus(Session);
impl fmt::Display for SessionStatus {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let s = &self.0;
writeln!(f, "Session: {}", s.id)?;
writeln!(f, "Task: {}", s.task)?;
writeln!(f, "Agent: {}", s.agent_type)?;
writeln!(f, "State: {}", s.state)?;
if let Some(pid) = s.pid {
writeln!(f, "PID: {}", pid)?;
}
if let Some(ref wt) = s.worktree {
writeln!(f, "Branch: {}", wt.branch)?;
writeln!(f, "Worktree: {}", wt.path.display())?;
}
writeln!(f, "Tokens: {}", s.metrics.tokens_used)?;
writeln!(f, "Tools: {}", s.metrics.tool_calls)?;
writeln!(f, "Files: {}", s.metrics.files_changed)?;
writeln!(f, "Cost: ${:.4}", s.metrics.cost_usd)?;
writeln!(f, "Created: {}", s.created_at)?;
write!(f, "Updated: {}", s.updated_at)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config::{Config, Theme};
use crate::session::{Session, SessionMetrics, SessionState};
use anyhow::{Context, Result};
use chrono::{Duration, Utc};
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};
use std::process::Command as StdCommand;
use std::thread;
use std::time::Duration as StdDuration;
struct TestDir {
path: PathBuf,
}
impl TestDir {
fn new(label: &str) -> Result<Self> {
let path =
std::env::temp_dir().join(format!("ecc2-{}-{}", label, uuid::Uuid::new_v4()));
fs::create_dir_all(&path)?;
Ok(Self { path })
}
fn path(&self) -> &Path {
&self.path
}
}
impl Drop for TestDir {
fn drop(&mut self) {
let _ = fs::remove_dir_all(&self.path);
}
}
fn build_config(root: &Path) -> Config {
Config {
db_path: root.join("state.db"),
worktree_root: root.join("worktrees"),
max_parallel_sessions: 4,
max_parallel_worktrees: 4,
session_timeout_secs: 60,
heartbeat_interval_secs: 5,
default_agent: "claude".to_string(),
theme: Theme::Dark,
}
}
fn build_session(id: &str, state: SessionState, updated_at: chrono::DateTime<Utc>) -> Session {
Session {
id: id.to_string(),
task: format!("task-{id}"),
agent_type: "claude".to_string(),
state,
pid: None,
worktree: None,
created_at: updated_at - Duration::minutes(1),
updated_at,
metrics: SessionMetrics::default(),
}
}
fn init_git_repo(path: &Path) -> Result<()> {
fs::create_dir_all(path)?;
run_git(path, ["init", "-q"])?;
fs::write(path.join("README.md"), "hello\n")?;
run_git(path, ["add", "README.md"])?;
run_git(
path,
[
"-c",
"user.name=ECC Tests",
"-c",
"user.email=ecc-tests@example.com",
"commit",
"-qm",
"init",
],
)?;
Ok(())
}
fn run_git<const N: usize>(path: &Path, args: [&str; N]) -> Result<()> {
let status = StdCommand::new("git")
.args(args)
.current_dir(path)
.status()
.with_context(|| format!("failed to run git in {}", path.display()))?;
if !status.success() {
anyhow::bail!("git command failed in {}", path.display());
}
Ok(())
}
fn write_fake_claude(root: &Path) -> Result<(PathBuf, PathBuf)> {
let script_path = root.join("fake-claude.sh");
let log_path = root.join("fake-claude.log");
let script = format!(
"#!/usr/bin/env python3\nimport os\nimport pathlib\nimport signal\nimport sys\nimport time\n\nlog_path = pathlib.Path(r\"{}\")\nlog_path.write_text(os.getcwd() + \"\\n\", encoding=\"utf-8\")\nwith log_path.open(\"a\", encoding=\"utf-8\") as handle:\n handle.write(\" \".join(sys.argv[1:]) + \"\\n\")\n\ndef handle_term(signum, frame):\n raise SystemExit(0)\n\nsignal.signal(signal.SIGTERM, handle_term)\nwhile True:\n time.sleep(0.1)\n",
log_path.display()
);
fs::write(&script_path, script)?;
let mut permissions = fs::metadata(&script_path)?.permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions)?;
Ok((script_path, log_path))
}
fn wait_for_file(path: &Path) -> Result<String> {
for _ in 0..50 {
if path.exists() {
return fs::read_to_string(path)
.with_context(|| format!("failed to read {}", path.display()));
}
thread::sleep(StdDuration::from_millis(20));
}
anyhow::bail!("timed out waiting for {}", path.display());
}
#[tokio::test(flavor = "current_thread")]
async fn create_session_spawns_process_and_marks_session_running() -> Result<()> {
let tempdir = TestDir::new("manager-create-session")?;
let repo_root = tempdir.path().join("repo");
init_git_repo(&repo_root)?;
let cfg = build_config(tempdir.path());
let db = StateStore::open(&cfg.db_path)?;
let (fake_claude, log_path) = write_fake_claude(tempdir.path())?;
let session_id = create_session_in_dir(
&db,
&cfg,
"implement lifecycle",
"claude",
false,
&repo_root,
&fake_claude,
)
.await?;
let session = db
.get_session(&session_id)?
.context("session should exist")?;
assert_eq!(session.state, SessionState::Running);
assert!(
session.pid.is_some(),
"spawned session should persist a pid"
);
let log = wait_for_file(&log_path)?;
assert!(log.contains(repo_root.to_string_lossy().as_ref()));
assert!(log.contains("--print"));
assert!(log.contains("implement lifecycle"));
stop_session_with_options(&db, &session_id, false).await?;
Ok(())
}
#[tokio::test(flavor = "current_thread")]
async fn stop_session_kills_process_and_optionally_cleans_worktree() -> Result<()> {
let tempdir = TestDir::new("manager-stop-session")?;
let repo_root = tempdir.path().join("repo");
init_git_repo(&repo_root)?;
let cfg = build_config(tempdir.path());
let db = StateStore::open(&cfg.db_path)?;
let (fake_claude, _) = write_fake_claude(tempdir.path())?;
let keep_id = create_session_in_dir(
&db,
&cfg,
"keep worktree",
"claude",
true,
&repo_root,
&fake_claude,
)
.await?;
let keep_session = db.get_session(&keep_id)?.context("keep session missing")?;
keep_session.pid.context("keep session pid missing")?;
let keep_worktree = keep_session
.worktree
.clone()
.context("keep session worktree missing")?
.path;
stop_session_with_options(&db, &keep_id, false).await?;
let stopped_keep = db
.get_session(&keep_id)?
.context("stopped keep session missing")?;
assert_eq!(stopped_keep.state, SessionState::Stopped);
assert_eq!(stopped_keep.pid, None);
assert!(
keep_worktree.exists(),
"worktree should remain when cleanup is disabled"
);
let cleanup_id = create_session_in_dir(
&db,
&cfg,
"cleanup worktree",
"claude",
true,
&repo_root,
&fake_claude,
)
.await?;
let cleanup_session = db
.get_session(&cleanup_id)?
.context("cleanup session missing")?;
let cleanup_worktree = cleanup_session
.worktree
.clone()
.context("cleanup session worktree missing")?
.path;
stop_session_with_options(&db, &cleanup_id, true).await?;
assert!(
!cleanup_worktree.exists(),
"worktree should be removed when cleanup is enabled"
);
Ok(())
}
#[test]
fn get_status_supports_latest_alias() -> Result<()> {
let tempdir = TestDir::new("manager-latest-status")?;
let cfg = build_config(tempdir.path());
let db = StateStore::open(&cfg.db_path)?;
let older = Utc::now() - Duration::minutes(2);
let newer = Utc::now();
db.insert_session(&build_session("older", SessionState::Running, older))?;
db.insert_session(&build_session("newer", SessionState::Idle, newer))?;
let status = get_status(&db, "latest")?;
assert_eq!(status.0.id, "newer");
Ok(())
}
}

View File

@@ -1,100 +0,0 @@
pub mod daemon;
pub mod manager;
pub mod store;
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::fmt;
use std::path::PathBuf;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Session {
pub id: String,
pub task: String,
pub agent_type: String,
pub state: SessionState,
pub pid: Option<u32>,
pub worktree: Option<WorktreeInfo>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub metrics: SessionMetrics,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub enum SessionState {
Pending,
Running,
Idle,
Completed,
Failed,
Stopped,
}
impl fmt::Display for SessionState {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SessionState::Pending => write!(f, "pending"),
SessionState::Running => write!(f, "running"),
SessionState::Idle => write!(f, "idle"),
SessionState::Completed => write!(f, "completed"),
SessionState::Failed => write!(f, "failed"),
SessionState::Stopped => write!(f, "stopped"),
}
}
}
impl SessionState {
pub fn can_transition_to(&self, next: &Self) -> bool {
if self == next {
return true;
}
matches!(
(self, next),
(
SessionState::Pending,
SessionState::Running | SessionState::Failed | SessionState::Stopped
) | (
SessionState::Running,
SessionState::Idle
| SessionState::Completed
| SessionState::Failed
| SessionState::Stopped
) | (
SessionState::Idle,
SessionState::Running
| SessionState::Completed
| SessionState::Failed
| SessionState::Stopped
) | (SessionState::Completed, SessionState::Stopped)
| (SessionState::Failed, SessionState::Stopped)
)
}
pub fn from_db_value(value: &str) -> Self {
match value {
"running" => SessionState::Running,
"idle" => SessionState::Idle,
"completed" => SessionState::Completed,
"failed" => SessionState::Failed,
"stopped" => SessionState::Stopped,
_ => SessionState::Pending,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorktreeInfo {
pub path: PathBuf,
pub branch: String,
pub base_branch: String,
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct SessionMetrics {
pub tokens_used: u64,
pub tool_calls: u64,
pub files_changed: u32,
pub duration_secs: u64,
pub cost_usd: f64,
}
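The transition table above reads: Pending may start, fail, or be stopped; Running and Idle may move between each other and terminate; Completed and Failed accept only Stopped. A self-contained replica for exercising the table (the trimmed enum and function names here are illustrative, not part of the crate):

```rust
// Standalone copy of the session state machine, trimmed to the transition
// check so the table can be exercised with plain assertions.
#[derive(PartialEq)]
enum S { Pending, Running, Idle, Completed, Failed, Stopped }

fn can_transition(from: &S, to: &S) -> bool {
    if from == to {
        return true; // self-transitions are always allowed
    }
    matches!(
        (from, to),
        (S::Pending, S::Running | S::Failed | S::Stopped)
            | (S::Running, S::Idle | S::Completed | S::Failed | S::Stopped)
            | (S::Idle, S::Running | S::Completed | S::Failed | S::Stopped)
            | (S::Completed, S::Stopped)
            | (S::Failed, S::Stopped)
    )
}

fn main() {
    assert!(can_transition(&S::Pending, &S::Running));
    assert!(can_transition(&S::Idle, &S::Running));       // idle sessions can resume
    assert!(!can_transition(&S::Completed, &S::Running)); // terminal except for Stopped
    assert!(!can_transition(&S::Stopped, &S::Pending));
}
```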

View File

@@ -1,350 +0,0 @@
use anyhow::{Context, Result};
use rusqlite::{Connection, OptionalExtension};
use std::path::Path;
use super::{Session, SessionMetrics, SessionState};
pub struct StateStore {
conn: Connection,
}
impl StateStore {
pub fn open(path: &Path) -> Result<Self> {
let conn = Connection::open(path)?;
let store = Self { conn };
store.init_schema()?;
Ok(store)
}
fn init_schema(&self) -> Result<()> {
self.conn.execute_batch(
"
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
task TEXT NOT NULL,
agent_type TEXT NOT NULL,
state TEXT NOT NULL DEFAULT 'pending',
pid INTEGER,
worktree_path TEXT,
worktree_branch TEXT,
worktree_base TEXT,
tokens_used INTEGER DEFAULT 0,
tool_calls INTEGER DEFAULT 0,
files_changed INTEGER DEFAULT 0,
duration_secs INTEGER DEFAULT 0,
cost_usd REAL DEFAULT 0.0,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS tool_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL REFERENCES sessions(id),
tool_name TEXT NOT NULL,
input_summary TEXT,
output_summary TEXT,
duration_ms INTEGER,
risk_score REAL DEFAULT 0.0,
timestamp TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
from_session TEXT NOT NULL,
to_session TEXT NOT NULL,
content TEXT NOT NULL,
msg_type TEXT NOT NULL DEFAULT 'info',
read INTEGER DEFAULT 0,
timestamp TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_sessions_state ON sessions(state);
CREATE INDEX IF NOT EXISTS idx_tool_log_session ON tool_log(session_id);
CREATE INDEX IF NOT EXISTS idx_messages_to ON messages(to_session, read);
",
)?;
self.ensure_session_columns()?;
Ok(())
}
fn ensure_session_columns(&self) -> Result<()> {
if !self.has_column("sessions", "pid")? {
self.conn
.execute("ALTER TABLE sessions ADD COLUMN pid INTEGER", [])
.context("Failed to add pid column to sessions table")?;
}
Ok(())
}
fn has_column(&self, table: &str, column: &str) -> Result<bool> {
let pragma = format!("PRAGMA table_info({table})");
let mut stmt = self.conn.prepare(&pragma)?;
let columns = stmt
.query_map([], |row| row.get::<_, String>(1))?
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(columns.iter().any(|existing| existing == column))
}
pub fn insert_session(&self, session: &Session) -> Result<()> {
self.conn.execute(
"INSERT INTO sessions (id, task, agent_type, state, pid, worktree_path, worktree_branch, worktree_base, created_at, updated_at)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10)",
rusqlite::params![
session.id,
session.task,
session.agent_type,
session.state.to_string(),
session.pid.map(i64::from),
session.worktree.as_ref().map(|w| w.path.to_string_lossy().to_string()),
session.worktree.as_ref().map(|w| w.branch.clone()),
session.worktree.as_ref().map(|w| w.base_branch.clone()),
session.created_at.to_rfc3339(),
session.updated_at.to_rfc3339(),
],
)?;
Ok(())
}
pub fn update_state(&self, session_id: &str, state: &SessionState) -> Result<()> {
let current_state = self
.conn
.query_row(
"SELECT state FROM sessions WHERE id = ?1",
[session_id],
|row| row.get::<_, String>(0),
)
.optional()?
.map(|raw| SessionState::from_db_value(&raw))
.ok_or_else(|| anyhow::anyhow!("Session not found: {session_id}"))?;
if !current_state.can_transition_to(state) {
anyhow::bail!(
"Invalid session state transition: {} -> {}",
current_state,
state
);
}
let updated = self.conn.execute(
"UPDATE sessions SET state = ?1, updated_at = ?2 WHERE id = ?3",
rusqlite::params![
state.to_string(),
chrono::Utc::now().to_rfc3339(),
session_id,
],
)?;
if updated == 0 {
anyhow::bail!("Session not found: {session_id}");
}
Ok(())
}
pub fn update_pid(&self, session_id: &str, pid: Option<u32>) -> Result<()> {
let updated = self.conn.execute(
"UPDATE sessions SET pid = ?1, updated_at = ?2 WHERE id = ?3",
rusqlite::params![
pid.map(i64::from),
chrono::Utc::now().to_rfc3339(),
session_id,
],
)?;
if updated == 0 {
anyhow::bail!("Session not found: {session_id}");
}
Ok(())
}
pub fn update_metrics(&self, session_id: &str, metrics: &SessionMetrics) -> Result<()> {
self.conn.execute(
"UPDATE sessions SET tokens_used = ?1, tool_calls = ?2, files_changed = ?3, duration_secs = ?4, cost_usd = ?5, updated_at = ?6 WHERE id = ?7",
rusqlite::params![
metrics.tokens_used,
metrics.tool_calls,
metrics.files_changed,
metrics.duration_secs,
metrics.cost_usd,
chrono::Utc::now().to_rfc3339(),
session_id,
],
)?;
Ok(())
}
pub fn list_sessions(&self) -> Result<Vec<Session>> {
let mut stmt = self.conn.prepare(
"SELECT id, task, agent_type, state, pid, worktree_path, worktree_branch, worktree_base,
tokens_used, tool_calls, files_changed, duration_secs, cost_usd,
created_at, updated_at
FROM sessions ORDER BY updated_at DESC",
)?;
let sessions = stmt
.query_map([], |row| {
let state_str: String = row.get(3)?;
let state = SessionState::from_db_value(&state_str);
let worktree_path: Option<String> = row.get(5)?;
let worktree = worktree_path.map(|p| super::WorktreeInfo {
path: std::path::PathBuf::from(p),
branch: row.get::<_, String>(6).unwrap_or_default(),
base_branch: row.get::<_, String>(7).unwrap_or_default(),
});
let created_str: String = row.get(13)?;
let updated_str: String = row.get(14)?;
Ok(Session {
id: row.get(0)?,
task: row.get(1)?,
agent_type: row.get(2)?,
state,
pid: row.get::<_, Option<u32>>(4)?,
worktree,
created_at: chrono::DateTime::parse_from_rfc3339(&created_str)
.unwrap_or_default()
.with_timezone(&chrono::Utc),
updated_at: chrono::DateTime::parse_from_rfc3339(&updated_str)
.unwrap_or_default()
.with_timezone(&chrono::Utc),
metrics: SessionMetrics {
tokens_used: row.get(8)?,
tool_calls: row.get(9)?,
files_changed: row.get(10)?,
duration_secs: row.get(11)?,
cost_usd: row.get(12)?,
},
})
})?
.collect::<Result<Vec<_>, _>>()?;
Ok(sessions)
}
pub fn get_latest_session(&self) -> Result<Option<Session>> {
Ok(self.list_sessions()?.into_iter().next())
}
pub fn get_session(&self, id: &str) -> Result<Option<Session>> {
let sessions = self.list_sessions()?;
Ok(sessions
.into_iter()
.find(|s| s.id == id || s.id.starts_with(id)))
}
pub fn send_message(&self, from: &str, to: &str, content: &str, msg_type: &str) -> Result<()> {
self.conn.execute(
"INSERT INTO messages (from_session, to_session, content, msg_type, timestamp)
VALUES (?1, ?2, ?3, ?4, ?5)",
rusqlite::params![from, to, content, msg_type, chrono::Utc::now().to_rfc3339()],
)?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::session::{Session, SessionMetrics, SessionState};
use chrono::{Duration, Utc};
use std::fs;
use std::path::{Path, PathBuf};
struct TestDir {
path: PathBuf,
}
impl TestDir {
fn new(label: &str) -> Result<Self> {
let path =
std::env::temp_dir().join(format!("ecc2-{}-{}", label, uuid::Uuid::new_v4()));
fs::create_dir_all(&path)?;
Ok(Self { path })
}
fn path(&self) -> &Path {
&self.path
}
}
impl Drop for TestDir {
fn drop(&mut self) {
let _ = fs::remove_dir_all(&self.path);
}
}
fn build_session(id: &str, state: SessionState) -> Session {
let now = Utc::now();
Session {
id: id.to_string(),
task: "task".to_string(),
agent_type: "claude".to_string(),
state,
pid: None,
worktree: None,
created_at: now - Duration::minutes(1),
updated_at: now,
metrics: SessionMetrics::default(),
}
}
#[test]
fn update_state_rejects_invalid_terminal_transition() -> Result<()> {
let tempdir = TestDir::new("store-invalid-transition")?;
let db = StateStore::open(&tempdir.path().join("state.db"))?;
db.insert_session(&build_session("done", SessionState::Completed))?;
let error = db
.update_state("done", &SessionState::Running)
.expect_err("completed sessions must not transition back to running");
assert!(error
.to_string()
.contains("Invalid session state transition"));
Ok(())
}
#[test]
fn open_migrates_existing_sessions_table_with_pid_column() -> Result<()> {
let tempdir = TestDir::new("store-migration")?;
let db_path = tempdir.path().join("state.db");
let conn = Connection::open(&db_path)?;
conn.execute_batch(
"
CREATE TABLE sessions (
id TEXT PRIMARY KEY,
task TEXT NOT NULL,
agent_type TEXT NOT NULL,
state TEXT NOT NULL DEFAULT 'pending',
worktree_path TEXT,
worktree_branch TEXT,
worktree_base TEXT,
tokens_used INTEGER DEFAULT 0,
tool_calls INTEGER DEFAULT 0,
files_changed INTEGER DEFAULT 0,
duration_secs INTEGER DEFAULT 0,
cost_usd REAL DEFAULT 0.0,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
",
)?;
drop(conn);
let db = StateStore::open(&db_path)?;
let mut stmt = db.conn.prepare("PRAGMA table_info(sessions)")?;
let column_names = stmt
.query_map([], |row| row.get::<_, String>(1))?
.collect::<std::result::Result<Vec<_>, _>>()?;
assert!(column_names.iter().any(|column| column == "pid"));
Ok(())
}
}

View File

@@ -1,52 +0,0 @@
use anyhow::Result;
use crossterm::{
event::{self, Event, KeyCode, KeyModifiers},
execute,
terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use ratatui::prelude::*;
use std::io;
use std::time::Duration;
use super::dashboard::Dashboard;
use crate::config::Config;
use crate::session::store::StateStore;
pub async fn run(db: StateStore, cfg: Config) -> Result<()> {
enable_raw_mode()?;
let mut stdout = io::stdout();
execute!(stdout, EnterAlternateScreen)?;
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
let mut dashboard = Dashboard::new(db, cfg);
loop {
terminal.draw(|frame| dashboard.render(frame))?;
if event::poll(Duration::from_millis(250))? {
if let Event::Key(key) = event::read()? {
match (key.modifiers, key.code) {
(KeyModifiers::CONTROL, KeyCode::Char('c')) => break,
(_, KeyCode::Char('q')) => break,
(_, KeyCode::Tab) => dashboard.next_pane(),
(KeyModifiers::SHIFT, KeyCode::BackTab) => dashboard.prev_pane(),
(_, KeyCode::Char('j')) | (_, KeyCode::Down) => dashboard.scroll_down(),
(_, KeyCode::Char('k')) | (_, KeyCode::Up) => dashboard.scroll_up(),
(_, KeyCode::Char('n')) => dashboard.new_session(),
(_, KeyCode::Char('s')) => dashboard.stop_selected(),
(_, KeyCode::Char('r')) => dashboard.refresh(),
(_, KeyCode::Char('?')) => dashboard.toggle_help(),
_ => {}
}
}
}
dashboard.tick().await;
}
disable_raw_mode()?;
execute!(terminal.backend_mut(), LeaveAlternateScreen)?;
Ok(())
}

View File

@@ -1,273 +0,0 @@
use ratatui::{
prelude::*,
widgets::{Block, Borders, List, ListItem, Paragraph, Tabs},
};
use crate::config::Config;
use crate::session::{Session, SessionState};
use crate::session::store::StateStore;
pub struct Dashboard {
db: StateStore,
cfg: Config,
sessions: Vec<Session>,
selected_pane: Pane,
selected_session: usize,
show_help: bool,
scroll_offset: usize,
}
#[derive(Debug, Clone, Copy, PartialEq)]
enum Pane {
Sessions,
Output,
Metrics,
}
impl Dashboard {
pub fn new(db: StateStore, cfg: Config) -> Self {
let sessions = db.list_sessions().unwrap_or_default();
Self {
db,
cfg,
sessions,
selected_pane: Pane::Sessions,
selected_session: 0,
show_help: false,
scroll_offset: 0,
}
}
pub fn render(&self, frame: &mut Frame) {
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([
Constraint::Length(3), // Header
Constraint::Min(10), // Main content
Constraint::Length(3), // Status bar
])
.split(frame.area());
self.render_header(frame, chunks[0]);
if self.show_help {
self.render_help(frame, chunks[1]);
} else {
let main_chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([
Constraint::Percentage(35), // Session list
Constraint::Percentage(65), // Output/details
])
.split(chunks[1]);
self.render_sessions(frame, main_chunks[0]);
let right_chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([
Constraint::Percentage(70), // Output
Constraint::Percentage(30), // Metrics
])
.split(main_chunks[1]);
self.render_output(frame, right_chunks[0]);
self.render_metrics(frame, right_chunks[1]);
}
self.render_status_bar(frame, chunks[2]);
}
fn render_header(&self, frame: &mut Frame, area: Rect) {
let running = self.sessions.iter().filter(|s| s.state == SessionState::Running).count();
let total = self.sessions.len();
let title = format!(" ECC 2.0 | {running} running / {total} total ");
let tabs = Tabs::new(vec!["Sessions", "Output", "Metrics"])
.block(Block::default().borders(Borders::ALL).title(title))
.select(match self.selected_pane {
Pane::Sessions => 0,
Pane::Output => 1,
Pane::Metrics => 2,
})
.highlight_style(Style::default().fg(Color::Cyan).add_modifier(Modifier::BOLD));
frame.render_widget(tabs, area);
}
fn render_sessions(&self, frame: &mut Frame, area: Rect) {
let items: Vec<ListItem> = self
.sessions
.iter()
.enumerate()
.map(|(i, s)| {
let state_icon = match s.state {
SessionState::Running => "",
SessionState::Idle => "",
SessionState::Completed => "",
SessionState::Failed => "",
SessionState::Stopped => "",
SessionState::Pending => "",
};
let style = if i == self.selected_session {
Style::default().fg(Color::Cyan).add_modifier(Modifier::BOLD)
} else {
Style::default()
};
let text = format!("{state_icon} {} [{}] {}", &s.id[..8.min(s.id.len())], s.agent_type, s.task);
ListItem::new(text).style(style)
})
.collect();
let border_style = if self.selected_pane == Pane::Sessions {
Style::default().fg(Color::Cyan)
} else {
Style::default()
};
let list = List::new(items).block(
Block::default()
.borders(Borders::ALL)
.title(" Sessions ")
.border_style(border_style),
);
frame.render_widget(list, area);
}
fn render_output(&self, frame: &mut Frame, area: Rect) {
let content = if let Some(session) = self.sessions.get(self.selected_session) {
format!("Agent output for session {}...\n\n(Live streaming coming soon)", session.id)
} else {
"No sessions. Press 'n' to start one.".to_string()
};
let border_style = if self.selected_pane == Pane::Output {
Style::default().fg(Color::Cyan)
} else {
Style::default()
};
let paragraph = Paragraph::new(content).block(
Block::default()
.borders(Borders::ALL)
.title(" Output ")
.border_style(border_style),
);
frame.render_widget(paragraph, area);
}
fn render_metrics(&self, frame: &mut Frame, area: Rect) {
let content = if let Some(session) = self.sessions.get(self.selected_session) {
let m = &session.metrics;
format!(
"Tokens: {} | Tools: {} | Files: {} | Cost: ${:.4} | Duration: {}s",
m.tokens_used, m.tool_calls, m.files_changed, m.cost_usd, m.duration_secs
)
} else {
"No metrics available".to_string()
};
let border_style = if self.selected_pane == Pane::Metrics {
Style::default().fg(Color::Cyan)
} else {
Style::default()
};
let paragraph = Paragraph::new(content).block(
Block::default()
.borders(Borders::ALL)
.title(" Metrics ")
.border_style(border_style),
);
frame.render_widget(paragraph, area);
}
fn render_status_bar(&self, frame: &mut Frame, area: Rect) {
let text = " [n]ew session [s]top [Tab] switch pane [j/k] scroll [?] help [q]uit ";
let paragraph = Paragraph::new(text)
.style(Style::default().fg(Color::DarkGray))
.block(Block::default().borders(Borders::ALL));
frame.render_widget(paragraph, area);
}
fn render_help(&self, frame: &mut Frame, area: Rect) {
let help = vec![
"Keyboard Shortcuts:",
"",
" n New session",
" s Stop selected session",
" Tab Next pane",
" S-Tab Previous pane",
" j/↓ Scroll down",
" k/↑ Scroll up",
" r Refresh",
" ? Toggle help",
" q/C-c Quit",
];
let paragraph = Paragraph::new(help.join("\n")).block(
Block::default()
.borders(Borders::ALL)
.title(" Help ")
.border_style(Style::default().fg(Color::Yellow)),
);
frame.render_widget(paragraph, area);
}
pub fn next_pane(&mut self) {
self.selected_pane = match self.selected_pane {
Pane::Sessions => Pane::Output,
Pane::Output => Pane::Metrics,
Pane::Metrics => Pane::Sessions,
};
}
pub fn prev_pane(&mut self) {
self.selected_pane = match self.selected_pane {
Pane::Sessions => Pane::Metrics,
Pane::Output => Pane::Sessions,
Pane::Metrics => Pane::Output,
};
}
pub fn scroll_down(&mut self) {
if self.selected_pane == Pane::Sessions && !self.sessions.is_empty() {
self.selected_session = (self.selected_session + 1).min(self.sessions.len() - 1);
} else {
self.scroll_offset = self.scroll_offset.saturating_add(1);
}
}
pub fn scroll_up(&mut self) {
if self.selected_pane == Pane::Sessions {
self.selected_session = self.selected_session.saturating_sub(1);
} else {
self.scroll_offset = self.scroll_offset.saturating_sub(1);
}
}
pub fn new_session(&mut self) {
// TODO: Open a dialog to create a new session
tracing::info!("New session dialog requested");
}
pub fn stop_selected(&mut self) {
if let Some(session) = self.sessions.get(self.selected_session) {
let _ = self.db.update_state(&session.id, &SessionState::Stopped);
self.refresh();
}
}
pub fn refresh(&mut self) {
self.sessions = self.db.list_sessions().unwrap_or_default();
}
pub fn toggle_help(&mut self) {
self.show_help = !self.show_help;
}
pub async fn tick(&mut self) {
        // Refresh the session list on every tick
self.sessions = self.db.list_sessions().unwrap_or_default();
}
}

View File

@@ -1,3 +0,0 @@
pub mod app;
mod dashboard;
mod widgets;

View File

@@ -1,6 +0,0 @@
// Custom TUI widgets for ECC 2.0
// TODO: Implement custom widgets:
// - TokenMeter: visual token usage bar with budget threshold
// - DiffViewer: side-by-side syntax-highlighted diff display
// - ProgressTimeline: session timeline with tool call markers
// - AgentTree: hierarchical view of parent/child agent sessions
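One way the TokenMeter described above might compute its fill, sketched without pulling in ratatui; the function signature and budget-threshold behavior are assumptions, not existing code:

```rust
// Hypothetical fill computation for the TokenMeter TODO above: map token
// usage onto a fixed-width bar and flag when usage crosses the budget.
fn token_meter_fill(used: u64, budget: u64, width: u16) -> (u16, bool) {
    if budget == 0 {
        // No budget configured: render a full bar and mark it over budget.
        return (width, true);
    }
    let ratio = used as f64 / budget as f64;
    let filled = (ratio.min(1.0) * width as f64).round() as u16;
    (filled, used > budget)
}

fn main() {
    // Half of a 100-token budget fills half of a 20-cell bar.
    assert_eq!(token_meter_fill(50, 100, 20), (10, false));
    // Overspend clamps the bar at full width but sets the over-budget flag.
    assert_eq!(token_meter_fill(150, 100, 20), (20, true));
}
```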

View File

@@ -1,99 +0,0 @@
use anyhow::{Context, Result};
use std::path::Path;
use std::process::Command;
use crate::config::Config;
use crate::session::WorktreeInfo;
/// Create a new git worktree for an agent session.
pub fn create_for_session(session_id: &str, cfg: &Config) -> Result<WorktreeInfo> {
let repo_root = std::env::current_dir().context("Failed to resolve repository root")?;
create_for_session_in_repo(session_id, cfg, &repo_root)
}
pub(crate) fn create_for_session_in_repo(
session_id: &str,
cfg: &Config,
repo_root: &Path,
) -> Result<WorktreeInfo> {
let branch = format!("ecc/{session_id}");
let path = cfg.worktree_root.join(session_id);
// Get current branch as base
let base = get_current_branch(repo_root)?;
std::fs::create_dir_all(&cfg.worktree_root)
.context("Failed to create worktree root directory")?;
let output = Command::new("git")
.arg("-C")
.arg(repo_root)
.args(["worktree", "add", "-b", &branch])
.arg(&path)
.arg("HEAD")
.output()
.context("Failed to run git worktree add")?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
anyhow::bail!("git worktree add failed: {stderr}");
}
tracing::info!(
"Created worktree at {} on branch {}",
path.display(),
branch
);
Ok(WorktreeInfo {
path,
branch,
base_branch: base,
})
}
/// Remove a worktree; the session branch itself is left in place.
pub fn remove(path: &Path) -> Result<()> {
let output = Command::new("git")
.arg("-C")
.arg(path)
.args(["worktree", "remove", "--force"])
.arg(path)
.output()
.context("Failed to remove worktree")?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
tracing::warn!("Worktree removal warning: {stderr}");
}
Ok(())
}
/// List all active worktrees.
pub fn list() -> Result<Vec<String>> {
let output = Command::new("git")
.args(["worktree", "list", "--porcelain"])
.output()
.context("Failed to list worktrees")?;
let stdout = String::from_utf8_lossy(&output.stdout);
let worktrees: Vec<String> = stdout
.lines()
.filter(|l| l.starts_with("worktree "))
.map(|l| l.trim_start_matches("worktree ").to_string())
.collect();
Ok(worktrees)
}
fn get_current_branch(repo_root: &Path) -> Result<String> {
let output = Command::new("git")
.arg("-C")
.arg(repo_root)
.args(["rev-parse", "--abbrev-ref", "HEAD"])
.output()
.context("Failed to get current branch")?;
Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
}

View File

@@ -116,6 +116,17 @@
}
],
"description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls"
},
{
"matcher": "Edit|Write|MultiEdit",
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:security-reminder\" \"scripts/hooks/security-reminder-wrapper.js\" \"standard,strict\"",
"timeout": 5
}
],
"description": "Security pattern reminder: warns about command injection, XSS, and other OWASP patterns in file edits"
}
],
"PreCompact": [

View File

@@ -90,7 +90,6 @@
"targets": [
"claude",
"cursor",
"antigravity",
"codex",
"opencode"
],
@@ -141,7 +140,6 @@
"targets": [
"claude",
"cursor",
"antigravity",
"codex",
"opencode"
],

package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "ecc-universal",
"version": "1.9.0",
"version": "2.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "ecc-universal",
"version": "1.9.0",
"version": "2.0.0",
"hasInstallScript": true,
"license": "MIT",
"dependencies": {

View File

@@ -1,7 +1,7 @@
{
"name": "ecc-universal",
"version": "1.9.0",
"description": "Complete collection of battle-tested Claude Code configs — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use by an Anthropic hackathon winner",
"version": "2.0.0",
"description": "Cross-harness AI agent performance system for Claude Code, Codex, OpenCode, Cursor, and Hermes workflows — agents, skills, hooks, commands, and rules evolved through daily production use",
"keywords": [
"claude-code",
"ai",

View File

@@ -12,6 +12,7 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
AGENTS_FILE="$CODEX_HOME/AGENTS.md"
PROMPTS_DIR="$CODEX_HOME/prompts"
SKILLS_DIR="$CODEX_HOME/skills"
ROLE_DIR="$CODEX_HOME/agents"
HOOKS_DIR_EXPECT="${ECC_GLOBAL_HOOKS_DIR:-$CODEX_HOME/git-hooks}"
failures=0
@@ -89,12 +90,14 @@ fi
if [[ -f "$CONFIG_FILE" ]]; then
check_config_pattern '^multi_agent\s*=\s*true' "multi_agent is enabled"
check_config_absent '^\s*collab\s*=' "deprecated collab flag is absent"
check_config_pattern '^persistent_instructions\s*=' "persistent_instructions is configured"
check_config_pattern '^profile\s*=\s*"full-access"' "default profile is full-access"
check_config_pattern '^\[profiles\.full-access\]' "profiles.full-access exists"
check_config_pattern '^\[profiles\.strict\]' "profiles.strict exists"
check_config_pattern '^\[profiles\.yolo\]' "profiles.yolo exists"
for section in \
'mcp_servers.github' \
'mcp_servers.exa' \
'mcp_servers.memory' \
'mcp_servers.sequential-thinking' \
'mcp_servers.context7-mcp'
@@ -152,6 +155,26 @@ else
fail "Skills directory missing ($SKILLS_DIR)"
fi
if [[ -d "$ROLE_DIR" ]]; then
missing_roles=0
for role_file in explorer.toml reviewer.toml docs-researcher.toml; do
  if [[ ! -f "$ROLE_DIR/$role_file" ]]; then
    printf ' - missing agent role config: %s\n' "$role_file"
    missing_roles=$((missing_roles + 1))
  fi
done
if [[ "$missing_roles" -eq 0 ]]; then
ok "Global Codex agent role configs are present"
else
fail "$missing_roles required agent role configs are missing"
fi
else
fail "Agent role config directory missing ($ROLE_DIR)"
fi
if [[ -f "$PROMPTS_DIR/ecc-prompts-manifest.txt" ]]; then
ok "Command prompts manifest exists"
else

View File

@@ -0,0 +1,61 @@
#!/usr/bin/env node
/**
* Security reminder wrapper for run-with-flags compatibility.
*
* The original hook logic lives in security-reminder.py. This wrapper keeps
* the hook on the approved Node-based execution path while preserving the
* existing Python implementation.
*/
'use strict';
const path = require('path');
const { spawnSync } = require('child_process');
const MAX_STDIN = 1024 * 1024;
let raw = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (raw.length < MAX_STDIN) {
raw += chunk.substring(0, MAX_STDIN - raw.length);
}
});
process.stdin.on('end', () => {
const scriptPath = path.join(__dirname, 'security-reminder.py');
const pythonCandidates = ['python3', 'python'];
let result;
for (const pythonBin of pythonCandidates) {
result = spawnSync(pythonBin, [scriptPath], {
input: raw,
encoding: 'utf8',
env: process.env,
cwd: process.cwd(),
timeout: 5000,
});
if (result.error && result.error.code === 'ENOENT') {
continue;
}
break;
}
if (!result || (result.error && result.error.code === 'ENOENT')) {
process.stderr.write('[SecurityReminder] python3/python not found. Skipping security reminder hook.\n');
process.stdout.write(raw);
process.exit(0);
}
if (result.error) {
process.stderr.write(`[SecurityReminder] Hook failed to run: ${result.error.message}\n`);
process.stdout.write(raw);
process.exit(0);
}
if (result.stdout) process.stdout.write(result.stdout);
if (result.stderr) process.stderr.write(result.stderr);
process.exit(Number.isInteger(result.status) ? result.status : 0);
});

View File

@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
Security Reminder Hook for Claude Code
Checks for security patterns in file edits and warns about potential vulnerabilities.
Ported from security-guidance plugin (David Dworken, Anthropic).
"""
import json
import os
import random
import sys
from datetime import datetime
SECURITY_PATTERNS = [
{
"ruleName": "github_actions_workflow",
"path_check": lambda path: ".github/workflows/" in path
and (path.endswith(".yml") or path.endswith(".yaml")),
"reminder": "You are editing a GitHub Actions workflow file. Never use untrusted input directly in run: commands. Use env: variables with proper quoting instead. See: https://github.blog/security/vulnerability-research/how-to-catch-github-actions-workflow-injections-before-attackers-do/",
},
{
"ruleName": "child_process_exec",
"substrings": ["child_process.exec"],
"reminder": "Security Warning: child_process exec can lead to command injection. Use execFile instead which does not invoke a shell.",
},
{
"ruleName": "new_function_injection",
"substrings": ["new Function"],
"reminder": "Security Warning: new Function with dynamic strings can lead to code injection. Consider alternatives.",
},
{
"ruleName": "eval_injection",
"substrings": ["eval("],
"reminder": "Security Warning: eval executes arbitrary code. Use JSON.parse for data or alternative patterns.",
},
{
"ruleName": "document_write_xss",
"substrings": ["document.write"],
"reminder": "Security Warning: document.write can be exploited for XSS. Use DOM manipulation methods instead.",
},
{
"ruleName": "innerHTML_xss",
"substrings": [".innerHTML =", ".innerHTML="],
"reminder": "Security Warning: innerHTML with untrusted content leads to XSS. Use textContent or sanitize with DOMPurify.",
},
{
"ruleName": "pickle_deserialization",
"substrings": ["pickle"],
"reminder": "Security Warning: pickle with untrusted content can lead to arbitrary code execution. Use JSON instead.",
},
{
"ruleName": "os_system_injection",
"substrings": ["os.system", "from os import system"],
"reminder": "Security Warning: os.system should only use static arguments. Use subprocess.run with a list of arguments instead.",
},
]
def get_state_file(session_id):
return os.path.expanduser(f"~/.claude/security_warnings_state_{session_id}.json")
def cleanup_old_state_files():
try:
state_dir = os.path.expanduser("~/.claude")
if not os.path.exists(state_dir):
return
cutoff = datetime.now().timestamp() - (30 * 24 * 60 * 60)
for fn in os.listdir(state_dir):
if fn.startswith("security_warnings_state_") and fn.endswith(".json"):
fp = os.path.join(state_dir, fn)
try:
if os.path.getmtime(fp) < cutoff:
os.remove(fp)
except (OSError, IOError):
pass
except Exception:
pass
def load_state(session_id):
sf = get_state_file(session_id)
if os.path.exists(sf):
try:
with open(sf, "r") as f:
return set(json.load(f))
except (json.JSONDecodeError, IOError):
return set()
return set()
def save_state(session_id, shown):
sf = get_state_file(session_id)
try:
os.makedirs(os.path.dirname(sf), exist_ok=True)
with open(sf, "w") as f:
json.dump(list(shown), f)
except IOError:
pass
def check_patterns(file_path, content):
norm = file_path.lstrip("/")
for p in SECURITY_PATTERNS:
if "path_check" in p and p["path_check"](norm):
return p["ruleName"], p["reminder"]
if "substrings" in p and content:
for sub in p["substrings"]:
if sub in content:
return p["ruleName"], p["reminder"]
return None, None
def extract_content(tool_name, tool_input):
if tool_name == "Write":
return tool_input.get("content", "")
elif tool_name == "Edit":
return tool_input.get("new_string", "")
elif tool_name == "MultiEdit":
edits = tool_input.get("edits", [])
return " ".join(e.get("new_string", "") for e in edits) if edits else ""
return ""
def main():
if os.environ.get("ENABLE_SECURITY_REMINDER", "1") == "0":
sys.exit(0)
if random.random() < 0.1:
cleanup_old_state_files()
try:
data = json.loads(sys.stdin.read())
except json.JSONDecodeError:
sys.exit(0)
session_id = data.get("session_id", "default")
tool_name = data.get("tool_name", "")
tool_input = data.get("tool_input", {})
if tool_name not in ["Edit", "Write", "MultiEdit"]:
sys.exit(0)
file_path = tool_input.get("file_path", "")
if not file_path:
sys.exit(0)
content = extract_content(tool_name, tool_input)
rule_name, reminder = check_patterns(file_path, content)
if rule_name and reminder:
key = f"{file_path}-{rule_name}"
shown = load_state(session_id)
if key not in shown:
shown.add(key)
save_state(session_id, shown)
print(reminder, file=sys.stderr)
sys.exit(2)
sys.exit(0)
if __name__ == "__main__":
main()

View File

@@ -28,6 +28,8 @@ CONFIG_FILE="$CODEX_HOME/config.toml"
AGENTS_FILE="$CODEX_HOME/AGENTS.md"
AGENTS_ROOT_SRC="$REPO_ROOT/AGENTS.md"
AGENTS_CODEX_SUPP_SRC="$REPO_ROOT/.codex/AGENTS.md"
ROLE_CONFIG_SRC="$REPO_ROOT/.codex/agents"
ROLE_CONFIG_DEST="$CODEX_HOME/agents"
SKILLS_SRC="$REPO_ROOT/.agents/skills"
SKILLS_DEST="$CODEX_HOME/skills"
PROMPTS_SRC="$REPO_ROOT/commands"
@@ -131,6 +133,7 @@ MCP_MERGE_SCRIPT="$REPO_ROOT/scripts/codex/merge-mcp-config.js"
require_path "$REPO_ROOT/AGENTS.md" "ECC AGENTS.md"
require_path "$AGENTS_CODEX_SUPP_SRC" "ECC Codex AGENTS supplement"
require_path "$ROLE_CONFIG_SRC" "ECC Codex agent config directory"
require_path "$SKILLS_SRC" "ECC skills directory"
require_path "$PROMPTS_SRC" "ECC commands directory"
require_path "$HOOKS_INSTALLER" "ECC global git hooks installer"
@@ -245,6 +248,17 @@ for skill_dir in "$SKILLS_SRC"/*; do
skills_count=$((skills_count + 1))
done
log "Syncing ECC Codex agent role configs"
run_or_echo "mkdir -p \"$ROLE_CONFIG_DEST\""
role_count=0
for role_file in "$ROLE_CONFIG_SRC"/*.toml; do
[[ -f "$role_file" ]] || continue
role_name="$(basename "$role_file")"
dest="$ROLE_CONFIG_DEST/$role_name"
run_or_echo "cp \"$role_file\" \"$dest\""
role_count=$((role_count + 1))
done
log "Generating prompt files from ECC commands"
run_or_echo "mkdir -p \"$PROMPTS_DEST\""
manifest="$PROMPTS_DEST/ecc-prompts-manifest.txt"
@@ -470,21 +484,22 @@ fi
log "Installing global git safety hooks"
if [[ "$MODE" == "dry-run" ]]; then
"$HOOKS_INSTALLER" --dry-run
bash "$HOOKS_INSTALLER" --dry-run
else
"$HOOKS_INSTALLER"
bash "$HOOKS_INSTALLER"
fi
log "Running global regression sanity check"
if [[ "$MODE" == "dry-run" ]]; then
printf '[dry-run] %s\n' "$SANITY_CHECKER"
printf '[dry-run] bash %s\n' "$SANITY_CHECKER"
else
"$SANITY_CHECKER"
bash "$SANITY_CHECKER"
fi
log "Sync complete"
log "Backup saved at: $BACKUP_DIR"
log "Skills synced: $skills_count"
log "Agent role configs synced: $role_count"
log "Prompts generated: $((prompt_count + extension_count)) (commands: $prompt_count, extensions: $extension_count)"
if [[ "$MODE" == "apply" ]]; then

View File

@@ -75,6 +75,63 @@ Delete and rewrite any of these:
- mix insight with updates, not diary filler
- use clear section labels and easy skim structure
## Tone Calibration
Match tone to context:
| Context | Tone |
|---------|------|
| Technical content | Direct, opinionated, practical. Share what works from experience. |
| Personal/journey | Honest, reflective. No performative humility, no toxic positivity. |
| Security content | Urgent but not alarmist. Evidence-based. Show the vulnerability, then the fix. |
| Community updates | Grateful but not sycophantic. Numbers speak louder than adjectives. |
| Product launches | Matter-of-fact. Let features speak. No hype language. |
## Approved Voice Patterns
These patterns work well for developer and founder audiences:
- "here's exactly how..."
- "no fluff."
- "zero [X]. just [Y]."
- "the short version:"
- Direct address: "you want X. here's X."
- Parenthetical asides for personality and self-awareness (use roughly once every 2-3 paragraphs)
## Platform-Specific Structure
### Technical Guides
- open with what the reader gets
- use code or terminal examples in every major section
- use "Pro tip:" callouts for non-obvious insights
- end with concrete takeaways as short bullets, not a summary paragraph
- link to resources at the bottom
### Essays / Opinion Pieces
- start with tension, contradiction, or a sharp observation
- keep one argument thread per section
- use examples that earn the opinion
### Newsletters
- keep the first screen strong
- mix insight with updates, not diary filler
- use clear section labels and easy skim structure
### LinkedIn
- proper capitalization
- short paragraphs (2-3 sentences max)
- first line is the hook (truncates at ~210 chars)
- professional framing: impact, lessons, takeaways
- 3-5 hashtags at the bottom only
### X/Twitter Threads
- each tweet must standalone (people see them individually)
- hook in first tweet, no "thread:" or "1/" prefix
- one point per tweet
- last tweet: CTA or punchline, not "follow for more"
- 4-7 tweets ideal length
- no links in tweet body (kills reach). Links in first reply.
## Quality Gate
Before delivering:
@@ -83,3 +140,7 @@ Before delivering:
- confirm the voice matches the supplied examples
- ensure every section adds new information
- check formatting for the intended platform
- zero banned patterns in entire document
- at least 2-3 parenthetical asides for personality (if voice calls for it)
- examples/evidence before explanation in each section
- specific numbers used where available

View File

@@ -608,3 +608,135 @@ These patterns compose well:
| Continuous Claude | AnandChowdhary | credit: @AnandChowdhary |
| NanoClaw | ECC | `/claw` command in this repo |
| Verification Loop | ECC | `skills/verification-loop/` in this repo |
---
## 7. Autonomous Agent Harness
**Transform Claude Code into a persistent autonomous agent system** with scheduled operations, persistent memory, computer use, and task queuing. Replaces standalone agent frameworks (Hermes, AutoGPT) by leveraging Claude Code's native capabilities.
### Architecture
```
┌──────────────────────────────────────────────────────────────┐
│ Claude Code Runtime │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────────┐ │
│ │ Crons │ │ Dispatch │ │ Memory │ │ Computer │ │
│ │ Schedule │ │ Remote │ │ Store │ │ Use │ │
│ │ Tasks │ │ Agents │ │ │ │ │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └──────┬──────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ ECC Skill + Agent Layer │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ MCP Server Layer │ │
│ │ memory github exa supabase browser-use │ │
│ └──────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
```
### When to Use the Harness (vs simpler loops)
- You want an agent that runs continuously or on a schedule
- Setting up automated workflows that trigger periodically
- Building a personal AI assistant that remembers context across sessions
- Replicating functionality from Hermes, AutoGPT, or similar frameworks
- Combining computer use with scheduled execution
### Core Component: Persistent Memory
Use Claude Code's built-in memory system enhanced with MCP memory server:
```
# Short-term: current session context
Use TodoWrite for in-session task tracking
# Medium-term: project memory files
Write to ~/.claude/projects/*/memory/ for cross-session recall
# Long-term: MCP knowledge graph
Use mcp__memory__create_entities for permanent structured data
Use mcp__memory__create_relations for relationship mapping
Use mcp__memory__add_observations for new facts about known entities
```
### Core Component: Scheduled Operations (Crons)
```bash
# Daily morning briefing
claude -p "Create a scheduled task: every weekday at 9am, review my GitHub notifications, open PRs, and calendar. Write a morning briefing to memory."
# Continuous learning
claude -p "Create a scheduled task: every Sunday at 8pm, extract patterns from this week's sessions and update the learned skills."
```
Useful cron patterns:
| Pattern | Schedule | Use Case |
|---------|----------|----------|
| Daily standup | `0 9 * * 1-5` | Review PRs, issues, deploy status |
| Weekly review | `0 10 * * 1` | Code quality metrics, test coverage |
| Hourly monitor | `0 * * * *` | Production health, error rate checks |
| Nightly build | `0 2 * * *` | Run full test suite, security scan |
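These schedules wire into a standard crontab; a hypothetical entry (assumes the `claude` CLI is installed and on PATH in the cron environment, and the project and log paths are illustrative):

```
# Weekday standup briefing at 9am
0 9 * * 1-5 cd ~/projects/myapp && claude -p "Review open PRs, issues, and deploy status; write a standup briefing to memory" >> ~/.claude/logs/standup.log 2>&1
```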
### Core Component: Remote Dispatch
Trigger Claude Code agents remotely for event-driven workflows:
```bash
# Trigger from CI/CD pipeline
claude -p "Build failed on main. Diagnose and fix."
# Trigger from webhook (GitHub webhook -> dispatch -> Claude agent -> fix -> PR)
```
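The CI/CD trigger above can be wired as a follow-up workflow that dispatches an agent when the main pipeline fails; a hypothetical GitHub Actions sketch (assumes a self-hosted runner with an authenticated `claude` CLI and a primary workflow named "ci"):

```yaml
# .github/workflows/auto-fix.yml (illustrative)
name: auto-fix-on-failure
on:
  workflow_run:
    workflows: ["ci"]
    types: [completed]
jobs:
  dispatch:
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: claude -p "Build failed on main. Diagnose and fix, then open a PR."
```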
### Core Component: Task Queue
Manage a persistent queue of tasks that survive session boundaries:
```markdown
# Task persistence via memory
Write task queue to ~/.claude/projects/*/memory/task-queue.md
## Active Tasks
- [ ] PR #123: Review and approve if CI green
- [ ] Monitor deploy: check /health every 30 min for 2 hours
- [ ] Research: Find 5 leads in AI tooling space
## Completed
- [x] Daily standup: reviewed 3 PRs, 2 issues
```
### Example Harness Workflows
**Autonomous PR Reviewer:**
```
Cron: every 30 min during work hours
1. Check for new PRs on watched repos
2. For each new PR: pull branch, run tests, review changes
3. Post review comments via GitHub MCP
4. Update memory with review status
```
**Personal Research Agent:**
```
Cron: daily at 6 AM
1. Check saved search queries in memory
2. Run Exa searches for each query
3. Summarize new findings, compare against yesterday
4. Write digest to memory
5. Flag high-priority items for morning review
```
### Harness Constraints
- Cron tasks run in isolated sessions (share context only through memory)
- Computer use requires explicit permission grants
- Remote dispatch may have rate limits (design crons with appropriate intervals)
- Keep memory files concise (archive old data rather than growing unbounded)
- Always verify that scheduled tasks completed successfully
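The archiving constraint can be enforced with a small maintenance pass over the queue file; a sketch with illustrative file names (the harness would point these at the real memory directory):

```bash
# Maintenance pass: move completed "- [x]" tasks from the queue into an archive.
queue=task-queue.md
archive=task-archive.md
printf -- '- [ ] PR #123: review and approve if CI green\n- [x] Daily standup: reviewed 3 PRs\n' > "$queue"
grep '^- \[x\]' "$queue" >> "$archive"                                  # completed -> archive
grep -v '^- \[x\]' "$queue" > "$queue.tmp" && mv "$queue.tmp" "$queue"  # queue keeps open tasks
cat "$queue"
```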

View File

@@ -1,87 +0,0 @@
# Benchmark — Performance Baseline & Regression Detection
## When to Use
- Before and after a PR to measure performance impact
- Setting up performance baselines for a project
- When users report "it feels slow"
- Before a launch — ensure you meet performance targets
- Comparing your stack against alternatives
## How It Works
### Mode 1: Page Performance
Measures real browser metrics via browser MCP:
```
1. Navigate to each target URL
2. Measure Core Web Vitals:
- LCP (Largest Contentful Paint) — target < 2.5s
- CLS (Cumulative Layout Shift) — target < 0.1
- INP (Interaction to Next Paint) — target < 200ms
- FCP (First Contentful Paint) — target < 1.8s
- TTFB (Time to First Byte) — target < 800ms
3. Measure resource sizes:
- Total page weight (target < 1MB)
- JS bundle size (target < 200KB gzipped)
- CSS size
- Image weight
- Third-party script weight
4. Count network requests
5. Check for render-blocking resources
```
### Mode 2: API Performance
Benchmarks API endpoints:
```
1. Hit each endpoint 100 times
2. Measure: p50, p95, p99 latency
3. Track: response size, status codes
4. Test under load: 10 concurrent requests
5. Compare against SLA targets
```
### Mode 3: Build Performance
Measures development feedback loop:
```
1. Cold build time
2. Hot reload time (HMR)
3. Test suite duration
4. TypeScript check time
5. Lint time
6. Docker build time
```
### Mode 4: Before/After Comparison
Run before and after a change to measure impact:
```
/benchmark baseline # saves current metrics
# ... make changes ...
/benchmark compare # compares against baseline
```
Output:
```
| Metric | Before | After | Delta | Verdict |
|--------|--------|-------|-------|---------|
| LCP | 1.2s | 1.4s | +200ms | ⚠ WARN |
| Bundle | 180KB | 175KB | -5KB | ✓ BETTER |
| Build | 12s | 14s | +2s | ⚠ WARN |
```
## Output
Stores baselines in `.ecc/benchmarks/` as JSON. Git-tracked so the team shares baselines.
## Integration
- CI: run `/benchmark compare` on every PR
- Pair with `/canary-watch` for post-deploy monitoring
- Pair with `/browser-qa` for full pre-ship checklist

View File

@@ -1,81 +0,0 @@
# Browser QA — Automated Visual Testing & Interaction
## When to Use
- After deploying a feature to staging/preview
- When you need to verify UI behavior across pages
- Before shipping — confirm layouts, forms, interactions actually work
- When reviewing PRs that touch frontend code
- Accessibility audits and responsive testing
## How It Works
Uses the browser automation MCP (claude-in-chrome, Playwright, or Puppeteer) to interact with live pages like a real user.
### Phase 1: Smoke Test
```
1. Navigate to target URL
2. Check for console errors (filter noise: analytics, third-party)
3. Verify no 4xx/5xx in network requests
4. Screenshot above-the-fold on desktop + mobile viewport
5. Check Core Web Vitals: LCP < 2.5s, CLS < 0.1, INP < 200ms
```
### Phase 2: Interaction Test
```
1. Click every nav link — verify no dead links
2. Submit forms with valid data — verify success state
3. Submit forms with invalid data — verify error state
4. Test auth flow: login → protected page → logout
5. Test critical user journeys (checkout, onboarding, search)
```
### Phase 3: Visual Regression
```
1. Screenshot key pages at 3 breakpoints (375px, 768px, 1440px)
2. Compare against baseline screenshots (if stored)
3. Flag layout shifts > 5px, missing elements, overflow
4. Check dark mode if applicable
```
### Phase 4: Accessibility
```
1. Run axe-core or equivalent on each page
2. Flag WCAG AA violations (contrast, labels, focus order)
3. Verify keyboard navigation works end-to-end
4. Check screen reader landmarks
```
## Output Format
```markdown
## QA Report — [URL] — [timestamp]
### Smoke Test
- Console errors: 0 critical, 2 warnings (analytics noise)
- Network: all 200/304, no failures
- Core Web Vitals: LCP 1.2s ✓, CLS 0.02 ✓, INP 89ms ✓
### Interactions
- [✓] Nav links: 12/12 working
- [✗] Contact form: missing error state for invalid email
- [✓] Auth flow: login/logout working
### Visual
- [✗] Hero section overflows on 375px viewport
- [✓] Dark mode: all pages consistent
### Accessibility
- 2 AA violations: missing alt text on hero image, low contrast on footer links
### Verdict: SHIP WITH FIXES (2 issues, 0 blockers)
```
## Integration
Works with any browser MCP:
- `mcp__claude-in-chrome__*` tools (preferred — uses your actual Chrome)
- Playwright via `mcp__browserbase__*`
- Direct Puppeteer scripts
Pair with `/canary-watch` for post-deploy monitoring.

View File

@@ -1,93 +0,0 @@
# Canary Watch — Post-Deploy Monitoring
## When to Use
- After deploying to production or staging
- After merging a risky PR
- When you want to verify a fix actually fixed it
- Continuous monitoring during a launch window
- After dependency upgrades
## How It Works
Monitors a deployed URL for regressions. Runs in a loop until stopped or until the watch window expires.
### What It Watches
```
1. HTTP Status — is the page returning 200?
2. Console Errors — new errors that weren't there before?
3. Network Failures — failed API calls, 5xx responses?
4. Performance — LCP/CLS/INP regression vs baseline?
5. Content — did key elements disappear? (h1, nav, footer, CTA)
6. API Health — are critical endpoints responding within SLA?
```
### Watch Modes
**Quick check** (default): single pass, report results
```
/canary-watch https://myapp.com
```
**Sustained watch**: check every N minutes for M hours
```
/canary-watch https://myapp.com --interval 5m --duration 2h
```
**Diff mode**: compare staging vs production
```
/canary-watch --compare https://staging.myapp.com https://myapp.com
```
### Alert Thresholds
```yaml
critical: # immediate alert
- HTTP status != 200
- Console error count > 5 (new errors only)
- LCP > 4s
- API endpoint returns 5xx
warning: # flag in report
- LCP increased > 500ms from baseline
- CLS > 0.1
- New console warnings
- Response time > 2x baseline
info: # log only
- Minor performance variance
- New network requests (third-party scripts added?)
```
### Notifications
When a critical threshold is crossed:
- Desktop notification (macOS/Linux)
- Optional: Slack/Discord webhook
- Log to `~/.claude/canary-watch.log`
## Output
```markdown
## Canary Report — myapp.com — 2026-03-23 03:15 PST
### Status: HEALTHY ✓
| Check | Result | Baseline | Delta |
|-------|--------|----------|-------|
| HTTP | 200 ✓ | 200 | — |
| Console errors | 0 ✓ | 0 | — |
| LCP | 1.8s ✓ | 1.6s | +200ms |
| CLS | 0.01 ✓ | 0.01 | — |
| API /health | 145ms ✓ | 120ms | +25ms |
### No regressions detected. Deploy is clean.
```
## Integration
Pair with:
- `/browser-qa` for pre-deploy verification
- Hooks: add as a PostToolUse hook on `git push` to auto-check after deploys
- CI: run in GitHub Actions after deploy step

View File

@@ -78,6 +78,69 @@ When asked for a campaign, return:
- optional CTA variants
- any missing inputs needed before publishing
## Content Types by Platform
### X/Twitter Content Mix
| Type | Frequency | Example |
|------|-----------|---------|
| Educational/tips | 3-4/week | "5 mistakes with [tool]", model selection, config tips |
| News/commentary | 2-3/week | CVE breakdowns, industry news, product launches |
| Milestones | 1-2/week | Star counts, test counts, download numbers |
| Personal/journey | 2-3/week | Builder life, founder diary, lessons learned |
| Ship logs | 2/week (weekends) | "sunday ship log: [list]" |
| Engagement bait | 1-2/week | "what's one thing you want [product] to do better?" |
### Engagement Bait Formats
- "what's your [X]?" (config sharing, stack sharing)
- "hot take: [contrarian opinion]" (disagreement drives engagement)
- "[thing] vs [thing]. go." (tribalism)
- "what am I missing?" (vulnerability + question)
### Viral Mechanics
- Large specific numbers: "1,184 malicious AI plugins"
- Security scares: "your AI has access to your SSH keys"
- Zero-X framing: "zero marketing budget. zero ads. just useful open source."
- Before/after comparisons: default vs configured
- Personal vulnerability: "ran the scan on my own config. grade B."
- Contrarian takes: "stop using [expensive thing] for everything"
### TikTok Script Structure
```
HOOK (0-3s): [visual/verbal pattern interrupt]
PROBLEM (3-10s): [what's broken, what people don't know]
SOLUTION (10-40s): [demo, walkthrough, the thing]
CTA (40-60s): [what to do next]
```
### YouTube Script Structure
- Hook (first 15s): Show the result, then explain how
- Intro (15-30s): Who you are, what this covers, why it matters
- Chapters (2-3 min each): One concept per chapter, new visual every 30 seconds max
- Outro: Clear CTA (star repo, subscribe, try product)
- Description: first 2 lines matter (above fold), include links, chapters
## Content Recycling Flow
```
YouTube (highest effort)
-> Newsletter (long-form written adaptation)
-> LinkedIn (professional 3-takeaway version)
-> X/Twitter (atomic tweet units)
-> TikTok (15-60s visual snippet)
```
Adapt, don't duplicate. Each platform version should feel native.
## Posting Schedule
| Slot | Time | Content Type |
|------|------|--------------|
| Morning | 7-8am | Primary: educational, technical, news |
| Midday | 11:30am-12:30pm | Engagement: question, hot take, QT |
| Evening | 5-6pm | Secondary: ship log, personal, lighter |
| Throughout day | - | 5-10 replies to relevant accounts |
## Quality Gate
Before delivering:
@@ -86,3 +149,8 @@ Before delivering:
- no generic hype language
- no duplicated copy across platforms unless requested
- the CTA matches the content and audience
- no hashtags on X (they signal inauthenticity on dev X)
- no links in X tweet body (kills reach)
- hook in first 7 words (X) or first line (LinkedIn)
- numbers are specific and sourced
- at least one parenthetical aside per post for personality

View File

@@ -1,76 +0,0 @@
# Design System — Generate & Audit Visual Systems
## When to Use
- Starting a new project that needs a design system
- Auditing an existing codebase for visual consistency
- Before a redesign — understand what you have
- When the UI looks "off" but you can't pinpoint why
- Reviewing PRs that touch styling
## How It Works
### Mode 1: Generate Design System
Analyzes your codebase and generates a cohesive design system:
```
1. Scan CSS/Tailwind/styled-components for existing patterns
2. Extract: colors, typography, spacing, border-radius, shadows, breakpoints
3. Research 3 competitor sites for inspiration (via browser MCP)
4. Propose a design token set (JSON + CSS custom properties)
5. Generate DESIGN.md with rationale for each decision
6. Create an interactive HTML preview page (self-contained, no deps)
```
Output: `DESIGN.md` + `design-tokens.json` + `design-preview.html`
### Mode 2: Visual Audit
Scores your UI across 10 dimensions (0-10 each):
```
1. Color consistency — are you using your palette or random hex values?
2. Typography hierarchy — clear h1 > h2 > h3 > body > caption?
3. Spacing rhythm — consistent scale (4px/8px/16px) or arbitrary?
4. Component consistency — do similar elements look similar?
5. Responsive behavior — fluid or broken at breakpoints?
6. Dark mode — complete or half-done?
7. Animation — purposeful or gratuitous?
8. Accessibility — contrast ratios, focus states, touch targets
9. Information density — cluttered or clean?
10. Polish — hover states, transitions, loading states, empty states
```
Each dimension gets a score, specific examples, and a fix with exact file:line.
### Mode 3: AI Slop Detection
Identifies generic AI-generated design patterns:
```
- Gratuitous gradients on everything
- Purple-to-blue defaults
- "Glass morphism" cards with no purpose
- Rounded corners on things that shouldn't be rounded
- Excessive animations on scroll
- Generic hero with centered text over stock gradient
- Sans-serif font stack with no personality
```
## Examples
**Generate for a SaaS app:**
```
/design-system generate --style minimal --palette earth-tones
```
**Audit existing UI:**
```
/design-system audit --url http://localhost:3000 --pages / /pricing /docs
```
**Check for AI slop:**
```
/design-system slop-check
```

View File

@@ -0,0 +1,55 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, or applications. Generates creative, polished code that avoids generic AI aesthetics.
---
This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.
The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.
## Design Thinking
Before coding, understand the context and commit to a BOLD aesthetic direction:
- **Purpose**: What problem does this interface solve? Who uses it?
- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc.
- **Constraints**: Technical requirements (framework, performance, accessibility).
- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?
**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work; the key is intentionality, not intensity.
Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail
## Frontend Aesthetics Guidelines
### Typography
Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics. Pair a distinctive display font with a refined body font.
### Color & Theme
Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
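A minimal token block in that spirit (names and values are illustrative, not prescriptive):

```css
:root {
  --ink: #1c1712;            /* dominant warm near-black */
  --paper: #f3eee4;          /* ground */
  --accent: #c8441f;         /* sharp accent, used sparingly */
  --font-display: "Fraunces", serif;
  --font-body: "Source Serif 4", serif;
}
.cta {
  background: var(--accent);
  color: var(--paper);
  font-family: var(--font-display);
}
```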
### Motion
Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
### Spatial Composition
Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
### Backgrounds & Visual Details
Create atmosphere and depth rather than defaulting to solid colors. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.
## Anti-Patterns (NEVER do these)
- Overused font families (Inter, Roboto, Arial, system fonts)
- Cliched color schemes (particularly purple gradients on white backgrounds)
- Predictable layouts and component patterns
- Cookie-cutter design that lacks context-specific character
- Converging on common choices (Space Grotesk) across generations
## Execution
Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details.
Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics.

skills/github-ops/SKILL.md (new file, 144 lines)
View File

@@ -0,0 +1,144 @@
---
name: github-ops
description: GitHub repository operations, automation, and management. Issue triage, PR management, CI/CD operations, release management, and security monitoring using the gh CLI. Use when the user wants to manage GitHub issues, PRs, CI status, releases, contributors, stale items, or any GitHub operational task beyond simple git commands.
origin: ECC
---
# GitHub Operations
Manage GitHub repositories with a focus on community health, CI reliability, and contributor experience.
## When to Activate
- Triaging issues (classifying, labeling, responding, deduplicating)
- Managing PRs (review status, CI checks, stale PRs, merge readiness)
- Debugging CI/CD failures
- Preparing releases and changelogs
- Monitoring Dependabot and security alerts
- Managing contributor experience on open-source projects
- User says "check GitHub", "triage issues", "review PRs", "merge", "release", "CI is broken"
## Tool Requirements
- **gh CLI** for all GitHub API operations
- Repository access configured via `gh auth login`
## Issue Triage
Classify each issue by type and priority:
**Types:** bug, feature-request, question, documentation, enhancement, duplicate, invalid, good-first-issue
**Priority:** critical (breaking/security), high (significant impact), medium (nice to have), low (cosmetic)
### Triage Workflow
1. Read the issue title, body, and comments
2. Check if it duplicates an existing issue (search by keywords)
3. Apply appropriate labels via `gh issue edit --add-label`
4. For questions: draft and post a helpful response
5. For bugs needing more info: ask for reproduction steps
6. For good first issues: add `good-first-issue` label
7. For duplicates: comment with link to original, add `duplicate` label
```bash
# Search for potential duplicates
gh issue list --search "keyword" --state all --limit 20
# Add labels
gh issue edit <number> --add-label "bug,high-priority"
# Comment on issue
gh issue comment <number> --body "Thanks for reporting. Could you share reproduction steps?"
```
## PR Management
### Review Checklist
1. Check CI status: `gh pr checks <number>`
2. Check if mergeable: `gh pr view <number> --json mergeable`
3. Check age and last activity
4. Flag PRs >5 days with no review
5. For community PRs: ensure they have tests and follow conventions
### Stale Policy
- Issues with no activity in 14+ days: add `stale` label, comment asking for update
- PRs with no activity in 7+ days: comment asking if still active
- Auto-close stale issues after 30 days with no response (add `closed-stale` label)
```bash
# Find stale issues (no activity in 14+ days)
gh issue list --label "stale" --state open
# Find PRs with no recent activity
gh pr list --json number,title,updatedAt --jq '.[] | select(.updatedAt < "2026-03-01")'
```
## CI/CD Operations
When CI fails:
1. Check the workflow run: `gh run view <run-id> --log-failed`
2. Identify the failing step
3. Check if it is a flaky test vs real failure
4. For real failures: identify the root cause and suggest a fix
5. For flaky tests: note the pattern for future investigation
```bash
# List recent failed runs
gh run list --status failure --limit 10
# View failed run logs
gh run view <run-id> --log-failed
# Re-run a failed workflow
gh run rerun <run-id> --failed
```
## Release Management
When preparing a release:
1. Check all CI is green on main
2. Review unreleased changes: `gh pr list --state merged --base main`
3. Generate changelog from PR titles
4. Create release: `gh release create`
```bash
# List merged PRs since last release
gh pr list --state merged --base main --search "merged:>2026-03-01"
# Create a release
gh release create v1.2.0 --title "v1.2.0" --generate-notes
# Create a pre-release
gh release create v1.3.0-rc1 --prerelease --title "v1.3.0 Release Candidate 1"
```
## Security Monitoring
```bash
# Check Dependabot alerts
gh api repos/{owner}/{repo}/dependabot/alerts --jq '.[].security_advisory.summary'
# Check secret scanning alerts
gh api repos/{owner}/{repo}/secret-scanning/alerts --jq '.[].state'
# Review and auto-merge safe dependency bumps
gh pr list --label "dependencies" --json number,title
```
- Review and auto-merge safe dependency bumps
- Flag any critical/high severity alerts immediately
- Check for new Dependabot alerts weekly at minimum
## Quality Gate
Before completing any GitHub operations task:
- all issues triaged have appropriate labels
- no PRs older than 7 days without a review or comment
- CI failures have been investigated (not just re-run)
- releases include accurate changelogs
- security alerts are acknowledged and tracked

View File

@@ -0,0 +1,9 @@
# Hermes Generated Skills
This directory is reserved for skills distilled from Hermes session data, repeated Telegram asks, and self-improvement runs.
Rules:
- keep skills specific and evidence-backed
- prefer reusable operational patterns over one-off tasks
- mirror from `$HERMES_HOME/skills/generated/` only after the pattern is stable
- do not overwrite unrelated ECC skills

View File

@@ -0,0 +1,45 @@
---
name: hermes-generated
description: Index skill for Hermes-generated workflow packs. Use when the task clearly belongs to a Hermes-derived operator pattern and you need to choose the right generated subskill before acting.
origin: ECC
---
# Hermes Generated
This skill is the entrypoint for the Hermes-generated skill bucket.
Use it when the user is operating in a Hermes-style workflow and the task appears to map to one of the generated operator packs in this directory.
## Purpose
- route the request to the right Hermes-generated subskill
- avoid inventing a new workflow when a stable Hermes pattern already exists
- keep Hermes-derived operational behavior discoverable in the public ECC surface
## Available Subskills
- `automation-audit-ops`
- `content-crosspost-ops`
- `ecc-tools-cost-audit`
- `email-ops`
- `finance-billing-ops`
- `knowledge-ops`
- `messages-ops`
- `research-ops`
- `terminal-ops`
## Routing Guidance
Choose the narrowest matching subskill first:
- content and distribution -> `content-crosspost-ops`
- ECC Tools burn, PR recursion, quota bypass, or premium-model leakage -> `ecc-tools-cost-audit`
- inbox, triage, cleanup, and email sequencing -> `email-ops`
- billing, revenue, and payments -> `finance-billing-ops`
- memory and prior-session recovery -> `knowledge-ops`
- Telegram, chat, or messaging triage -> `messages-ops`
- research and source-backed investigation -> `research-ops`
- shell execution and terminal workflows -> `terminal-ops`
- cron, overlap, or automation hygiene -> `automation-audit-ops`
If no generated subskill fits cleanly, fall back to the closest standard ECC skill instead of forcing a Hermes-specific pattern.
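The routing guidance above can be sketched as a first-match keyword table, ordered narrowest-first with an explicit fallback. The trigger keywords are illustrative, not an exhaustive intent model:

```python
# First-match routing table mirroring the guidance above; keywords are examples only.
ROUTES = [
    (("crosspost", "linkedin", "repurpose"), "content-crosspost-ops"),
    (("burn", "quota", "pr recursion", "premium"), "ecc-tools-cost-audit"),
    (("inbox", "email", "mailbox"), "email-ops"),
    (("billing", "revenue", "payment"), "finance-billing-ops"),
    (("remember", "memory", "prior session"), "knowledge-ops"),
    (("telegram", "chat", "messaging"), "messages-ops"),
    (("research", "investigate", "sources"), "research-ops"),
    (("shell", "terminal", "command"), "terminal-ops"),
    (("cron", "automation", "overlap"), "automation-audit-ops"),
]

def route(request, fallback="closest standard ECC skill"):
    text = request.lower()
    for keywords, subskill in ROUTES:
        if any(k in text for k in keywords):
            return subskill
    return fallback  # no generated subskill fits cleanly

print(route("audit my cron overlap"))  # automation-audit-ops
print(route("write a haiku"))          # falls back to a standard ECC skill
```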

View File

@@ -0,0 +1,88 @@
---
name: automation-audit-ops
description: Evidence-first automation audit workflow for Hermes. Use when auditing cron jobs, tooling, connectors, auth wiring, MCP surfaces, local-system parity, or automation overlap before fixing or pruning anything.
metadata:
hermes:
tags: [generated, automation, cron, tooling, workflow, audit, verification, parity, auth, mcp]
---
# Automation Audit Ops
Use this when the user asks what automations are live, which jobs are broken, where overlap exists, or what tooling and connectors Hermes is actually benefiting from right now.
## Skill Stack
Pull these skills into the workflow when relevant:
- `continuous-agent-loop` for multi-step audits that cross cron, tooling, and config surfaces
- `agentic-engineering` for decomposing the audit into verifiable units with a clear done condition
- `terminal-ops` when the audit turns into a concrete local fix or command run
- `research-ops` when local inventory has to be compared against current docs, platform support, or missing-capability claims
- `search-first` before inventing a new helper, wrapper, or inventory path
- `eval-harness` mindset for exact failure signatures, proof paths, and post-fix verification
## When To Use
- user asks `what automations do i have set up?`
- user asks to audit cron overlap, redundancy, broken jobs, or job coverage
- user asks which tools, connectors, auth surfaces, MCP servers, or wrappers are actually live
- user asks what is ported from another local agentic system, what is still missing, or what should be wired next
- the task spans both local inventory and a small number of high-impact fixes
## Workflow
1. Start from prepared evidence when it exists:
- read the prepared failure summary, request patterns, docs scan, and skill-sync notes before opening raw logs
- if the prepared failure summary says there is no dominant cluster, do not spray-read raw log bundles anyway
2. Inventory the live local surface before theorizing:
- for cron work, inspect `$HERMES_HOME/cron/jobs.json`, the latest relevant `$HERMES_HOME/cron/output/<job>/...` files, and matching `$HERMES_HOME/cron/delivery-state/*.json`
- for tooling or connector parity, inspect live config, plugins, helper scripts, wrappers, auth files, and verification logs before claiming something is missing
- when the user names another local system or repo, inspect that live source too before saying Hermes lacks or already has the capability
3. Classify each finding by failure class:
- active broken logic
- auth or provider outage
- stale error that has not rerun yet
- MCP or schema-level infrastructure break
- overlap or redundancy
- missing or unported capability
4. Classify each surfaced integration by live state:
- configured
- authenticated
- recently verified
- stale or broken
- missing
5. Answer inventory questions with proof, not memory:
- group automations by surface such as cron, messaging, content, GitHub, research, billing, or health
- include schedules, current status, and the proof path behind each claim
- for parity audits, separate `already ported`, `available but not wired`, and `missing entirely`
- for overlap audits, end with `keep`, `merge`, `cut`, or `fix next`, plus the evidence path behind each call
6. Freeze audit scope before changing anything:
- if the user asked for an inventory, audit table, comparison, or otherwise kept the task read-only, stay read-only until the user explicitly asks for fixes
- collect evidence with config reads, logs, status files, and non-writing proving steps first
- do not rewrite cron, config, wrappers, or auth surfaces just because a likely fix is visible
7. Fix only the highest-impact proved issues:
- keep the pass bounded to one to three changes
- prefer durable config, instruction, or helper-script fixes over one-off replies
- if the prepared evidence shows a failure happened before a later config change, record that as stale-runtime or stale-status risk instead of rewriting the config blindly
8. Verify after any change:
- rerun the smallest proving step
- regenerate any affected context artifacts
- record exact remaining risks when runtime state is outside the scope of the current pass
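The failure-class split in steps 3 and 4 can be sketched as a classifier over a jobs inventory. The schema here (`name`, `last_status`, `last_run`, `last_config_change`) is an assumption; check the real shape of `$HERMES_HOME/cron/jobs.json` before relying on it:

```python
# Sketch: do not treat last_status=error as a live bug if the job has not
# rerun since a later config change (the stale-status pitfall below).
def classify(job):
    if job["last_status"] == "ok":
        return "healthy"
    if job["last_run"] < job.get("last_config_change", ""):
        return "stale-status: rerun before trusting last_status=error"
    return "active broken logic (or provider/auth outage): investigate"

# Hypothetical inventory; ISO date strings compare correctly as strings.
jobs = [
    {"name": "daily-digest", "last_status": "ok", "last_run": "2026-03-31"},
    {"name": "sync-leads", "last_status": "error", "last_run": "2026-03-20",
     "last_config_change": "2026-03-25"},
    {"name": "billing-pull", "last_status": "error", "last_run": "2026-03-30",
     "last_config_change": "2026-03-25"},
]
for job in jobs:
    print(job["name"], "->", classify(job))
```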
## Pitfalls
- do not treat `last_status=error` as proof of a current bug if the job has not rerun since recovery
- do not conflate a provider outage or MCP schema error with job-specific prompt logic
- do not claim a tool or connector is live just because a skill references it
- do not treat `present in config` as the same thing as `authenticated` or `recently verified`
- do not say Hermes is missing a capability before comparing the named local system or repo the user pointed at
- do not answer `what automations do i have set up?` from memory without reading the current local inventory
- do not open broad raw log bundles when the prepared summary already says there is no dominant cluster
- do not turn an audit-only inventory pass into config or cron edits unless the user expands scope
- do not fix low-value redundancy before resolving the concrete broken path the user actually asked about
## Verification
- each important claim points to a live file, log, config entry, or exact failure signature
- each surfaced integration is labeled `configured`, `authenticated`, `recently verified`, `stale/broken`, or `missing`
- each fix names the proving command, regenerated artifact, or re-read output that confirmed it
- remaining risks clearly distinguish stale status, runtime drift, and unresolved infrastructure blockers

View File

@@ -0,0 +1,117 @@
---
name: content-crosspost-ops
description: Evidence-first crossposting workflow for Hermes. Use when adapting posts, threads, demos, videos, or articles across requested social destinations while keeping per-platform copy distinct and verified.
metadata:
hermes:
tags: [generated, content, crosspost, workflow, verification]
---
# Content Crosspost Ops
Use this when the user wants Hermes to crosspost or repurpose content across multiple platforms, especially from Telegram-driven publishing requests.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `content-engine` for platform-native rewrites
- `crosspost` for sequencing and destination-specific adaptation
- `article-writing` when the source asset is long-form
- `video-editing` or `fal-ai-media` when the post should lead with a clip, frame, or visual
- `continuous-agent-loop` when the task spans capability checks, multi-platform execution, and verification
- `search-first` before claiming a platform or API supports a format
- `eval-harness` mindset for publish verification and status reporting
## When To Use
- user says `crosspost`, `post everywhere`, `put this on linkedin too`, or similar
- the source asset is an X post/thread, quote tweet, article, demo video, screenshot, or YouTube post
- the destination is a community thread or showcase channel like Discord's `built-with-claude`, or another destination whose live support must be checked first
- the user asks whether a new destination or post type is supported
## Support Source Of Truth
Treat these live sources as authoritative for destination support:
- `$HERMES_WORKSPACE/content/postbridge_publish.py`
- `$HERMES_WORKSPACE/content/crosspost-verification-latest.md`
Examples in this skill are destination patterns, not a promise that every destination is live right now. If a destination has no current wrapper path or no recent verified publish record, report `unverified capability` or `blocked by unsupported capability` until checked.
## Capability Matrix
Resolve these as separate questions before you answer or execute:
- is the destination itself currently publishable or only `unverified capability`
- does the asked transport support it right now, for example PostBridge
- does another live path exist, such as browser publish or a different verified API
- what is the active blocker: unsupported transport, unsupported destination, auth, MFA, missing asset, or approval
Do not flatten these into one label. `PostBridge unsupported` can still mean `browser path available`, and `browser blocked on auth` does not mean the destination is fully unsupported.
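The matrix above can be sketched as separate booleans that resolve to the skill's status vocabulary instead of one flattened label. The function and its inputs are illustrative, not a live support matrix:

```python
# Keep destination support, transport support, and alternate paths separate.
def destination_status(destination_verified, transport_supported,
                       alternate_path, blocker=None):
    if not destination_verified:
        return "unverified capability"
    if transport_supported:
        return "publishable via asked transport"
    if alternate_path:
        return f"asked transport unsupported, {alternate_path} available"
    return f"blocked by unsupported capability ({blocker or 'no live path'})"

# PostBridge cannot reach the destination, but a browser path is live:
print(destination_status(True, False, "browser path"))
# No current wrapper path and no recent verified publish record:
print(destination_status(False, False, None))
```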
## Workflow
1. Read the real source asset and any destination rules first. Do not draft from memory.
- if the user pasted thread requirements, comply with those requirements before drafting
2. If the request depends on platform capability, API support, or quota behavior, verify it before answering.
- if the user asks whether PostBridge can handle a destination or format, inspect the real wrapper, configs, or recent publish logs before promising support
- treat `$HERMES_WORKSPACE/content/postbridge_publish.py` plus recent verification records as the source of truth for current support
- separate the destination question from the transport question, and answer both
- if there is no current wrapper path or recent proof, report `unverified capability` before drafting
- if PostBridge is unsupported but a browser path exists, say `PostBridge unsupported, browser path available` instead of flattening the whole destination to `unsupported`
- if the destination itself is unsupported, say `blocked by unsupported capability` and give the next viable path
- if the task started as a support question or `did you do it?`, settle that capability or verification answer before drafting or scheduling other destinations
- if a budget warning is active or there are 3 or fewer tool turns left, stop expanding capability checks, answer from the current wrapper or verification proof first, then use at most one high-confidence proving step
3. Extract one core idea and a few specifics. Split multiple ideas into separate posts.
4. Write native variants instead of reusing the same copy:
- X: fast hook, minimal framing
- LinkedIn: strong first line, short paragraphs, explicit lesson or takeaway
- Threads, Bluesky, or other short-form social destinations: shorter, conversational, clearly distinct wording
- YouTube Community or other media-led destinations: lead with the result or takeaway, keep it media-friendly
5. Prefer native media when the user wants engagement:
- for quote tweets, articles, or external links, prefer screenshots or media over a bare outbound link when the platform rewards native assets
- if the user says the demo itself should lead, use the video or a frame from it instead of a generic screenshot
- for community showcase threads, prefer the strongest demo clip or screenshot pair the user explicitly pointed to
6. Use link placement intentionally:
- put external links in comments or replies when engagement is the goal and the platform supports it
- otherwise use a platform-native CTA such as `comment for link` only when it matches the user's instruction
7. Resolve account and auth blockers early for browser-only destinations:
- for Discord or other browser-only community shares, verify the active account and whether the destination is reachable before spending more turns on extra asset hunting or copy polish
- verify the active account before typing into a community or social composer
- if login is blocked by MFA or a missing verification code, use the checked-in helper path instead of ad hoc inline scripting and do at most one focused resend plus one fresh helper check
- if that still returns no matching code, stop and report `blocked on missing MFA code`
8. Execute in order:
- post the primary platform first
- stagger secondary destinations when requested, defaulting to 4 hours apart unless the user overrides it
- prefer PostBridge for supported platforms, browser flows only when required
- if the asked transport is unavailable but another approved live path exists, say that explicitly and take that path next instead of doing unrelated side work
- if the explicitly asked destination is unsupported or still unverified, report that exact state before moving to any secondary destination, and only continue with other requested platforms when the user asked for a multi-destination publish or approved the fallback order
9. Verify before claiming completion:
- capture a returned post ID, URL, API response, or an updated verification log
- when the user asks `did you do it?`, answer with the exact status for each platform: posted, queued, drafted, uploaded-only, blocked, or awaiting verification
- when the user interrupts with `are you working?`, answer in one sentence with the exact current step or blocker on the explicitly asked destination before more tool use
- if a budget warning is already active, do not make another exploratory call before replying to `did you do it?` or `are you working?`
- lead with the explicitly asked destination first, and include the proof or blocker on that destination before mentioning any secondary platform work
- if the asked destination is still unresolved, say that first instead of leading with successful side work on other platforms
- record every attempt with `$HERMES_WORKSPACE/content/log_crosspost.py` or `$HERMES_WORKSPACE/content/postbridge_publish.py`
- if the state is only drafted, uploaded-only, queued, blocked, or pending manual action, report that exact status
## Pitfalls
- do not post identical copy cross-platform
- do not assume platform support without checking
- do not ignore thread rules or platform-specific showcase requirements
- do not call a draft, composer state, or upload step `posted`
- do not keep searching unrelated systems after a login or MFA blocker is already the limiting step
- do not keep refining copy or looking for better assets once auth is the only blocker on a browser-only publish
- do not answer a support question with a guess when the wrapper, logs, or API response can settle it
- do not treat destination examples in this skill as a live support matrix
- do not collapse `transport unsupported` into `destination unsupported` when another live path still exists
- do not hide an unresolved asked destination behind progress on other supported destinations
- do not answer `did you do it?` by leading with successful secondary platforms while the explicitly asked destination is still unresolved
- do not keep scheduling or verifying secondary destinations after a status interrupt while the explicitly asked destination is still unresolved
- do not keep expanding capability checks after a budget warning when the current wrapper state or verification log already supports a precise status answer
- do not ignore the user's preference for screenshots or native media over raw links
## Verification
- `$HERMES_WORKSPACE/content/crosspost-verification-latest.md` reflects the latest attempts
- each destination has an ID, URL, or explicit failure reason
- the copy and media logged match what was actually sent

View File

@@ -0,0 +1,104 @@
---
name: ecc-tools-cost-audit
description: Evidence-first ECC Tools burn and billing audit workflow. Use when investigating runaway PR creation, quota bypass, premium-model leakage, or GitHub App cost spikes in the ECC Tools repo.
origin: ECC
---
# ECC Tools Cost Audit
Use this when the user suspects the ECC Tools GitHub App is burning cost, over-creating PRs, bypassing usage limits, or routing free users into premium analysis paths.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `continuous-agent-loop` for scope freezes, recovery gates, and cost-aware tracing when audits are long or failure signatures repeat
- `terminal-ops` for repo-local inspection, narrow edits, and proving commands
- `finance-billing-ops` when customer-impact math has to be separated from repo behavior
- `agentic-engineering` for tracing entrypoints, queue paths, and fix sequencing
- `plankton-code-quality` for safer code changes and rerun behavior
- `eval-harness` mindset for exact root-cause evidence and post-fix verification
- `search-first` before inventing a new helper or abstraction
- `security-review` when auth, secrets, usage gates, or entitlement paths are touched
## When To Use
- user says ECC Tools burn rate, PR recursion, over-created PRs, usage-limit bypass, or expensive model routing
- the task is an audit or fix in `$PRIMARY_REPOS_ROOT/ECC/skill-creator-app`
- the answer depends on webhook handlers, queue workers, usage reservation, PR creation logic, or paid-gate enforcement
## Workflow
1. Freeze repo scope first:
- use `$PRIMARY_REPOS_ROOT/ECC/skill-creator-app`
- check branch and local diff before changing anything
2. Freeze audit mode before tracing:
- if the user asked for `report only`, `audit only`, `review only`, or explicitly said `do not modify code`, keep the pass read-only until the user changes scope
- gather evidence with reads, searches, git status/diff, and other non-writing proving commands first
- do not patch `src/index.ts`, run generators, install dependencies, or stage changes during an audit-only pass
3. Trace ingress before suggesting fixes:
- inspect webhook entrypoints in `src/index.ts`
- search every `ANALYSIS_QUEUE.send(...)` or equivalent enqueue
- map which triggers share a job type
4. Trace the queue consumer and its side effects:
- inspect `handleAnalysisQueue(...)` or the equivalent worker
- confirm whether queued analysis always ends in PR creation, file writes, or premium model calls
5. Audit PR multiplication:
- inspect PR helpers and branch naming
- check dedupe, branch skip logic, synchronize-event handling, and reuse of existing PRs
- treat app-generated branches such as `ecc-tools/*` or timestamped branches as red-flag evidence paths
6. Audit usage and billing truth:
- inspect rate-limit check and increment paths
- if quota is checked before enqueue but incremented only in the worker, mark it as a real race
- separate overrun risk, customer billing impact, and entitlement truth
7. Audit model routing:
- inspect analyzer `fastMode` or equivalent flags, free-vs-paid tier branching, and actual provider/model calls
- confirm whether free queued work can still hit Anthropic or another premium path when keys exist
8. Audit rerun safety and file updates:
- inspect file update helpers for existing-file `sha` handling or equivalent optimistic concurrency
- if reruns can spend analysis cost and then fail on PR or file creation, mark it as burn-with-broken-output
9. Fix in priority order only if the user asked for code changes:
- stop automatic PR multiplication first
- stop quota bypass second
- stop premium leakage third
- then close rerun/update safety gaps and missing entitlement gates
10. Answer status interrupts before more tracing:
- if the user asks `did you do it?`, `are you working?`, or the session is near the tool budget, reply from the current verified repo state before more searching
- lead with whether root causes are `found`, fixes are `changed locally`, `verified locally`, `pushed`, or still `blocked`
- if the asked burn path is still unresolved, say that before side findings or lower-priority issues
11. Verify with the smallest proving commands:
- rerun only the focused tests or typecheck that cover changed paths
- report `changed locally`, `verified locally`, `pushed`, `deployed`, or `blocked` exactly
## High-Signal Failure Patterns
### 1. One queue type for all triggers
If pushes, PR syncs, and manual audits all enqueue the same analyze job and the worker always calls the PR-creation path, analysis equals PR spam.
### 2. Post-enqueue usage increment
If usage is reserved only inside the worker, concurrent requests can all pass the front-door check and exceed quotas.
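The race can be sketched with a counter: a bare front-door check lets every concurrent request through, while an atomic reserve (check plus increment in one step) closes the gap. This is a hypothetical illustration, not the app's real usage code:

```python
import threading

# Pattern 2 sketch: check-only at the front door vs atomic reservation.
class Quota:
    def __init__(self, limit):
        self.limit, self.used = limit, 0
        self._lock = threading.Lock()

    def check_only(self):  # buggy: front-door check, increment happens later
        return self.used < self.limit

    def reserve(self):  # fix: atomic check-and-increment at enqueue time
        with self._lock:
            if self.used < self.limit:
                self.used += 1
                return True
            return False

quota = Quota(limit=1)
# Two concurrent requests both pass a bare check before any worker increments:
print([quota.check_only() for _ in range(2)])  # [True, True] -> both enqueued

# With reservation, only one request wins the single slot:
quota = Quota(limit=1)
print([quota.reserve() for _ in range(2)])  # [True, False]
```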
### 3. Free tier on premium model path
If free queued jobs still set `fastMode: false` or equivalent while premium provider keys exist, free users can burn premium spend.
### 4. App-generated branches re-entering the webhook
If `pull_request.synchronize` or similar runs on `ecc-tools/*` branches, the app can analyze its own output and recurse.
### 5. Update-without-sha reruns
If generated files are updated without passing the existing file `sha`, reruns can fail after the expensive work already happened.
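The fix for this pattern is to fetch the existing file's `sha` and include it in the update, as the GitHub contents API requires. A minimal sketch with a stand-in client, not a real SDK:

```python
# Pattern 5 sketch: pass the existing file's sha on update so reruns succeed
# instead of failing after the expensive analysis already ran.
def put_file(client, path, content):
    existing = client.get_contents(path)  # None if the file is new
    payload = {"path": path, "content": content}
    if existing is not None:
        payload["sha"] = existing["sha"]  # required by the contents API for updates
    return client.put_contents(payload)

# Hypothetical in-memory stand-in for the GitHub contents API:
class FakeClient:
    def __init__(self):
        self.files = {}
    def get_contents(self, path):
        return self.files.get(path)
    def put_contents(self, payload):
        if payload["path"] in self.files and "sha" not in payload:
            raise RuntimeError("422: sha required for update")
        self.files[payload["path"]] = {"sha": f"sha-{len(payload['content'])}"}
        return payload

client = FakeClient()
put_file(client, "SKILL.md", "v1")
result = put_file(client, "SKILL.md", "v2 longer")  # rerun succeeds: sha included
print("sha" in result)  # True
```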
## Pitfalls
- do not start with broad repo wandering, settle webhook -> queue -> worker path first
- do not mix customer billing inference with code-backed product truth
- do not mutate the repo during an audit-only or `do not modify code` pass
- do not claim burn is fixed until the narrow proving command was rerun
- do not push or deploy unless the user asked
- do not ignore existing local changes in the repo, work around them or stop if they conflict
- do not keep tracing lower-priority repo paths after a budget warning or status interrupt when the main root-cause state is already known
## Verification
- root causes cite exact file paths and code areas
- fixes are ordered by burn impact, not code neatness
- proving commands are named
- final status distinguishes local change, verification, push, and deployment

View File

@@ -0,0 +1,78 @@
---
name: email-ops
description: Evidence-first mailbox triage and sent-mail-safe reply workflow for Hermes. Use when organizing folders, drafting or sending through Himalaya, or verifying a message landed in Sent.
origin: Hermes
---
# Email Ops
Use this when the user wants Hermes to clean a mailbox, move messages between folders, draft or send replies, or prove a message landed in Sent.
## Prerequisites
Before using this workflow:
- install and configure the Himalaya CLI for the target mailbox accounts
- confirm the account's Sent folder name if it differs from `Sent`
## Skill Stack
Pull these companion skills into the workflow when relevant:
- `investor-outreach` when the email is investor, partner, or sponsor facing
- `continuous-agent-loop` when the task spans draft, approval, send, and Sent proof across multiple steps
- `search-first` before assuming a mail API, folder name, or CLI flag works
- `eval-harness` mindset for Sent-folder verification and exact status reporting
## When To Use
- user asks to triage inbox or trash, rescue important mail, or delete only obvious spam
- user asks to draft or send email and wants the message to appear in the mailbox's Sent folder
- user wants proof of which account, folder, or message id was used
## Workflow
1. Read the exact mailbox constraint first. If the user says `himalaya only` or forbids Apple Mail or `osascript`, stay inside Himalaya.
2. Resolve account and folder explicitly:
- check `himalaya account list`
- use `himalaya envelope list -a <account> -f <folder> ...`
- never misuse `-s INBOX` as a folder selector
3. For triage, classify before acting:
- preserve investor, partner, scheduling, and user-sent threads
- move only after the folder and account are confirmed
- permanently delete only obvious spam or messages the user explicitly authorized
4. For replies or new mail:
- read the full thread first
- choose the sender account that matches the project or recipient
- if the user has not explicitly approved the exact outgoing text, draft first and show the final copy before sending
- store approval state with `python3 $HERMES_HOME/scripts/email_guard.py queue ...`
- before any send, require `python3 $HERMES_HOME/scripts/email_guard.py approve --id <id>` plus `python3 $HERMES_HOME/scripts/email_guard.py can-send --id <id> --account <account> --recipient <recipient> --subject <subject>`
- before any send or reply, run `python3 $HERMES_HOME/scripts/email_guard.py history --recipient <recipient> --subject <subject snippet> --days 45 --json` to catch prior outbound and accidental repeats
- if the exact Himalaya send syntax is uncertain, check `himalaya ... --help` or a checked-in helper path before trying a new subcommand
- compose with `himalaya template send` or `himalaya message send` using file-backed content when possible, not ad hoc heredoc or inline Python raw-MIME wrappers
- avoid editor-driven flows unless required
5. If the request mentions attachments or images:
- resolve the exact absolute file path before broad mailbox searching
- keep the task on the local send-and-verify path instead of branching into unrelated web or repo exploration
- if Mail.app fallback is needed, pass the attachment paths after the body: `osascript $HERMES_HOME/scripts/send_mail.applescript "<sender>" "<recipient>" "<subject>" "<body>" "/absolute/file1" ...`
6. If the user wants an actual send and Himalaya fails with an IMAP append or save-copy error, a CLI dead end, or a raw-message parser crash, do not immediately resend:
- first verify whether the message already landed in Sent using the history check or `himalaya envelope list -a <account> -f Sent ...`
- if the failure was an invalid command path or a panic before Sent proof exists, report the exact blocked state such as `blocked on invalid himalaya send path` or `blocked on himalaya parser crash` and preserve the draft
- only if there is still no sent copy and the user explicitly approved the send may you fall back to `$HERMES_HOME/scripts/send_mail.applescript`
- if the user constrained the method to Himalaya only, report the exact blocked state instead of silently switching tools
7. During long-running mailbox work, keep the loop tight: draft -> approval -> send -> Sent proof.
- do the next irreversible step first, and do not branch into unrelated transports or searches while the current blocker is unresolved
- if a budget warning says 3 or fewer tool calls remain, stop broad exploration and spend the remaining calls on the highest-confidence execution or verification step, or report exact status and next action
8. If the user wants sent-mail evidence:
- verify via `himalaya envelope list -a <account> -f Sent ...` or the account's actual sent folder
- report the subject, recipient, account, and message id or date if available
9. Report exact status words: drafted, approval-pending, approved, sent, moved, flagged, deleted, blocked, awaiting verification.
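The draft -> approval -> send -> Sent-proof loop implied by steps 4 through 9 can be sketched as a small state machine: a message may only reach `sent` through approval, and must sit in `awaiting verification` until the Sent-folder check passes. The transition table is illustrative, not the real `email_guard.py` logic:

```python
# Allowed status transitions for outgoing mail; skipping approval is illegal.
ALLOWED = {
    "drafted": {"approval-pending"},
    "approval-pending": {"approved", "blocked"},
    "approved": {"awaiting verification", "blocked"},
    "awaiting verification": {"sent", "blocked"},
}

def advance(status, nxt):
    if nxt not in ALLOWED.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {nxt}")
    return nxt

s = "drafted"
for step in ("approval-pending", "approved", "awaiting verification", "sent"):
    s = advance(s, step)
print(s)  # sent -- reached only after approval and Sent-folder proof

try:
    advance("drafted", "sent")  # skipping approval must fail
except ValueError as e:
    print(e)
```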
## Pitfalls
- do not claim a message was sent without Sent-folder verification
- do not use the wrong account just because it is default
- do not delete uncertain business mail during cleanup
- do not switch tools after the user constrained the method
- do not invent `himalaya smtp` or other unverified send paths
- do not build last-minute raw-send wrappers when the checked commands and guard scripts already cover the path
- do not wander into unrelated searches while an attachment path or Sent verification is unresolved
- do not keep searching through the budget warning while the user is asking for a status update
## Verification
- the requested messages are present in the expected folder after the move
- sent mail appears in Sent for the correct account
- the final report includes counts or concrete message identifiers, not vague completion language

View File

@@ -0,0 +1,71 @@
---
name: finance-billing-ops
description: Evidence-first Stripe sales, billing incident, and team-pricing workflow for Hermes. Use when pulling sales, investigating duplicate charges or failed payments, checking whether team billing is real in code, or benchmarking pricing.
metadata:
hermes:
tags: [generated, finance, billing, stripe, pricing, workflow, verification]
---
# Finance Billing Ops
Use this when the user asks about Stripe sales, refunds, failed payments, duplicate charges, org or team billing behavior, pricing strategy, or whether the product logic matches the marketing copy.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `market-research` for competitor pricing, billing models, and sourced market context
- `deep-research` or `exa-search` when the answer depends on current public pricing or enforcement behavior
- `search-first` before inventing a Stripe, billing, or entitlement path
- `eval-harness` mindset for exact status reporting and separating proof from inference
- `agentic-engineering` and `plankton-code-quality` when the answer depends on checked-in ECC billing or entitlement code
## When To Use
- user says `pull in stripe data`, `any new sales`, `why was he charged`, `refund`, `duplicate charge`, `team billing`, `per seat`, or similar
- the question mixes revenue facts with product truth, for example whether team or org billing is actually implemented
- the user wants a pricing comparison against Greptile or similar competitors
## Workflow
1. Start with the freshest revenue evidence available:
- if a live Stripe pull exists, refresh it first
- otherwise read `$HERMES_WORKSPACE/business/stripe-sales.md` and `$HERMES_WORKSPACE/business/financial-status.md`
- always report the snapshot timestamp if the data is not live
2. Normalize the revenue picture before answering:
- separate paid sales, failed attempts, successful retries, `$0` invoices, refunds, disputes, and active subscriptions
- do not treat a transient decline as lost revenue if the same checkout later succeeded
- flag any duplicate subscriptions or repeated checkouts with exact timestamps
3. For a customer billing incident:
- identify the customer email, account login, subscription ids, checkout sessions, payment intents, and timing
- determine whether extra charges are duplicates, retries, or real extra entitlements
- if recommending refunds or consolidation, explain what product value the extra charges did or did not unlock
4. For org, seat, quota, or activation questions:
- inspect the checked-in billing and usage code before making claims
- verify checkout quantity handling, installation vs user usage keys, unit-count handling, seat registry or member sync, and quota stacking
- inspect the live pricing copy too, so you can call out mismatches between marketing and implementation
5. For pricing and competitor questions:
- use `market-research`, `deep-research`, or `exa-search` for current public evidence
- separate sourced facts from inference, and call out stale or incomplete pricing signals
6. Report in layers:
- current sales snapshot
- customer-impact diagnosis
- code-backed product truth
- recommendation or next action
7. If the user wants fixes after diagnosis:
- hand the implementation path to `agentic-engineering` and `plankton-code-quality`
- keep the evidence trail so copy changes, refunds, and code changes stay aligned
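The normalization in step 2 can be sketched by grouping payment events per checkout, so a decline that later succeeded counts as a retry rather than lost revenue. The event shape is hypothetical, not the Stripe API schema:

```python
# Group attempts by checkout; a failed attempt followed by a success on the
# same checkout is a transient decline (retried), not lost revenue.
def normalize(events):
    by_checkout = {}
    for e in events:
        by_checkout.setdefault(e["checkout"], []).append(e)
    paid, retried, lost = [], [], []
    for checkout, attempts in by_checkout.items():
        statuses = [a["status"] for a in attempts]
        if "succeeded" in statuses:
            paid.append(checkout)
            if "failed" in statuses:
                retried.append(checkout)
        else:
            lost.append(checkout)
    return {"paid": paid, "retried": retried, "lost": lost}

# Hypothetical sample events:
events = [
    {"checkout": "cs_1", "status": "failed"},
    {"checkout": "cs_1", "status": "succeeded"},  # retry succeeded
    {"checkout": "cs_2", "status": "failed"},     # never recovered
    {"checkout": "cs_3", "status": "succeeded"},
]
print(normalize(events))
```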
## Pitfalls
- do not claim `new sales` without saying whether the data is live or a saved snapshot
- do not mix failed attempts into net revenue if the payment later succeeded
- do not say `per seat` unless the code actually enforces seat behavior
- do not assume extra subscriptions increase quotas without verifying the entitlement path
- do not compare competitor pricing from memory when current public sources are available
## Verification
- the answer includes a snapshot timestamp or an explicit live-pull statement
- the answer separates fact, inference, and recommendation
- code-backed claims cite file paths or code areas
- customer-impact statements name the exact payment or subscription evidence they rely on

View File

@@ -0,0 +1,62 @@
---
name: knowledge-ops
description: Evidence-first memory and context retrieval workflow for Hermes. Use when the user asks what Hermes remembers, points to OpenClaw or Hermes memory, or wants context recovered from a compacted session without re-reading already loaded files.
origin: Hermes
---
# Knowledge Ops
Use this when the user asks Hermes to remember something, recover an older conversation, pull context from a compacted session, or find information that "should be in memory somewhere."
## Skill Stack
Pull these companion skills into the workflow when relevant:
- `continuous-learning-v2` for evidence-backed pattern capture and cross-session learning
- `continuous-agent-loop` when the lookup spans multiple stores, compaction summaries, and follow-up recovery steps
- `search-first` before inventing a new lookup path or assuming a store is empty
- `eval-harness` mindset for exact source attribution and negative-search reporting
## When To Use
- user says `do you remember`, `it was in memory`, `it was in openclaw`, `find the old session`, or similar
- the prompt contains a compaction summary or `[Files already read ... do NOT re-read these]`
- the prompt says `use the context summary above`, `proceed`, or otherwise hands off loaded context plus a concrete writing, editing, or response task
- the answer depends on Hermes workspace memory, Supermemory, session logs, or the historical knowledge base
## Workflow
1. Start from the evidence already in the prompt:
- treat compaction summaries and `do NOT re-read` markers as usable context
- if the prompt already says `use the context summary above` or asks you to proceed with writing, editing, or responding, continue from that loaded context first and search only the missing variables
- do not waste turns re-reading the same files unless the summary is clearly insufficient
2. Search in a fixed order before saying `not found`, unless the user already named the store:
- `mcp_supermemory_recall` with a targeted query
- grep `$HERMES_WORKSPACE/memory/`
- grep `$HERMES_WORKSPACE/` more broadly
- `session_search` for recent Hermes conversations
- grep `$KNOWLEDGE_BASE_ROOT/` and `$OPENCLAW_MEMORY_ROOT/` for historical context
3. If the user says the answer is in a specific memory store, pivot there immediately after the initial targeted recall:
- `openclaw memory` means favor `$KNOWLEDGE_BASE_ROOT/` and `$OPENCLAW_MEMORY_ROOT/`
- `not in this session` means stop digging through the current thread and move to persistent stores instead of re-reading current-session files
4. Keep the search narrow and evidence-led:
- reuse names, dates, channels, account names, or quoted phrases from the user
- search the most likely store first instead of spraying generic queries everywhere
5. Report findings with source evidence:
- give the file path, session id, date, or memory store
- distinguish between a direct hit, a likely match, and an inference
6. If nothing turns up, say which sources were checked and what to try next. Do not say `not found` after a single failed search.
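The fixed search order above can be sketched as a small helper. This is an illustrative sketch only: the store paths are placeholders for the workspace and historical roots the skill names, and the real flow would use the MCP recall and `session_search` tools rather than plain text search.

```python
import os

def kb_search(query, stores):
    """Search stores in a fixed order; stop at the first direct hit.

    stores: ordered list of directory paths (workspace memory first,
    then broader workspace, then historical roots). Hypothetical
    stand-in for the skill's grep steps.
    """
    checked = []
    for store in stores:
        if not os.path.isdir(store):
            continue
        checked.append(store)
        for root, _dirs, files in os.walk(store):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, errors="ignore") as fh:
                        if query in fh.read():
                            return {"status": "direct hit", "path": path}
                except OSError:
                    continue
    # report which sources were checked, never a bare "not found"
    return {"status": "not found", "checked": checked}
```

Note the return shape: a failed lookup still names every store that was checked, which is what the verification section requires.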
## Pitfalls
- do not ignore a compaction summary and start over from zero
- do not keep re-reading files the prompt says are already loaded
- do not turn a loaded-context handoff into a pure retrieval loop when the user already asked for an actual draft, edit, or response
- do not keep searching the current session after the user already named OpenClaw or another persistent store as the likely source
- do not answer from vague memory without a source path, date, or session reference
- do not stop after one failed memory source when others remain
## Verification
- the response names the source store or file
- the response separates direct evidence from inference
- failed lookups list the sources checked, not just a bare `not found`

View File

@@ -0,0 +1,68 @@
---
name: messages-ops
description: Evidence-first live messaging workflow for Hermes. Use when reading iMessage threads, checking X DMs, pulling recent local MFA codes, or proving which thread or message was actually reviewed.
metadata:
hermes:
tags: [generated, messages, imessage, dm, mfa, workflow, verification]
---
# Messages Ops
Use this when the user wants Hermes to read texts or DMs, recover a recent one-time code, inspect a live thread before replying, or prove which message source was checked.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `continuous-agent-loop` when the task spans source resolution, retrieval, and a follow-up draft or action
- `continuous-learning-v2` when the live message task turns into a cross-session memory or pattern-capture issue
- `search-first` before inventing a new message lookup path or raw database query
- `eval-harness` mindset for exact source attribution and blocked-state reporting
## When To Use
- user says `read my messages`, `check texts`, `look in iMessage`, `check DMs`, or similar
- the task depends on a live local Messages thread, an X DM thread, or a recent code delivered to Messages
- the user asks whether Hermes already checked a specific thread, sender, or service
- the prompt mixes live message retrieval with a likely handoff into `email-ops` or `knowledge-ops`
## Workflow
1. Resolve the source first:
- iMessage thread
- recent local code in Messages
- X DM or another browser-gated message source
- email or persistent memory handoff
2. For iMessage threads:
- use `imsg chats` when the chat id is unclear
- use `imsg history --chat-id <ID> --limit <N>` for the actual read
- known chat ids can save turns when they match the ask; for example, Alejandro=`458` and Haley/Henry=`364`
3. For recent one-time codes from local Messages:
- use `python3 $HERMES_HOME/scripts/messages_recent_codes.py --limit 8 --minutes 20`
- add `--query <service>` when the sender or brand is known
- do not jump to ad hoc `sqlite3`, `python3 -c`, or other inline database reads before the helper path is tried
- if one focused resend plus one fresh helper check still shows no match, report `blocked on missing MFA code` and stop
4. For X DMs or other browser-gated message sources:
- use the logged-in browser path first
- verify the active account before reading or drafting
- if auth or MFA blocks access, do the focused code-check path once, then report the exact blocker instead of wandering into side work
5. Hand off cleanly when the task is really another workflow:
- use `email-ops` when the dominant task is mailbox triage, folder moves, replies, or Sent proof
- use `knowledge-ops` when the user says `openclaw memory`, `not in this session`, or the prompt already provides loaded context plus a memory-retrieval hint
6. Report exact evidence:
- name the source tool and thread or chat id when possible
- include sender, service, timestamp window, or blocker
- use exact status words such as `read`, `drafted`, `blocked`, or `awaiting verification`
## Pitfalls
- do not say `I can't retrieve` when `imsg`, the browser session, or the checked helper script can settle the question
- do not confuse live message retrieval with mailbox work; hand off to `email-ops` when email is the real surface
- do not burn turns on raw database one-liners before the checked helper path for local codes
- do not keep re-reading the current session after the user already pointed to OpenClaw or another persistent store
- do not claim a thread was checked without naming the source tool, thread, or service behind the claim
## Verification
- the response names the source store or tool used
- the response includes a thread id, sender, service, or explicit blocker
- MFA/code checks end with either a concrete match or the exact blocked status

View File

@@ -0,0 +1,82 @@
---
name: research-ops
description: Evidence-first research workflow for Hermes. Use when answering current questions, evaluating a market or tool, enriching leads, comparing strategic options, or deciding whether a request should become ongoing monitored data collection.
metadata:
hermes:
tags: [generated, research, market, discovery, monitoring, workflow, verification]
---
# Research Ops
Use this when the user asks Hermes to research something current, compare options, enrich people or companies, or turn repeated lookups into an ongoing monitoring workflow.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `deep-research` for multi-source cited synthesis
- `market-research` for decision-oriented framing
- `exa-search` for first-pass discovery and current-web retrieval
- `continuous-agent-loop` when the task spans user-provided evidence, fresh verification, and a final recommendation across multiple turns
- `data-scraper-agent` when the user really needs recurring collection or monitoring
- `search-first` before building new scraping or enrichment logic
- `eval-harness` mindset for claim quality, freshness, and explicit uncertainty
## When To Use
- user says `research`, `look up`, `find`, `who should i talk to`, `what's the latest`, or similar
- the answer depends on current public information, external sources, or a ranked set of candidates
- the user pastes a compaction summary, copied research, manual calculations, or says `factor this in`
- the user asks `should i do X or Y`, `compare these options`, or wants an explicit recommendation under uncertainty
- the task sounds recurring enough that a scraper or scheduled monitor may be better than a one-off search
## Workflow
1. Start from the evidence already in the prompt:
- treat compaction summaries, pasted research, copied calculations, and quoted assumptions as loaded inputs
- normalize them into `user-provided evidence`, `needs verification`, and `open questions`
- do not restart the analysis from zero if the user already gave you a partial model
2. Classify the ask before searching:
- quick factual answer
- decision memo or comparison
- lead list or enrichment
- recurring monitoring request
3. Build the decision surface before broad searching when the ask is comparative:
- list the options, decision criteria, constraints, and assumptions explicitly
- keep concrete numbers and dates attached to the option they belong to
- mark which variables are already evidenced and which still need outside verification
4. Start with the fastest evidence path:
- use `exa-search` first for broad current-web discovery
- if the question is about a local wrapper, config, or checked-in code path, inspect the live local source before making any web claim
5. Deepen only where the evidence justifies it:
- use `deep-research` when the user needs synthesis, citations, or multiple angles
- use `market-research` when the result should end in a recommendation, ranking, or go/no-go call
- keep `continuous-agent-loop` discipline when the task spans user evidence, fresh searches, and recommendation updates across interruptions
6. Separate fact from inference:
- label sourced facts clearly
- label user-provided evidence clearly
- label inferred fit, ranking, or recommendation as inference
- include dates when freshness matters
7. Decide whether this should stay manual:
- if the user will likely ask for the same scan repeatedly, use `data-scraper-agent` patterns or propose a monitored collection path instead of repeating the same manual research forever
8. Report with evidence:
- group the answer into sourced facts, user-provided evidence, inference, and recommendation when the ask is a comparison or decision
- cite the source or local file behind each important claim
- if evidence is thin or conflicting, say so directly
## Pitfalls
- do not answer current questions from stale memory when a fresh search is cheap
- do not conflate local code-backed behavior with market or web evidence
- do not ignore pasted research or compaction context and redo the whole investigation from scratch
- do not mix user-provided assumptions into sourced facts without labeling them
- do not present unsourced numbers or rankings as facts
- do not spin up a heavy deep-research pass for a quick capability check that local code can answer
- do not leave the comparison criteria implicit when the user asked for a recommendation
- do not keep one-off researching a repeated monitoring ask when automation is the better fit
## Verification
- important claims have a source, file path, or explicit inference label
- user-provided evidence is surfaced as a distinct layer when it materially affects the answer
- freshness-sensitive answers include concrete dates when relevant
- recurring-monitoring recommendations state whether the task should remain manual or graduate to a scraper/workflow

View File

@@ -0,0 +1,80 @@
---
name: subscription-audit-ops
description: Evidence-first recurring-charge and subscription audit workflow for Hermes. Use when auditing personal spend across cards, recurring merchants, and cancellation candidates under time or cash pressure.
metadata:
hermes:
tags: [generated, finance, subscriptions, recurring-charges, credit-karma, email, verification]
---
# Subscription Audit Ops
Use this when the user asks to audit subscriptions, recurring charges, monthly software spend, or cancellation candidates across personal cards and accounts.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `continuous-agent-loop` for bounded multi-step audits with explicit stop conditions when proof is partial
- `agentic-engineering` for exact done conditions and the one-to-three-change scope discipline
- `market-research` when the user wants vendor or plan comparisons before canceling
- `deep-research` and `exa-search` when outside pricing, cancellation flows, or market alternatives need current verification
- `search-first` before inventing a custom scraper, parser, or finance helper
- `eval-harness` mindset for proof tiers, timestamps, and exact confidence labels
## When To Use
- user says `audit my subscriptions`, `what can i cancel`, `find recurring charges`, or similar
- the user wants a ruthless keep/cancel pass before a move, budget cut, or runway review
- direct card exports are missing and you need to assemble evidence from multiple partial sources
## Workflow
1. Start with the freshest finance snapshot available:
- read `/Users/affoon/.hermes/workspace/business/financial-status.md`
- use it as the source of truth for last verified recurring-charge evidence, card balances, and snapshot timestamp
- if it references Credit Karma or other live sources, preserve the exact timestamp in the final answer
2. If live Credit Karma is accessible, use browser tools to confirm:
- net worth page for balances and recent transactions
- manage accounts page for linked institutions and account names
- use `browser_vision` on screenshots when the DOM/snapshot truncates or hides account details
- distinguish `visible recurring charge`, `recent transaction`, and `linked account` clearly
3. If direct transaction exports are unavailable, use fallback evidence layers:
- search prior session logs for saved Credit Karma findings and merchant names
- inspect known finance files in workspace and memory for previous subscription analyses
- use email search for billing/receipt subjects only if it is likely to surface merchant proof
- use a passwords export only as account-existence evidence, never as billing proof
4. Classify findings into proof tiers:
- tier 1: live transaction or current finance snapshot with amount/date
- tier 2: prior saved note with explicit price and service
- tier 3: account-existence evidence only, needs billing verification
5. Build the recommendation set:
- `cancel now` for low-value tools with explicit prior cost evidence or obvious move-related services
- `verify this week` for plausible subscriptions without live billing proof
- `keep for now` for core infra or tools still likely in active use
- call out the biggest unresolved swing item, usually the highest-cost ambiguous plan
6. Report exact confidence:
- say `first-pass audit` if the evidence is partial
- never pretend you have a complete ledger unless you saw a full recurring-charge screen or statement export
- separate `billing proof`, `account evidence`, and `inference`
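The tier classification in the workflow can be written down as a tiny rule. The `finding` flags here are hypothetical field names invented for illustration; the real evidence would come from the snapshot file, saved notes, and account lookups described above.

```python
def proof_tier(finding):
    """Map an audit finding to the skill's three proof tiers.

    finding: dict of evidence flags (hypothetical shape).
    """
    if finding.get("live_transaction") or finding.get("snapshot_amount_date"):
        return 1  # live transaction or current snapshot with amount/date
    if finding.get("saved_note_with_price"):
        return 2  # prior saved note with explicit price and service
    return 3      # account-existence evidence only; needs billing verification
```

Anything that lands in tier 3 belongs in the `verify this week` bucket, never in `cancel now` on its own.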
## Heuristics That Worked
- `financial-status.md` may contain the freshest recurring-charge evidence even when live browser access later fails
- prior Apple Notes / knowledge-base subscription analyses are useful for cost baselines, but they are not live proof
- Credit Karma `Manage accounts` can expose linked institutions even when transaction detail is sparse
- passwords export is good for finding likely paid surfaces like gym, utilities, hosting, SaaS, and media, but should never be used to claim a subscription is active
- move-related audits should explicitly check internet, gym, phone, and location-bound services first
## Pitfalls
- do not claim `every single subscription` unless you have a current recurring-charge list or statement export
- do not turn passwords-export hits into confirmed charges
- do not mix `recent transaction` with `recurring subscription`
- do not hide stale timestamps
- do not miss the biggest swing item just because it is ambiguous; flag it as the top verification priority
## Verification
- answer includes the freshest verified finance timestamp
- each recommendation is labeled by evidence strength
- final output separates `cancel now`, `verify this week`, and `keep for now`
- if coverage is partial, the answer explicitly says so and names the fastest path to full certainty

View File

@@ -0,0 +1,77 @@
---
name: terminal-ops
description: Evidence-first terminal and repo execution workflow for Hermes. Use when fixing CI or build failures, running commands in a repo, applying code changes, or proving what was actually executed, verified, and pushed.
metadata:
hermes:
tags: [generated, terminal, coding, ci, repo, workflow, verification]
---
# Terminal Ops
Use this when the user asks Hermes to fix code, resolve CI failures, run terminal commands in a repo, inspect git state, or push verified changes.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `continuous-agent-loop` for multi-step execution with scope freezes, recovery gates, and progress checkpoints
- `agentic-engineering` for scoped decomposition and explicit done conditions
- `plankton-code-quality` for write-time quality expectations and linter discipline
- `eval-harness` for pass/fail verification after each change
- `search-first` before inventing a new helper, dependency, or abstraction
- `security-review` when secrets, auth, external inputs, or privileged operations are touched
## When To Use
- user says `fix`, `debug`, `run this`, `check the repo`, `push it`, or similar
- the task references CI failures, lint errors, build errors, tests, scripts, or a local repo path
- the answer depends on what a command, diff, branch, or verification step actually shows
## Workflow
1. Resolve the exact working surface first:
- use the user-provided absolute repo path when given
- if the target is not a git repo, do not reach for git-only steps
- prefer `$PRIMARY_REPOS_ROOT/...` over any iCloud or Documents mirror
2. Inspect before editing:
- read the failing command, file, test, or CI error first
- check current branch and local state before changing or pushing anything
- if the prompt already includes loaded-file markers or a compaction summary, use that evidence instead of re-reading blindly
3. Freeze execution mode before editing:
- if the user asked for an audit, review, inspection, or explicitly said `do not modify code`, stay read-only until the user changes scope
- use logs, diffs, git state, file reads, and other non-writing proving steps to gather evidence
- do not apply patches, run codegen, install dependencies, format files, or stage/commit while the task is still read-only
4. Keep fixes narrow and evidence-led:
- solve one dominant failure at a time
- prefer repo-local scripts, package scripts, and checked-in helpers over ad hoc one-liners
- if a dependency or helper is needed, use `search-first` before writing custom glue
5. Verify after each meaningful change:
- rerun the smallest command that proves the fix
- escalate to the broader build, lint, or test only after the local failure is addressed
- if the same proving command keeps failing with the same signature, freeze the broader loop, reduce scope to the failing unit, and stop repeating the same retry
- review the diff before any commit or push
6. Answer status interrupts before more terminal work:
- if the user asks `are you working?`, `did you do it?`, or a low-budget warning appears, stop discovery and answer from the current verified state before running more commands
- lead with the explicitly asked repo, branch, or failure state first, especially when the main fix is still unresolved
- use exact words like `changed locally`, `verified locally`, `pushed`, `blocked`, or `awaiting verification`
7. Push only when the requested state is real:
- distinguish `changed locally`, `verified locally`, `committed`, and `pushed`
- if push is requested, use a non-interactive git flow and report the branch and result
8. Report exact status words:
- drafted, changed locally, verified locally, committed, pushed, blocked, awaiting verification
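The verify-before-claiming discipline above can be sketched as a gate that only emits the skill's exact status words. This is a minimal sketch, assuming the proving command is whatever smallest check locally proves the fix (a single test, a lint target); the commented push line is illustrative, not a prescribed git workflow.

```python
import subprocess

def verify_then_report(proving_cmd):
    """Rerun the smallest proving command before claiming any status.

    proving_cmd: argv list for the check that proves the fix.
    Returns one of the exact status words the skill requires.
    """
    result = subprocess.run(proving_cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return "verified locally"
    # never claim "fixed" when the proving command was not rerun or failed
    return f"blocked: {' '.join(proving_cmd)} exited {result.returncode}"

# After "verified locally" and a diff review, a non-interactive push
# might look like (illustrative):
#   subprocess.run(["git", "push", "origin", "HEAD"], check=True)
```

Only after the gate returns `verified locally` does `committed` or `pushed` become a truthful status word.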
## Pitfalls
- do not guess the failure from memory when logs or tests can settle it
- do not turn an audit-only, review-only, or `do not modify code` request into edits because a fix seems obvious
- do not work in `$DOCUMENTS_ROOT/...` clones when `$PRIMARY_REPOS_ROOT/...` exists
- do not use destructive git commands or revert unrelated local work
- do not claim `fixed` if the proving command was not rerun
- do not claim `pushed` if the change only exists locally
- do not keep rerunning broad verification after the same unchanged failure; narrow the loop or report the blocker
- do not keep exploring after a budget warning or status interrupt when the current repo state can already be reported precisely
## Verification
- the response names the proving command or test and its result
- the response names the repo path and branch when git was involved
- any push claim includes the target branch and exact status

View File

@@ -0,0 +1,128 @@
---
name: hookify-rules
description: This skill should be used when the user asks to create a hookify rule, write a hook rule, configure hookify, add a hookify rule, or needs guidance on hookify rule syntax and patterns.
---
# Writing Hookify Rules
## Overview
Hookify rules are markdown files with YAML frontmatter that define patterns to watch for and messages to show when those patterns match. Rules are stored in `.claude/hookify.{rule-name}.local.md` files.
## Rule File Format
### Basic Structure
```markdown
---
name: rule-identifier
enabled: true
event: bash|file|stop|prompt|all
pattern: regex-pattern-here
---
Message to show Claude when this rule triggers.
Can include markdown formatting, warnings, suggestions, etc.
```
### Frontmatter Fields
| Field | Required | Values | Description |
|-------|----------|--------|-------------|
| name | Yes | kebab-case string | Unique identifier (verb-first: warn-*, block-*, require-*) |
| enabled | Yes | true/false | Toggle without deleting |
| event | Yes | bash/file/stop/prompt/all | Which hook event triggers this |
| action | No | warn/block | warn (default) shows message; block prevents operation |
| pattern | Yes* | regex string | Pattern to match (*or use conditions for complex rules) |
### Advanced Format (Multiple Conditions)
```markdown
---
name: warn-env-api-keys
enabled: true
event: file
conditions:
- field: file_path
operator: regex_match
pattern: \.env$
- field: new_text
operator: contains
pattern: API_KEY
---
You're adding an API key to a .env file. Ensure this file is in .gitignore!
```
**Condition fields by event:**
- bash: `command`
- file: `file_path`, `new_text`, `old_text`, `content`
- prompt: `user_prompt`
**Operators:** `regex_match`, `contains`, `equals`, `not_contains`, `starts_with`, `ends_with`
All conditions must match for the rule to trigger.
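The operator table and the AND semantics can be sketched as a small evaluator. This is an illustrative sketch of how such conditions could be evaluated, not hookify's actual implementation; the function names are invented for this example.

```python
import re

def condition_matches(cond, event_data):
    """Evaluate one condition dict (field/operator/pattern) against
    the field values of the current hook event."""
    value = event_data.get(cond["field"], "")
    pattern = cond["pattern"]
    op = cond["operator"]
    if op == "regex_match":
        return re.search(pattern, value) is not None
    if op == "contains":
        return pattern in value
    if op == "equals":
        return value == pattern
    if op == "not_contains":
        return pattern not in value
    if op == "starts_with":
        return value.startswith(pattern)
    if op == "ends_with":
        return value.endswith(pattern)
    raise ValueError(f"unknown operator: {op}")

def rule_triggers(conditions, event_data):
    # all conditions must match for the rule to trigger
    return all(condition_matches(c, event_data) for c in conditions)
```

Running the `warn-env-api-keys` example through this evaluator: an edit to `.env` that adds `API_KEY` triggers, while the same text in `config.py` does not.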
## Event Type Guide
### bash Events
Match Bash command patterns:
- Dangerous commands: `rm\s+-rf`, `dd\s+if=`, `mkfs`
- Privilege escalation: `sudo\s+`, `su\s+`
- Permission issues: `chmod\s+777`
### file Events
Match Edit/Write/MultiEdit operations:
- Debug code: `console\.log\(`, `debugger`
- Security risks: `eval\(`, `innerHTML\s*=`
- Sensitive files: `\.env$`, `credentials`, `\.pem$`
### stop Events
Completion checks and reminders. The pattern `.*` always matches.
### prompt Events
Match user prompt content for workflow enforcement.
## Pattern Writing Tips
### Regex Basics
- Escape special chars: `.` to `\.`, `(` to `\(`
- `\s` whitespace, `\d` digit, `\w` word char
- `+` one or more, `*` zero or more, `?` optional
- `|` OR operator
### Common Pitfalls
- **Too broad**: `log` matches "login", "dialog" — use `console\.log\(`
- **Too specific**: `rm -rf /tmp` — use `rm\s+-rf`
- **YAML escaping**: Use unquoted patterns; quoted strings need `\\s`
### Testing
```bash
python3 -c "import re; print(re.search(r'your_pattern', 'test text'))"
```
## File Organization
- **Location**: `.claude/` directory in project root
- **Naming**: `.claude/hookify.{descriptive-name}.local.md`
- **Gitignore**: Add `.claude/*.local.md` to `.gitignore`
## Commands
- `/hookify [description]` - Create new rules (auto-analyzes conversation if no args)
- `/hookify-list` - View all rules in table format
- `/hookify-configure` - Toggle rules on/off interactively
- `/hookify-help` - Full documentation
## Quick Reference
Minimum viable rule:
```markdown
---
name: my-rule
enabled: true
event: bash
pattern: dangerous_command
---
Warning message here
```

View File

@@ -86,6 +86,51 @@ Include:
- revenue math that does not sum cleanly
- inflated certainty where assumptions are fragile
## Pitch Deck Structure (10-12 slides)
Recommended flow with guidance per slide:
1. **Title:** Company name + one-line positioning
2. **Problem:** Quantify the pain. Use specific market numbers.
3. **Solution:** Show the product, not a description of it. Screenshots, demo frames, or architecture diagrams.
4. **Product demo:** Phase 1 (live) vs Phase 2 (funded). Always present as phased if not fully built.
5. **Market:** TAM with source. Growth rate with source. Key catalyst for "why now."
6. **Business model:** Revenue layers with year-by-year projections. Show the math.
7. **Traction:** Working product, users, revenue, waitlist, partnerships. Concrete numbers only.
8. **Team:** Each founder gets a credibility anchor (prior company, metric, recognition).
9. **Competitive landscape:** Positioning map or table. Show the gap you fill.
10. **Ask:** Raise amount, instrument (SAFE/priced), valuation range.
11. **Milestones / Use of funds:** Timeline from now to Series A. Use of funds must sum exactly.
12. **Appendix:** Revenue model detail, regulatory strategy, technical architecture.
## Financial Model Requirements
When building or updating financial models:
- All assumptions must be stated explicitly and separately from projections
- Include bear/base/bull scenarios (sensitivity analysis)
- Revenue layers must sum correctly across all timeframes
- Use of funds must sum to the exact raise amount
- Include unit economics where possible (cost per user, revenue per customer)
- Discount rates and growth rates must be sourced or justified
- Milestone-linked spending: tie spend to specific milestones
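The two summation requirements above are mechanical enough to check in code. A minimal sketch, with invented function names and example figures:

```python
def check_use_of_funds(raise_amount, line_items):
    """Use of funds must sum to the exact raise amount."""
    total = sum(line_items.values())
    return total == raise_amount, total

def check_revenue_layers(layers, reported_totals):
    """Revenue layers must sum correctly across all timeframes.

    layers: {layer_name: {period: amount}}
    reported_totals: {period: reported_total}
    Returns mismatched periods as {period: (computed, reported)}.
    """
    mismatches = {}
    for period, reported in reported_totals.items():
        computed = sum(layer[period] for layer in layers.values())
        if computed != reported:
            mismatches[period] = (computed, reported)
    return mismatches
```

Running these checks before delivery catches the "revenue math that does not sum cleanly" red flag called out earlier in the skill.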
## Accelerator Applications
When writing accelerator applications:
- Follow the specific word/character limits of each program
- Lead with traction (metrics, users, revenue, recognition)
- Be specific about what the accelerator adds (network, funding, customers)
- Never sound desperate. Frame as mutual fit.
- Keep internal metrics consistent with the deck and model
## Honesty Requirements
These are non-negotiable:
- Clearly distinguish between what is live/working and what requires funding
- Never attach revenue figures to things that are not revenue-generating
- Never claim awards or recognition not actually received
- Say "algorithmic" when the tech is algorithmic, and "AI" only when there is actual ML/AI
- All traction claims must be verifiable
## Quality Gate
Before delivering:
@@ -94,3 +139,6 @@ Before delivering:
- assumptions are visible, not buried
- the story is clear without hype language
- the final asset is defensible in a partner meeting
- phase distinctions (live vs funded) are clear
- no unverifiable claims
- team roles and titles are correct

View File

@@ -66,6 +66,63 @@ Include:
- one new proof point if available
- the next step
## Cold Email Structure (Detailed)
**Subject line:** Short, specific, no fluff. Reference something the investor actually did or said.
- Good: "Structured products for prediction markets (Goldman co-founder)"
- Good: "[Fund Name] thesis + prediction market infrastructure"
- Bad: "Exciting opportunity in DeFi"
- Bad: "Partnership inquiry"
**Opening (1-2 sentences):** Reference something specific about the investor. A deal they led, a thesis they published, a tweet they posted. This cannot be generic.
**The pitch (2-3 sentences):** What you are building, why now, and the one metric that proves traction.
**The ask (1 sentence):** Specific, low-friction. "Would you have 20 minutes this week?" or "Would it make sense to share our memo?"
**Sign-off:** Name, title, one credibility line.
### Email Tone Rules
- Direct. No begging. No "I know you're busy" or "I'd be honored if you..."
- Confident but not arrogant. Let facts do the heavy lifting.
- Short. Under 150 words total. Investors get 200+ emails/day.
- Zero em dashes. Zero corporate speak.
## Warm Intro Requests (Detailed)
Template for requesting intros through mutual connections:
```
hey [mutual name],
quick ask. i see you know [target name] at [company].
i'm building [your product] which [1-line relevance to target].
would you be open to a quick intro? happy to send you a
forwardable blurb.
[your name]
```
## Direct Cold Outreach Template
```
hey [target name],
[specific reference to their recent work/post/announcement].
i'm [your name], building [product]. [1 line on why this is
relevant to them specifically].
[specific low-friction ask].
[your name]
```
## Anti-Patterns (Never Do)
- Generic templates with no personalization
- Long paragraphs explaining your whole company
- Multiple asks in one message
- Fake familiarity ("loved your recent talk!" without specifics)
- Bulk-sent messages with visible merge fields
## Quality Gate
Before delivering:
@@ -74,3 +131,6 @@ Before delivering:
- there is no fluff or begging language
- the proof point is concrete
- word count stays tight
- under 150 words (cold email) or 200 words (follow-up)
- correct branding and terminology
- numbers match canonical sources

View File

@@ -0,0 +1,124 @@
---
name: knowledge-ops
description: Knowledge base management, ingestion, sync, and retrieval across multiple storage layers (local files, MCP memory, vector stores, Git repos). Use when the user wants to save, organize, sync, deduplicate, or search across their knowledge systems.
origin: ECC
---
# Knowledge Operations
Manage a multi-layered knowledge system for ingesting, organizing, syncing, and retrieving knowledge across multiple stores.
## When to Activate
- User wants to save information to their knowledge base
- Ingesting documents, conversations, or data into structured storage
- Syncing knowledge across systems (local files, MCP memory, Supabase, Git repos)
- Deduplicating or organizing existing knowledge
- User says "save this to KB", "sync knowledge", "what do I know about X", "ingest this", "update the knowledge base"
- Any knowledge management task beyond simple memory recall
## Knowledge Architecture
### Layer 1: Claude Code Memory (Quick Access)
- **Path:** `~/.claude/projects/*/memory/`
- **Format:** Markdown files with frontmatter
- **Types:** user preferences, feedback, project context, reference
- **Use for:** Quick-access context that persists across conversations
- **Automatically loaded at session start**
### Layer 2: MCP Memory Server (Structured Knowledge Graph)
- **Access:** MCP memory tools (create_entities, create_relations, add_observations, search_nodes)
- **Use for:** Semantic search across all stored memories, relationship mapping
- **Cross-session persistence with queryable graph structure**
### Layer 3: External Data Store (Supabase, PostgreSQL, etc.)
- **Use for:** Structured data, large document storage, full-text search
- **Good for:** Documents too large for memory files, data needing SQL queries
### Layer 4: Git-Backed Knowledge Base
- **Use for:** Version-controlled knowledge, shareable context
- **Good for:** Conversation exports, session snapshots, reference documents
- **Benefits:** Full history, collaboration, backup
## Ingestion Workflow
When new knowledge needs to be captured:
### 1. Classify
What type of knowledge is it?
- Business decision -> memory file (project type) + MCP memory
- Personal preference -> memory file (user/feedback type)
- Reference info -> memory file (reference type) + MCP memory
- Large document -> external data store + summary in memory
- Conversation/session -> Git knowledge base repo
### 2. Deduplicate
Check if this knowledge already exists:
- Search memory files for existing entries
- Query MCP memory with relevant terms
- Do not create duplicates. Update existing entries instead.
### 3. Store
Write to appropriate layer(s):
- Always update Claude Code memory for quick access
- Use MCP memory for semantic searchability and relationship mapping
- Commit to Git KB for version control on major additions
### 4. Index
Update any relevant indexes or summary files.
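The classify-deduplicate-store-index loop can be sketched as a search-first upsert. A minimal illustration (the `memory/` path, slug naming, and frontmatter shape are assumptions, not a fixed API):

```python
from pathlib import Path

# Hypothetical memory directory; real layouts live under ~/.claude/projects/*/memory/
MEMORY_DIR = Path("memory")

def slugify(title: str) -> str:
    """lowercase-kebab-case naming convention for knowledge files."""
    return "-".join(title.lower().split())

def upsert_note(title: str, body: str) -> Path:
    """Search first, then create or update -- never duplicate."""
    MEMORY_DIR.mkdir(exist_ok=True)
    path = MEMORY_DIR / f"{slugify(title)}.md"
    if path.exists():
        # Update the existing entry in place instead of creating a duplicate
        path.write_text(path.read_text() + "\n" + body)
    else:
        # New entry: YAML frontmatter plus the body
        path.write_text(f"---\ntitle: {title}\n---\n{body}")
    return path
```

Calling `upsert_note` twice with the same title appends to the one existing file rather than creating a second entry.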
## Sync Operations
### Conversation Sync
Periodically sync conversation history into the knowledge base:
- Sources: Claude session files, Codex sessions, other agent sessions
- Destination: Git knowledge base repo
- Generate a session index for quick browsing
- Commit and push
### Workspace State Sync
Mirror workspace configuration and scripts to the knowledge base:
- Generate directory maps
- Redact sensitive config before committing
- Track changes over time
### Cross-Source Knowledge Sync
Pull knowledge from multiple sources into one place:
- Claude/ChatGPT/Grok conversation exports
- Browser bookmarks
- GitHub activity events
- Write status summary, commit and push
## Memory Patterns
```
# Short-term: current session context
Use TodoWrite for in-session task tracking
# Medium-term: project memory files
Write to ~/.claude/projects/*/memory/ for cross-session recall
# Long-term: MCP knowledge graph
Use mcp__memory__create_entities for permanent structured data
Use mcp__memory__create_relations for relationship mapping
Use mcp__memory__add_observations for new facts about known entities
Use mcp__memory__search_nodes to find existing knowledge
```
## Best Practices
- Keep memory files concise. Archive old data rather than letting files grow unbounded.
- Use frontmatter (YAML) for metadata on all knowledge files.
- Deduplicate before storing. Search first, then create or update.
- Redact sensitive information (API keys, passwords) before committing to Git.
- Use consistent naming conventions for knowledge files (lowercase-kebab-case).
- Tag entries with topics/categories for easier retrieval.
## Quality Gate
Before completing any knowledge operation:
- no duplicate entries created
- sensitive data redacted from any Git-tracked files
- indexes and summaries updated
- appropriate storage layer chosen for the data type
- cross-references added where relevant


@@ -0,0 +1,186 @@
---
name: lead-intelligence
description: AI-native lead intelligence and outreach pipeline. Replaces Apollo, Clay, and ZoomInfo with agent-powered signal scoring, mutual ranking, warm path discovery, and personalized outreach. Use when the user wants to find, qualify, and reach high-value contacts.
origin: ECC
---
# Lead Intelligence
Agent-powered lead intelligence pipeline that finds, scores, and reaches high-value contacts through social graph analysis and warm path discovery.
## When to Activate
- User wants to find leads or prospects in a specific industry
- Building an outreach list for partnerships, sales, or fundraising
- Researching who to reach out to and the best path to reach them
- User says "find leads", "outreach list", "who should I reach out to", "warm intros"
- Needs to score or rank a list of contacts by relevance
- Wants to map mutual connections to find warm introduction paths
## Tool Requirements
### Required
- **Exa MCP** -- Deep web search for people, companies, and signals (`web_search_exa`)
- **X API** -- Follower/following graph, mutual analysis, recent activity
### Optional (enhance results)
- **LinkedIn** -- Via browser-use MCP or direct API for connection graph
- **Apollo/Clay API** -- For enrichment cross-reference if user has access
- **GitHub MCP** -- For developer-centric lead qualification
## Pipeline Overview
```
1. Signal -> 2. Mutual -> 3. Warm Path -> 4. Enrich -> 5. Outreach
Scoring Ranking Discovery Draft
```
## Stage 1: Signal Scoring
Search for high-signal people in target verticals. Assign a weight to each based on:
| Signal | Weight | Source |
|--------|--------|--------|
| Role/title alignment | 30% | Exa, LinkedIn |
| Industry match | 25% | Exa company search |
| Recent activity on topic | 20% | X API search, Exa |
| Follower count / influence | 10% | X API |
| Location proximity | 10% | Exa, LinkedIn |
| Engagement with your content | 5% | X API interactions |
### Signal Search Approach
1. Define target parameters (verticals, roles, locations)
2. Run Exa deep search for people and companies in each vertical
3. Run X API search for active voices on relevant topics
4. Score each result against the signal weights
5. Rank and deduplicate
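The scoring step can be expressed as a simple weighted sum. A sketch, where the weight names mirror the table above and each signal is a hypothetical 0-1 sub-score:

```python
# Weights mirror the signal table above; sub-scores are normalized 0-1.
SIGNAL_WEIGHTS = {
    "role_alignment": 0.30,
    "industry_match": 0.25,
    "recent_activity": 0.20,
    "influence": 0.10,
    "location": 0.10,
    "engagement": 0.05,
}

def score_lead(signals: dict) -> float:
    """Weighted sum of 0-1 sub-scores, returned as a 0-100 composite."""
    total = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    return round(total * 100, 1)

# Illustrative leads, not real data
leads = [
    {"handle": "@a", "signals": {"role_alignment": 1.0, "industry_match": 0.8,
                                 "recent_activity": 0.5, "influence": 0.2,
                                 "location": 1.0, "engagement": 0.0}},
    {"handle": "@b", "signals": {"role_alignment": 0.4, "industry_match": 1.0,
                                 "recent_activity": 0.9, "influence": 0.6,
                                 "location": 0.0, "engagement": 1.0}},
]
ranked = sorted(leads, key=lambda l: score_lead(l["signals"]), reverse=True)
```

Scores above a chosen threshold move on to mutual ranking; the rest are dropped or parked.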
## Stage 2: Mutual Ranking
For each scored target, analyze the user's social graph to find the warmest path.
### Algorithm
1. Pull user's X following list and LinkedIn connections
2. For each high-signal target, check for shared connections
3. Rank mutuals by:
| Factor | Weight |
|--------|--------|
| Number of connections to targets | 40% |
| Mutual's current role/company | 20% |
| Mutual's location | 15% |
| Industry alignment | 15% |
| Mutual's identifiability (handle/profile) | 10% |
### Output Format
```
MUTUAL RANKING REPORT
=====================
#1 @mutual_handle (Score: 92)
Name: Jane Smith
Role: Partner @ Acme Ventures
Location: San Francisco
Connections to targets: 7
Connected to: @target1, @target2, @target3, ...
Best intro path: Jane invested in Target1's company
#2 @mutual_handle2 (Score: 85)
...
```
## Stage 3: Warm Path Discovery
For each target, find the shortest introduction chain:
```
You --[follows]--> Mutual A --[invested in]--> Target Company
You --[follows]--> Mutual B --[co-founded with]--> Target Person
You --[met at]--> Event --[also attended]--> Target Person
```
### Path Types (ordered by warmth)
1. **Direct mutual** -- You both follow/know the same person
2. **Portfolio connection** -- Mutual invested in or advises target's company
3. **Co-worker/alumni** -- Mutual worked at same company or attended same school
4. **Event overlap** -- Both attended same conference/program
5. **Content engagement** -- Target engaged with mutual's content or vice versa
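Under the hood, warm path discovery is a shortest-path search over a relationship graph. A minimal breadth-first sketch over a toy edge list (the people and relations are illustrative assumptions):

```python
from collections import deque

# Toy relationship graph: (person, relation, person) triples.
EDGES = [
    ("you", "follows", "mutual_a"),
    ("mutual_a", "invested_in", "target_co"),
    ("target_co", "founded_by", "target"),
    ("you", "follows", "mutual_b"),
    ("mutual_b", "follows", "target"),
]

def shortest_intro_path(start: str, goal: str):
    """BFS returns the fewest-hop intro chain, or None if no path exists."""
    graph = {}
    for a, rel, b in EDGES:
        graph.setdefault(a, []).append((rel, b))
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"--[{rel}]--> {nxt}"]))
    return None
```

Because BFS explores hop by hop, the direct mutual (`mutual_b`) wins over the longer portfolio chain, matching the warmth ordering above.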
## Stage 4: Enrichment
For each qualified lead, pull:
- Full name, current title, company
- Company size, funding stage, recent news
- Recent X posts (last 30 days): topics, tone, interests
- Mutual interests with user (shared follows, similar content)
- Recent company events (product launch, funding round, hiring)
### Enrichment Sources
- Exa: company data, news, blog posts
- X API: recent tweets, bio, followers
- GitHub: open source contributions (for developer-centric leads)
- LinkedIn (via browser-use): full profile, experience, education
## Stage 5: Outreach Draft
Generate personalized outreach for each lead. Two modes:
### Warm Intro Request (to mutual)
```
hey [mutual name],
quick ask. i see you know [target name] at [company].
i'm building [your product] which [1-line relevance to target].
would you be open to a quick intro? happy to send you a
forwardable blurb.
[your name]
```
### Direct Cold Outreach (to target)
```
hey [target name],
[specific reference to their recent work/post/announcement].
i'm [your name], building [product]. [1 line on why this is
relevant to them specifically].
[specific low-friction ask].
[your name]
```
### Anti-Patterns (never do)
- Generic templates with no personalization
- Long paragraphs explaining your whole company
- Multiple asks in one message
- Fake familiarity ("loved your recent talk!" without specifics)
- Bulk-sent messages with visible merge fields
## Configuration
Users should set these environment variables:
```bash
# Required
export X_BEARER_TOKEN="..."
export X_ACCESS_TOKEN="..."
export X_ACCESS_TOKEN_SECRET="..."
export X_API_KEY="..."
export X_API_SECRET="..."
export EXA_API_KEY="..."
# Optional
export LINKEDIN_COOKIE="..." # For browser-use LinkedIn access
export APOLLO_API_KEY="..." # For Apollo enrichment
```
## Related Skills
- `x-api` -- X/Twitter API integration for graph analysis
- `investor-outreach` -- Investor-specific outreach patterns
- `market-research` -- Company and fund due diligence


@@ -65,6 +65,30 @@ Default structure:
5. recommendation
6. sources
## Domain-Specific Research Context
When researching specific verticals, collect domain-specific signals:
### Prediction Markets
Key metrics: Volume, open interest, user count, market categories
Regulatory landscape: CFTC (US), FCA (UK), global patchwork
Key players: Polymarket, Kalshi, Robinhood (event contracts), Metaculus, Manifold
### DeFi / Structured Products
Key concepts: Vaults, exotic options, baskets, LP positions, DLMM
Key players: Cega, Ribbon Finance, Opyn, OrBit Markets
Chain-specific context matters (Solana vs Ethereum vs L2s)
### AI Agent Security
Key concepts: Agent permissions, tool poisoning, prompt injection, OWASP LLM Top 10
Key players: Invariant Labs, Backslash, Dam Secure, Cogent Security, Entire, Pillar Security
### General Research Practices
- For investor due diligence: produce a 200-300 word dossier with fund overview, relevant investments, thesis alignment, suggested angle, and red flags
- For competitive analysis: always include "so what" for each finding relative to the user's venture
- For market sizing: follow TAM/SAM/SOM with explicit growth rate (CAGR with source), key drivers, and key risks
- For technology research: cover architecture (not marketing), trade-offs, adoption signals (GitHub stars, npm downloads, TVL if DeFi), and integration complexity
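For the market-sizing growth rate, a quick CAGR sanity-check helper (the example figures are illustrative, not sourced):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate, returned as a percentage."""
    return round(((end_value / start_value) ** (1 / years) - 1) * 100, 1)

# e.g. a market growing from $2B to $6B over 5 years
growth = cagr(2.0, 6.0, 5)
```

Always pair the computed rate with the source of the start and end figures, per the sizing guidance above.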
## Quality Gate
Before delivering:
@@ -73,3 +97,5 @@ Before delivering:
- the recommendation follows from the evidence
- risks and counterarguments are included
- the output makes a decision easier
- no filler paragraphs or generic market commentary
- contrarian/risk perspective explicitly included

skills/oura-health/SKILL.md Normal file

@@ -0,0 +1,162 @@
---
name: oura-health
description: Oura Ring health data sync, analysis, and wellness reporting via the Oura API v2. Sleep, readiness, activity, HRV, heart rate, SpO2, stress, and resilience tracking with automated sync and trend analysis. Use when the user wants health data, sleep scores, recovery metrics, or wellness reports from their Oura Ring.
origin: ECC
---
# Oura Health Operations
Sync, analyze, and report on health data from the Oura Ring via the Oura API v2.
## When to Activate
- User asks about sleep quality, readiness, recovery, activity, or health stats
- Generating wellness reports or trend analysis
- Syncing Oura data to local storage or knowledge base
- User says "how did I sleep", "my health stats", "wellness report", "check my Oura", "am I recovered"
- Any Oura API interaction or biometric data analysis
## Authentication
Oura uses OAuth2. Store credentials in environment variables or a `.env` file:
```bash
export OURA_CLIENT_ID="your-client-id"
export OURA_CLIENT_SECRET="your-client-secret"
export OURA_ACCESS_TOKEN="your-access-token"
export OURA_REFRESH_TOKEN="your-refresh-token"
```
### Token Refresh
If the access token is expired, refresh it:
```bash
curl -X POST "https://api.ouraring.com/oauth/token" \
-d "grant_type=refresh_token" \
-d "refresh_token=$OURA_REFRESH_TOKEN" \
-d "client_id=$OURA_CLIENT_ID" \
-d "client_secret=$OURA_CLIENT_SECRET"
```
Update your `.env` with the new tokens after refresh.
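The same refresh can be scripted with only the Python standard library. A sketch; the request body follows the standard OAuth2 refresh-token grant:

```python
import json
import os
import urllib.parse
import urllib.request

TOKEN_URL = "https://api.ouraring.com/oauth/token"

def build_refresh_payload(env) -> dict:
    """Standard OAuth2 refresh-token grant body, read from the environment."""
    return {
        "grant_type": "refresh_token",
        "refresh_token": env["OURA_REFRESH_TOKEN"],
        "client_id": env["OURA_CLIENT_ID"],
        "client_secret": env["OURA_CLIENT_SECRET"],
    }

def refresh_tokens() -> dict:
    """POST the refresh grant; the response carries the new token pair."""
    data = urllib.parse.urlencode(build_refresh_payload(os.environ)).encode()
    req = urllib.request.Request(TOKEN_URL, data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # new access_token and refresh_token
```

Persist the returned `access_token` and `refresh_token` back into your `.env` before the next call.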
## API Endpoints (v2)
Base URL: `https://api.ouraring.com/v2/usercollection/`
| Endpoint | Data |
|----------|------|
| `daily_sleep` | Sleep score, total sleep, deep/REM/light, efficiency, latency |
| `daily_activity` | Activity score, steps, calories, active time, sedentary time |
| `daily_readiness` | Readiness score, HRV balance, body temp, recovery index |
| `daily_spo2` | Blood oxygen levels |
| `daily_stress` | Stress levels throughout the day |
| `daily_resilience` | Resilience score and contributing factors |
| `heart_rate` | Continuous heart rate data |
| `sleep` | Detailed sleep periods with phases |
| `workout` | Exercise sessions |
| `daily_cardiovascular_age` | Cardio age estimate |
All endpoints accept `start_date` and `end_date` query params (YYYY-MM-DD format).
### Example API Call
```bash
curl -H "Authorization: Bearer $OURA_ACCESS_TOKEN" \
"https://api.ouraring.com/v2/usercollection/daily_sleep?start_date=2026-03-25&end_date=2026-03-31"
```
```python
import os, requests
headers = {"Authorization": f"Bearer {os.environ['OURA_ACCESS_TOKEN']}"}
params = {"start_date": "2026-03-25", "end_date": "2026-03-31"}
resp = requests.get(
    "https://api.ouraring.com/v2/usercollection/daily_sleep",
    headers=headers, params=params
)
resp.raise_for_status()  # fail loudly on expired tokens or bad params
data = resp.json()["data"]
```
## Wellness Report Format
When generating health summaries, use this structure:
1. **Overall Status** -- composite of sleep + readiness + activity scores
2. **Sleep Quality** -- total sleep, deep sleep %, REM %, efficiency, sleep debt
3. **Recovery** -- readiness score, HRV trend, resting HR, body temperature deviation
4. **Activity** -- steps, active calories, goal progress
5. **Trends** -- 7-day rolling averages for key metrics
6. **Recommendations** -- 1-2 actionable suggestions based on the data
Keep it concise. Numbers and insights, not paragraphs.
### Example Output
```
WELLNESS REPORT (Mar 25-31)
===========================
Overall: GOOD (avg readiness 78, sleep 82, activity 71)
Sleep: 7h12m avg (deep 18%, REM 22%, light 60%)
Efficiency: 91% | Latency: 8min avg
Sleep debt: -1h32m (improving)
Recovery: Readiness trending up (+4 over 7d)
HRV: 42ms avg (baseline 38ms) | Resting HR: 58 bpm
Body temp: +0.1C (normal range)
Activity: 8,420 steps/day avg | 2,180 active cal
Goal progress: 85% | Sedentary: 9.2h avg
Recommendations:
- Deep sleep below 20%. Try earlier last meal (3h before bed).
- Activity trending down. Add a 20min walk in the afternoon.
```
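The trend lines above use trailing 7-day windows. A minimal rolling-average sketch (the readiness series is illustrative):

```python
def rolling_average(values, window=7):
    """Trailing rolling mean; early days average over whatever is available."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(round(sum(chunk) / len(chunk), 1))
    return out

readiness = [70, 72, 75, 74, 78, 80, 82, 85]  # hypothetical daily scores
trend = rolling_average(readiness)
```

Comparing the latest window against the previous one gives the "+4 over 7d" style deltas in the report.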
## Automated Sync
Set up a daily sync cron to pull Oura data automatically:
```bash
# Example: daily sync at 10 AM
claude -p "Pull yesterday's Oura data (sleep, readiness, activity, HRV) and write a summary to memory."
```
### Sync Script Pattern
```bash
#!/bin/bash
# oura-sync.sh - Pull daily Oura data
source ~/.env.oura
# Refresh token if needed
# ... (token refresh logic)
DATE=$(date -v-1d +%Y-%m-%d)  # yesterday (BSD/macOS; on GNU use: date -d yesterday +%Y-%m-%d)
for endpoint in daily_sleep daily_activity daily_readiness daily_stress; do
curl -s -H "Authorization: Bearer $OURA_ACCESS_TOKEN" \
"https://api.ouraring.com/v2/usercollection/${endpoint}?start_date=${DATE}&end_date=${DATE}" \
> "oura-data/${endpoint}_${DATE}.json"
done
# Generate markdown summary from JSON files
```
## Integration with Other Skills
- Use with `knowledge-ops` to persist health trends in the knowledge base
- Use with `content-engine` to create health-related content
- Use with `autonomous-loops` for scheduled health check-ins
## Security
- Never commit Oura tokens to Git
- Store OAuth credentials in `.env` files (gitignored)
- Token auto-refresh should update the `.env` file in place
- Log functions should write to stderr to avoid polluting data pipelines


@@ -0,0 +1,210 @@
---
name: pmx-guidelines
description: PMX prediction markets platform development guidelines, architecture patterns, and project-specific best practices. Use when working on PMX or similar Next.js + Supabase + Solana prediction market codebases. Covers semantic search, real-money safety, deployment workflow, and design system rules.
origin: ECC
---
# PMX Prediction Markets - Development Guidelines
Project-specific development guidelines, architectural patterns, and critical rules for prediction market platforms handling real money.
## When to Activate
- Working on a prediction markets platform
- Building a Next.js + Supabase + Solana application that handles real money
- Implementing semantic search with Redis + OpenAI embeddings
- Following strict deployment workflows (no direct push to production)
- User references PMX or a similar prediction market codebase
## Core Principles
### 1. Real Money Safety
This platform handles real money for real users.
- ONE person does all code review and production deployments
- Bugs mean financial losses. Test everything.
- NEVER push until EVERYTHING is bulletproof
### 2. Many Small Files > Few Large Files
- High cohesion, low coupling
- Files: 200-400 lines typical, 800 max
- Extract utilities from large components
- Organize by feature/domain, not by code type
### 3. No Emojis in Codebase
- Never in documentation, code comments, console output, or error messages
- Use clear text: "SUCCESS:" not checkmarks, "ERROR:" not X marks
### 4. Single README.md for All Documentation
- All documentation in ONE file: `README.md`
- Use collapsible sections for organization
- No separate .md files (except CLAUDE.md)
### 5. Immutability in JavaScript/TypeScript
Always create new objects, never mutate:
```javascript
// WRONG: Mutating the caller's options object
async function fetchWithRetry(url, options, newToken) {
  options.headers['Authorization'] = newToken // MUTATION
  return await fetch(url, options)
}
// CORRECT: Create a new object, leaving the input untouched
async function fetchWithRetry(url, options, newToken) {
  const retryOptions = {
    ...options,
    headers: {
      ...options.headers,
      Authorization: `Bearer ${newToken}`
    }
  }
  return await fetch(url, retryOptions)
}
```
## Tech Stack
- **Frontend**: Next.js 15, React 19, TypeScript
- **Database**: Supabase PostgreSQL
- **Blockchain**: Solana Web3.js
- **Authentication**: Privy (@privy-io/react-auth)
- **UI**: Tailwind CSS + Three.js
- **Search**: Redis Stack + OpenAI embeddings (semantic search)
- **Trading**: Meteora CP-AMM SDK
- **Charts**: Lightweight Charts + Recharts
## Semantic Search System
### Architecture (Mandatory Pattern)
```
User Query -> Debounce (500ms) -> Generate Embedding (OpenAI) ->
Vector Search (Redis KNN) -> Fetch Data (Supabase) ->
Sort by Similarity -> Results Displayed
```
### Performance
- ~300ms average (200ms OpenAI + 10ms Redis + 50ms Supabase)
- Cost: ~$0.0001 per search
- Scales to 100K+ markets
- Automatic fallback to substring search if Redis unavailable
### Critical Implementation Rule
Always use the useSemanticSearch hook:
```javascript
const {
searchQuery,
searchResults,
isSearching,
handleSearchChange, // Use this, NOT setSearchQuery
clearSearch
} = useSemanticSearch(500)
// ALWAYS check semantic search is enabled
if (isSemanticEnabled && searchResults) {
// Use semantic results
} else {
// Use substring fallback
}
```
## Git Workflow (Critical)
1. Test locally FIRST
2. Commit locally only: `git commit`
3. DO NOT push upstream
4. Wait for review
5. Small commits: one logical change per commit
6. Never commit: `.env.local`, `node_modules/`, build artifacts
### Branch Strategy
- `main` (upstream) - PRODUCTION - DO NOT PUSH DIRECTLY
- `main` (fork) - Your fork - Push here for testing
- Feature branches: `fix/issue-32`, `feat/add-feature`
## Pre-Deployment Requirements (Mandatory)
1. **Comprehensive Local Testing**
- Every feature tested manually
- Every API endpoint tested with curl
- Every edge case considered
- Mobile AND desktop tested
- No console errors anywhere
2. **Comprehensive Automated Testing**
- Unit tests for utilities
- Integration tests for APIs
- E2E tests for critical flows
- Database migration tests
- Race condition tests
3. **Security Review**
- No secrets exposed
- All inputs validated
- Authentication/authorization verified
- Rate limiting tested
4. **Only After All Above**
- Commit locally with detailed message
- DO NOT push to production
- Wait for project owner review
## CSS and Styling Rules
- **Unify styles**: Extend existing patterns, never create parallel systems
- **Tailwind first**: Prefer Tailwind utilities over custom CSS
- **One CSS file only**: `globals.css`
- **Delete unused styles immediately**
### Design System
- Background: `#0a1929`, `#1a2a3a`, `#2a3a4a`
- Primary: `#00bcd4` (cyan)
- Text: `#ffffff` (white), `#94a3b8` (gray)
- Spacing: Tailwind spacing scale (4px base)
- Typography: Inter font family
- Shadows: Subtle glows for glass-morphism
## Next.js Best Practices
- Use Next.js built-ins first: `router.back()`, `useSearchParams()`, Server Components
- Client components: `'use client'` for interactivity
- Server components: Static content, data fetching, SEO
- Proper loading states on all data-fetching components
- Dynamic imports for code splitting
## API Design
### Conventions
- Next.js API Routes pattern (route.js files)
- Authentication via Privy (JWT tokens)
- Supabase client for database operations
- Comprehensive error handling with NextResponse
- Rate limiting on search endpoints
- Caching headers for static data
### Supabase Pattern (Mandatory)
```javascript
const { data, error } = await supabase
.from('markets')
.select('*')
.eq('published', true)
if (error) {
console.error('Supabase error:', error)
return NextResponse.json({ success: false, error: error.message }, { status: 500 })
}
return NextResponse.json({ success: true, data })
```
## Security
- All API keys server-side only
- Input validation on all API routes
- Sanitize user inputs before queries
- Use Supabase RLS (Row Level Security)
- Rate limiting on expensive operations
- HTTPS everywhere in production
- Never hardcode secrets (Supabase URLs, OpenAI keys, Redis connection strings)
**Remember**: This handles real money. Reliability and correctness are paramount. Test everything. Never push directly to production.


@@ -1,79 +0,0 @@
# Product Lens — Think Before You Build
## When to Use
- Before starting any feature — validate the "why"
- Weekly product review — are we building the right thing?
- When stuck choosing between features
- Before a launch — sanity check the user journey
- When converting a vague idea into a spec
## How It Works
### Mode 1: Product Diagnostic
Like YC office hours but automated. Asks the hard questions:
```
1. Who is this for? (specific person, not "developers")
2. What's the pain? (quantify: how often, how bad, what do they do today?)
3. Why now? (what changed that makes this possible/necessary?)
4. What's the 10-star version? (if money/time were unlimited)
5. What's the MVP? (smallest thing that proves the thesis)
6. What's the anti-goal? (what are you explicitly NOT building?)
7. How do you know it's working? (metric, not vibes)
```
Output: a `PRODUCT-BRIEF.md` with answers, risks, and a go/no-go recommendation.
### Mode 2: Founder Review
Reviews your current project through a founder lens:
```
1. Read README, CLAUDE.md, package.json, recent commits
2. Infer: what is this trying to be?
3. Score: product-market fit signals (0-10)
- Usage growth trajectory
- Retention indicators (repeat contributors, return users)
- Revenue signals (pricing page, billing code, Stripe integration)
- Competitive moat (what's hard to copy?)
4. Identify: the one thing that would 10x this
5. Flag: things you're building that don't matter
```
### Mode 3: User Journey Audit
Maps the actual user experience:
```
1. Clone/install the product as a new user
2. Document every friction point (confusing steps, errors, missing docs)
3. Time each step
4. Compare to competitor onboarding
5. Score: time-to-value (how long until the user gets their first win?)
6. Recommend: top 3 fixes for onboarding
```
### Mode 4: Feature Prioritization
When you have 10 ideas and need to pick 2:
```
1. List all candidate features
2. Score each on: impact (1-5) × confidence (1-5) ÷ effort (1-5)
3. Rank by ICE score
4. Apply constraints: runway, team size, dependencies
5. Output: prioritized roadmap with rationale
```
## Output
All modes output actionable docs, not essays. Every recommendation has a specific next step.
## Integration
Pair with:
- `/browser-qa` to verify the user journey audit findings
- `/design-system audit` for visual polish assessment
- `/canary-watch` for post-launch monitoring

skills/remotion/SKILL.md Normal file

@@ -0,0 +1,44 @@
---
name: remotion-best-practices
description: Best practices for Remotion - Video creation in React
origin: ECC
metadata:
tags: remotion, video, react, animation, composition
---
## When to use
Use this skill whenever you are working with Remotion code to obtain domain-specific knowledge.
## How to use
Read individual rule files for detailed explanations and code examples:
- [rules/3d.md](rules/3d.md) - 3D content in Remotion using Three.js and React Three Fiber
- [rules/animations.md](rules/animations.md) - Fundamental animation skills for Remotion
- [rules/assets.md](rules/assets.md) - Importing images, videos, audio, and fonts into Remotion
- [rules/audio.md](rules/audio.md) - Using audio and sound in Remotion - importing, trimming, volume, speed, pitch
- [rules/calculate-metadata.md](rules/calculate-metadata.md) - Dynamically set composition duration, dimensions, and props
- [rules/can-decode.md](rules/can-decode.md) - Check if a video can be decoded by the browser using Mediabunny
- [rules/charts.md](rules/charts.md) - Chart and data visualization patterns for Remotion
- [rules/compositions.md](rules/compositions.md) - Defining compositions, stills, folders, default props and dynamic metadata
- [rules/display-captions.md](rules/display-captions.md) - Displaying captions in Remotion with TikTok-style pages and word highlighting
- [rules/extract-frames.md](rules/extract-frames.md) - Extract frames from videos at specific timestamps using Mediabunny
- [rules/fonts.md](rules/fonts.md) - Loading Google Fonts and local fonts in Remotion
- [rules/get-audio-duration.md](rules/get-audio-duration.md) - Getting the duration of an audio file in seconds with Mediabunny
- [rules/get-video-dimensions.md](rules/get-video-dimensions.md) - Getting the width and height of a video file with Mediabunny
- [rules/get-video-duration.md](rules/get-video-duration.md) - Getting the duration of a video file in seconds with Mediabunny
- [rules/gifs.md](rules/gifs.md) - Displaying GIFs synchronized with Remotion's timeline
- [rules/images.md](rules/images.md) - Embedding images in Remotion using the Img component
- [rules/import-srt-captions.md](rules/import-srt-captions.md) - Importing .srt subtitle files into Remotion using @remotion/captions
- [rules/lottie.md](rules/lottie.md) - Embedding Lottie animations in Remotion
- [rules/measuring-dom-nodes.md](rules/measuring-dom-nodes.md) - Measuring DOM element dimensions in Remotion
- [rules/measuring-text.md](rules/measuring-text.md) - Measuring text dimensions, fitting text to containers, and checking overflow
- [rules/sequencing.md](rules/sequencing.md) - Sequencing patterns for Remotion - delay, trim, limit duration of items
- [rules/tailwind.md](rules/tailwind.md) - Using TailwindCSS in Remotion
- [rules/text-animations.md](rules/text-animations.md) - Typography and text animation patterns for Remotion
- [rules/timing.md](rules/timing.md) - Interpolation curves in Remotion - linear, easing, spring animations
- [rules/transcribe-captions.md](rules/transcribe-captions.md) - Transcribing audio to generate captions in Remotion
- [rules/transitions.md](rules/transitions.md) - Scene transition patterns for Remotion
- [rules/trimming.md](rules/trimming.md) - Trimming patterns for Remotion - cut the beginning or end of animations
- [rules/videos.md](rules/videos.md) - Embedding videos in Remotion - trimming, volume, speed, looping, pitch


@@ -0,0 +1,86 @@
---
name: 3d
description: 3D content in Remotion using Three.js and React Three Fiber.
metadata:
tags: 3d, three, threejs
---
# Using Three.js and React Three Fiber in Remotion
Follow React Three Fiber and Three.js best practices.
Only the following Remotion-specific rules need to be followed:
## Prerequisites
First, the `@remotion/three` package needs to be installed.
If it is not, use the following command:
```bash
npx remotion add @remotion/three # If project uses npm
bunx remotion add @remotion/three # If project uses bun
yarn remotion add @remotion/three # If project uses yarn
pnpm exec remotion add @remotion/three # If project uses pnpm
```
## Using ThreeCanvas
You MUST wrap 3D content in `<ThreeCanvas>` and include proper lighting.
`<ThreeCanvas>` MUST have a `width` and `height` prop.
```tsx
import { ThreeCanvas } from "@remotion/three";
import { useVideoConfig } from "remotion";
const { width, height } = useVideoConfig();
<ThreeCanvas width={width} height={height}>
<ambientLight intensity={0.4} />
<directionalLight position={[5, 5, 5]} intensity={0.8} />
<mesh>
<sphereGeometry args={[1, 32, 32]} />
<meshStandardMaterial color="red" />
</mesh>
</ThreeCanvas>
```
## No animations not driven by `useCurrentFrame()`
Shaders, models, etc. MUST NOT animate by themselves.
No animations are allowed unless they are driven by `useCurrentFrame()`.
Otherwise, it will cause flickering during rendering.
Using `useFrame()` from `@react-three/fiber` is forbidden.
## Animate using `useCurrentFrame()`
Use `useCurrentFrame()` to perform animations.
```tsx
const frame = useCurrentFrame();
const rotationY = frame * 0.02;
<mesh rotation={[0, rotationY, 0]}>
<boxGeometry args={[2, 2, 2]} />
<meshStandardMaterial color="#4a9eff" />
</mesh>
```
## Using `<Sequence>` inside `<ThreeCanvas>`
The `layout` prop of any `<Sequence>` inside a `<ThreeCanvas>` must be set to `none`.
```tsx
import { Sequence } from "remotion";
import { ThreeCanvas } from "@remotion/three";
const { width, height } = useVideoConfig();
<ThreeCanvas width={width} height={height}>
<Sequence layout="none">
<mesh>
<boxGeometry args={[2, 2, 2]} />
<meshStandardMaterial color="#4a9eff" />
</mesh>
</Sequence>
</ThreeCanvas>
```


@@ -0,0 +1,29 @@
---
name: animations
description: Fundamental animation skills for Remotion
metadata:
tags: animations, transitions, frames, useCurrentFrame
---
All animations MUST be driven by the `useCurrentFrame()` hook.
Write animations in seconds and multiply them by the `fps` value from `useVideoConfig()`.
```tsx
import { interpolate, useCurrentFrame, useVideoConfig } from "remotion";
export const FadeIn = () => {
const frame = useCurrentFrame();
const { fps } = useVideoConfig();
const opacity = interpolate(frame, [0, 2 * fps], [0, 1], {
extrapolateRight: 'clamp',
});
return (
<div style={{ opacity }}>Hello World!</div>
);
};
```
CSS transitions or animations are FORBIDDEN - they will not render correctly.
Tailwind animation class names are FORBIDDEN - they will not render correctly.


@@ -0,0 +1,78 @@
---
name: assets
description: Importing images, videos, audio, and fonts into Remotion
metadata:
tags: assets, staticFile, images, fonts, public
---
# Importing assets in Remotion
## The public folder
Place assets in the `public/` folder at your project root.
## Using staticFile()
You MUST use `staticFile()` to reference files from the `public/` folder:
```tsx
import {Img, staticFile} from 'remotion';
export const MyComposition = () => {
return <Img src={staticFile('logo.png')} />;
};
```
The function returns an encoded URL that works correctly when deploying to subdirectories.
## Using with components
**Images:**
```tsx
import {Img, staticFile} from 'remotion';
<Img src={staticFile('photo.png')} />;
```
**Videos:**
```tsx
import {Video} from '@remotion/media';
import {staticFile} from 'remotion';
<Video src={staticFile('clip.mp4')} />;
```
**Audio:**
```tsx
import {Audio} from '@remotion/media';
import {staticFile} from 'remotion';
<Audio src={staticFile('music.mp3')} />;
```
**Fonts:**
```tsx
import {staticFile} from 'remotion';
const fontFamily = new FontFace('MyFont', `url(${staticFile('font.woff2')})`);
await fontFamily.load();
document.fonts.add(fontFamily);
```
## Remote URLs
Remote URLs can be used directly without `staticFile()`:
```tsx
<Img src="https://example.com/image.png" />
<Video src="https://remotion.media/video.mp4" />
```
## Important notes
- Remotion components (`<Img>`, `<Video>`, `<Audio>`) ensure assets are fully loaded before rendering
- Special characters in filenames (`#`, `?`, `&`) are automatically encoded

@@ -0,0 +1,173 @@
import React from 'react';
import {loadFont} from '@remotion/google-fonts/Inter';
import {AbsoluteFill, spring, useCurrentFrame, useVideoConfig} from 'remotion';
const {fontFamily} = loadFont();
const COLOR_BAR = '#D4AF37';
const COLOR_TEXT = '#ffffff';
const COLOR_MUTED = '#888888';
const COLOR_BG = '#0a0a0a';
const COLOR_AXIS = '#333333';
// Ideal composition size: 1280x720
const Title: React.FC<{children: React.ReactNode}> = ({children}) => (
<div style={{textAlign: 'center', marginBottom: 40}}>
<div style={{color: COLOR_TEXT, fontSize: 48, fontWeight: 600}}>
{children}
</div>
</div>
);
const YAxis: React.FC<{steps: number[]; height: number}> = ({
steps,
height,
}) => (
<div
style={{
display: 'flex',
flexDirection: 'column',
justifyContent: 'space-between',
height,
paddingRight: 16,
}}
>
{steps
.slice()
.reverse()
.map((step) => (
<div
key={step}
style={{
color: COLOR_MUTED,
fontSize: 20,
textAlign: 'right',
}}
>
{step.toLocaleString()}
</div>
))}
</div>
);
const Bar: React.FC<{
height: number;
progress: number;
}> = ({height, progress}) => (
<div
style={{
flex: 1,
display: 'flex',
flexDirection: 'column',
justifyContent: 'flex-end',
}}
>
<div
style={{
width: '100%',
height,
backgroundColor: COLOR_BAR,
borderRadius: '8px 8px 0 0',
opacity: progress,
}}
/>
</div>
);
const XAxis: React.FC<{
children: React.ReactNode;
labels: string[];
height: number;
}> = ({children, labels, height}) => (
<div style={{flex: 1, display: 'flex', flexDirection: 'column'}}>
<div
style={{
display: 'flex',
alignItems: 'flex-end',
gap: 16,
height,
borderLeft: `2px solid ${COLOR_AXIS}`,
borderBottom: `2px solid ${COLOR_AXIS}`,
paddingLeft: 16,
}}
>
{children}
</div>
<div
style={{
display: 'flex',
gap: 16,
paddingLeft: 16,
marginTop: 12,
}}
>
{labels.map((label) => (
<div
key={label}
style={{
flex: 1,
textAlign: 'center',
color: COLOR_MUTED,
fontSize: 20,
}}
>
{label}
</div>
))}
</div>
</div>
);
export const MyAnimation = () => {
const frame = useCurrentFrame();
const {fps, height} = useVideoConfig();
const data = [
{month: 'Jan', price: 2039},
{month: 'Mar', price: 2160},
{month: 'May', price: 2327},
{month: 'Jul', price: 2426},
{month: 'Sep', price: 2634},
{month: 'Nov', price: 2672},
];
const minPrice = 2000;
const maxPrice = 2800;
const priceRange = maxPrice - minPrice;
const chartHeight = height - 280;
const yAxisSteps = [2000, 2400, 2800];
return (
<AbsoluteFill
style={{
backgroundColor: COLOR_BG,
padding: 60,
display: 'flex',
flexDirection: 'column',
fontFamily,
}}
>
<Title>Gold Price 2024</Title>
<div style={{display: 'flex', flex: 1}}>
<YAxis steps={yAxisSteps} height={chartHeight} />
<XAxis height={chartHeight} labels={data.map((d) => d.month)}>
{data.map((item, i) => {
const progress = spring({
frame: frame - i * 5 - 10,
fps,
config: {damping: 18, stiffness: 80},
});
const barHeight =
((item.price - minPrice) / priceRange) * chartHeight * progress;
return (
<Bar key={item.month} height={barHeight} progress={progress} />
);
})}
</XAxis>
</div>
</AbsoluteFill>
);
};

@@ -0,0 +1,100 @@
import React from 'react';
import {
  AbsoluteFill,
  interpolate,
  useCurrentFrame,
  useVideoConfig,
} from 'remotion';
const COLOR_BG = '#ffffff';
const COLOR_TEXT = '#000000';
const FULL_TEXT = 'From prompt to motion graphics. This is Remotion.';
const PAUSE_AFTER = 'From prompt to motion graphics.';
const FONT_SIZE = 72;
const FONT_WEIGHT = 700;
const CHAR_FRAMES = 2;
const CURSOR_BLINK_FRAMES = 16;
const PAUSE_SECONDS = 1;
// Ideal composition size: 1280x720
const getTypedText = ({
frame,
fullText,
pauseAfter,
charFrames,
pauseFrames,
}: {
frame: number;
fullText: string;
pauseAfter: string;
charFrames: number;
pauseFrames: number;
}): string => {
const pauseIndex = fullText.indexOf(pauseAfter);
const preLen =
pauseIndex >= 0 ? pauseIndex + pauseAfter.length : fullText.length;
let typedChars = 0;
if (frame < preLen * charFrames) {
typedChars = Math.floor(frame / charFrames);
} else if (frame < preLen * charFrames + pauseFrames) {
typedChars = preLen;
} else {
const postPhase = frame - preLen * charFrames - pauseFrames;
typedChars = Math.min(
fullText.length,
preLen + Math.floor(postPhase / charFrames),
);
}
return fullText.slice(0, typedChars);
};
const Cursor: React.FC<{
frame: number;
blinkFrames: number;
symbol?: string;
}> = ({frame, blinkFrames, symbol = '\u258C'}) => {
const opacity = interpolate(
frame % blinkFrames,
[0, blinkFrames / 2, blinkFrames],
[1, 0, 1],
{extrapolateLeft: 'clamp', extrapolateRight: 'clamp'},
);
return <span style={{opacity}}>{symbol}</span>;
};
export const MyAnimation = () => {
const frame = useCurrentFrame();
const {fps} = useVideoConfig();
const pauseFrames = Math.round(fps * PAUSE_SECONDS);
const typedText = getTypedText({
frame,
fullText: FULL_TEXT,
pauseAfter: PAUSE_AFTER,
charFrames: CHAR_FRAMES,
pauseFrames,
});
return (
<AbsoluteFill
style={{
backgroundColor: COLOR_BG,
}}
>
<div
style={{
color: COLOR_TEXT,
fontSize: FONT_SIZE,
fontWeight: FONT_WEIGHT,
fontFamily: 'sans-serif',
}}
>
<span>{typedText}</span>
<Cursor frame={frame} blinkFrames={CURSOR_BLINK_FRAMES} />
</div>
</AbsoluteFill>
);
};

View File

@@ -0,0 +1,108 @@
import {loadFont} from '@remotion/google-fonts/Inter';
import React from 'react';
import {
AbsoluteFill,
spring,
useCurrentFrame,
useVideoConfig,
} from 'remotion';
/*
* Highlight a word in a sentence with a spring-animated wipe effect.
*/
// Ideal composition size: 1280x720
const COLOR_BG = '#ffffff';
const COLOR_TEXT = '#000000';
const COLOR_HIGHLIGHT = '#A7C7E7';
const FULL_TEXT = 'This is Remotion.';
const HIGHLIGHT_WORD = 'Remotion';
const FONT_SIZE = 72;
const FONT_WEIGHT = 700;
const HIGHLIGHT_START_FRAME = 30;
const HIGHLIGHT_WIPE_DURATION = 18;
const {fontFamily} = loadFont();
const Highlight: React.FC<{
word: string;
color: string;
delay: number;
durationInFrames: number;
}> = ({word, color, delay, durationInFrames}) => {
const frame = useCurrentFrame();
const {fps} = useVideoConfig();
const highlightProgress = spring({
fps,
frame,
config: {damping: 200},
delay,
durationInFrames,
});
const scaleX = Math.max(0, Math.min(1, highlightProgress));
return (
<span style={{position: 'relative', display: 'inline-block'}}>
<span
style={{
position: 'absolute',
left: 0,
right: 0,
top: '50%',
height: '1.05em',
transform: `translateY(-50%) scaleX(${scaleX})`,
transformOrigin: 'left center',
backgroundColor: color,
borderRadius: '0.18em',
zIndex: 0,
}}
/>
<span style={{position: 'relative', zIndex: 1}}>{word}</span>
</span>
);
};
export const MyAnimation = () => {
const highlightIndex = FULL_TEXT.indexOf(HIGHLIGHT_WORD);
const hasHighlight = highlightIndex >= 0;
const preText = hasHighlight ? FULL_TEXT.slice(0, highlightIndex) : FULL_TEXT;
const postText = hasHighlight
? FULL_TEXT.slice(highlightIndex + HIGHLIGHT_WORD.length)
: '';
return (
<AbsoluteFill
style={{
backgroundColor: COLOR_BG,
alignItems: 'center',
justifyContent: 'center',
fontFamily,
}}
>
<div
style={{
color: COLOR_TEXT,
fontSize: FONT_SIZE,
fontWeight: FONT_WEIGHT,
}}
>
{hasHighlight ? (
<>
<span>{preText}</span>
<Highlight
word={HIGHLIGHT_WORD}
color={COLOR_HIGHLIGHT}
delay={HIGHLIGHT_START_FRAME}
durationInFrames={HIGHLIGHT_WIPE_DURATION}
/>
<span>{postText}</span>
</>
) : (
<span>{FULL_TEXT}</span>
)}
</div>
</AbsoluteFill>
);
};

View File

@@ -0,0 +1,172 @@
---
name: audio
description: Using audio and sound in Remotion - importing, trimming, volume, speed, pitch
metadata:
tags: audio, media, trim, volume, speed, loop, pitch, mute, sound, sfx
---
# Using audio in Remotion
## Prerequisites
First, the @remotion/media package needs to be installed.
If it is not installed, use the following command:
```bash
npx remotion add @remotion/media # If project uses npm
bunx remotion add @remotion/media # If project uses bun
yarn remotion add @remotion/media # If project uses yarn
pnpm exec remotion add @remotion/media # If project uses pnpm
```
## Importing Audio
Use `<Audio>` from `@remotion/media` to add audio to your composition.
```tsx
import { Audio } from "@remotion/media";
import { staticFile } from "remotion";
export const MyComposition = () => {
return <Audio src={staticFile("audio.mp3")} />;
};
```
Remote URLs are also supported:
```tsx
<Audio src="https://remotion.media/audio.mp3" />
```
By default, audio plays from the start, at full volume and full length.
Multiple audio tracks can be layered by adding multiple `<Audio>` components.
## Trimming
Use `trimBefore` and `trimAfter` to remove portions of the audio. Values are in frames.
```tsx
const { fps } = useVideoConfig();
return (
<Audio
src={staticFile("audio.mp3")}
trimBefore={2 * fps} // Skip the first 2 seconds
trimAfter={10 * fps} // End at the 10 second mark
/>
);
```
The audio still starts playing at the beginning of the composition - only the specified portion is played.
## Delaying
Wrap the audio in a `<Sequence>` to delay when it starts:
```tsx
import { Sequence, staticFile, useVideoConfig } from "remotion";
import { Audio } from "@remotion/media";
const { fps } = useVideoConfig();
return (
<Sequence from={1 * fps}>
<Audio src={staticFile("audio.mp3")} />
</Sequence>
);
```
The audio will start playing after 1 second.
## Volume
Set a static volume (0 to 1):
```tsx
<Audio src={staticFile("audio.mp3")} volume={0.5} />
```
Or use a callback for dynamic volume based on the current frame:
```tsx
import { interpolate, useVideoConfig } from "remotion";
const { fps } = useVideoConfig();
return (
<Audio
src={staticFile("audio.mp3")}
volume={(f) =>
interpolate(f, [0, 1 * fps], [0, 1], { extrapolateRight: "clamp" })
}
/>
);
```
The value of `f` starts at 0 when the audio begins to play - it is not the composition frame.
## Muting
Use `muted` to silence the audio. It can be set dynamically:
```tsx
const frame = useCurrentFrame();
const { fps } = useVideoConfig();
return (
<Audio
src={staticFile("audio.mp3")}
muted={frame >= 2 * fps && frame <= 4 * fps} // Mute between 2s and 4s
/>
);
```
## Speed
Use `playbackRate` to change the playback speed:
```tsx
<Audio src={staticFile("audio.mp3")} playbackRate={2} /> {/* 2x speed */}
<Audio src={staticFile("audio.mp3")} playbackRate={0.5} /> {/* Half speed */}
```
Reverse playback is not supported.
## Looping
Use `loop` to loop the audio indefinitely:
```tsx
<Audio src={staticFile("audio.mp3")} loop />
```
Use `loopVolumeCurveBehavior` to control how the frame count behaves when looping:
- `"repeat"`: Frame count resets to 0 each loop (default)
- `"extend"`: Frame count continues incrementing
```tsx
<Audio
src={staticFile("audio.mp3")}
loop
loopVolumeCurveBehavior="extend"
volume={(f) => interpolate(f, [0, 300], [1, 0])} // Fade out over multiple loops
/>
```
## Pitch
Use `toneFrequency` to adjust the pitch without affecting speed. Values range from 0.01 to 2:
```tsx
<Audio
src={staticFile("audio.mp3")}
toneFrequency={1.5} // Higher pitch
/>
<Audio
src={staticFile("audio.mp3")}
toneFrequency={0.8} // Lower pitch
/>
```
Pitch shifting only works during server-side rendering, not in the Remotion Studio preview or in the `<Player />`.

@@ -0,0 +1,104 @@
---
name: calculate-metadata
description: Dynamically set composition duration, dimensions, and props
metadata:
tags: calculateMetadata, duration, dimensions, props, dynamic
---
# Using calculateMetadata
Use `calculateMetadata` on a `<Composition>` to dynamically set duration, dimensions, and transform props before rendering.
```tsx
<Composition
  id="MyComp"
  component={MyComponent}
  durationInFrames={300}
  fps={30}
  width={1920}
  height={1080}
  defaultProps={{videoSrc: 'https://remotion.media/video.mp4'}}
  calculateMetadata={calculateMetadata}
/>
```
## Setting duration based on a video
Use the `getMediaMetadata()` function from the mediabunny/metadata skill to get the video duration:
```tsx
import {CalculateMetadataFunction} from 'remotion';
import {getMediaMetadata} from '../get-media-metadata';
const calculateMetadata: CalculateMetadataFunction<Props> = async ({props}) => {
const {durationInSeconds} = await getMediaMetadata(props.videoSrc);
return {
durationInFrames: Math.ceil(durationInSeconds * 30),
};
};
```
## Matching dimensions of a video
```tsx
const calculateMetadata: CalculateMetadataFunction<Props> = async ({props}) => {
const {durationInSeconds, dimensions} = await getMediaMetadata(props.videoSrc);
return {
durationInFrames: Math.ceil(durationInSeconds * 30),
width: dimensions?.width ?? 1920,
height: dimensions?.height ?? 1080,
};
};
```
## Setting duration based on multiple videos
```tsx
const calculateMetadata: CalculateMetadataFunction<Props> = async ({props}) => {
const metadataPromises = props.videos.map((video) => getMediaMetadata(video.src));
const allMetadata = await Promise.all(metadataPromises);
const totalDuration = allMetadata.reduce((sum, meta) => sum + meta.durationInSeconds, 0);
return {
durationInFrames: Math.ceil(totalDuration * 30),
};
};
```
## Setting a default outName
Set the default output filename based on props:
```tsx
const calculateMetadata: CalculateMetadataFunction<Props> = async ({props}) => {
return {
defaultOutName: `video-${props.id}.mp4`,
};
};
```
## Transforming props
Fetch data or transform props before rendering:
```tsx
const calculateMetadata: CalculateMetadataFunction<Props> = async ({props, abortSignal}) => {
const response = await fetch(props.dataUrl, {signal: abortSignal});
const data = await response.json();
return {
props: {
...props,
fetchedData: data,
},
};
};
```
The `abortSignal` cancels stale requests when props change in the Studio.
## Return value
All fields are optional. Returned values override the `<Composition>` props:
- `durationInFrames`: Number of frames
- `width`: Composition width in pixels
- `height`: Composition height in pixels
- `fps`: Frames per second
- `props`: Transformed props passed to the component
- `defaultOutName`: Default output filename
- `defaultCodec`: Default codec for rendering

View File

@@ -0,0 +1,75 @@
---
name: can-decode
description: Check if a video can be decoded by the browser using Mediabunny
metadata:
tags: decode, validation, video, audio, compatibility, browser
---
# Checking if a video can be decoded
Use Mediabunny to check if a video can be decoded by the browser before attempting to play it.
## The `canDecode()` function
This function can be copy-pasted into any project.
```tsx
import { Input, ALL_FORMATS, UrlSource } from "mediabunny";
export const canDecode = async (src: string) => {
const input = new Input({
formats: ALL_FORMATS,
source: new UrlSource(src, {
getRetryDelay: () => null,
}),
});
try {
await input.getFormat();
} catch {
return false;
}
const videoTrack = await input.getPrimaryVideoTrack();
if (videoTrack && !(await videoTrack.canDecode())) {
return false;
}
const audioTrack = await input.getPrimaryAudioTrack();
if (audioTrack && !(await audioTrack.canDecode())) {
return false;
}
return true;
};
```
## Usage
```tsx
const src = "https://remotion.media/video.mp4";
const isDecodable = await canDecode(src);
if (isDecodable) {
console.log("Video can be decoded");
} else {
console.log("Video cannot be decoded by this browser");
}
```
## Using with Blob
For file uploads or drag-and-drop, use `BlobSource`:
```tsx
import { Input, ALL_FORMATS, BlobSource } from "mediabunny";
export const canDecodeBlob = async (blob: Blob) => {
const input = new Input({
formats: ALL_FORMATS,
source: new BlobSource(blob),
});
// Same validation logic as above
};
```

@@ -0,0 +1,58 @@
---
name: charts
description: Chart and data visualization patterns for Remotion. Use when creating bar charts, pie charts, histograms, progress bars, or any data-driven animations.
metadata:
tags: charts, data, visualization, bar-chart, pie-chart, graphs
---
# Charts in Remotion
You can create bar charts in Remotion using regular React code - HTML and SVG are allowed, as well as D3.js.
## Only animate via `useCurrentFrame()`
Disable all animations from third-party libraries - they will cause flickering during rendering.
Instead, drive all animations from `useCurrentFrame()`.
## Bar Chart Animations
See [Bar Chart Example](assets/charts/bar-chart.tsx) for a basic example implementation.
### Staggered Bars
You can animate the height of the bars and stagger them like this:
```tsx
const STAGGER_DELAY = 5;
const frame = useCurrentFrame();
const {fps} = useVideoConfig();
const bars = data.map((item, i) => {
const delay = i * STAGGER_DELAY;
const height = spring({
frame,
fps,
delay,
config: {damping: 200},
});
return <div style={{height: height * item.value}} />;
});
```
## Pie Chart Animation
Animate segments using stroke-dashoffset, starting from 12 o'clock.
```tsx
const frame = useCurrentFrame();
const {fps} = useVideoConfig();
const progress = interpolate(frame, [0, 100], [0, 1]);
const circumference = 2 * Math.PI * radius;
const segmentLength = (value / total) * circumference;
const offset = interpolate(progress, [0, 1], [segmentLength, 0]);
<circle
  r={radius}
  cx={center}
  cy={center}
  fill="none"
  stroke={color}
  strokeWidth={strokeWidth}
  strokeDasharray={`${segmentLength} ${circumference}`}
  strokeDashoffset={offset}
  transform={`rotate(-90 ${center} ${center})`}
/>;
```
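The stroke-dash math above can also be pulled out into pure helpers (a sketch, not part of the skill's assets) so the values are easy to unit-test outside a component:

```typescript
// Sketch: pure helpers for the pie-segment stroke-dash math above.
// `progress` runs from 0 (segment hidden) to 1 (segment fully drawn).
const segmentLength = (
  value: number,
  total: number,
  radius: number,
): number => {
  const circumference = 2 * Math.PI * radius;
  return (value / total) * circumference;
};

const segmentOffset = (length: number, progress: number): number =>
  // Equivalent to interpolate(progress, [0, 1], [length, 0])
  length * (1 - progress);
```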

@@ -0,0 +1,146 @@
---
name: compositions
description: Defining compositions, stills, folders, default props and dynamic metadata
metadata:
tags: composition, still, folder, props, metadata
---
A `<Composition>` defines the component, width, height, fps and duration of a renderable video.
It normally is placed in the `src/Root.tsx` file.
```tsx
import { Composition } from "remotion";
import { MyComposition } from "./MyComposition";
export const RemotionRoot = () => {
return (
<Composition
id="MyComposition"
component={MyComposition}
durationInFrames={100}
fps={30}
width={1080}
height={1080}
/>
);
};
```
## Default Props
Pass `defaultProps` to provide initial values for your component.
Values must be JSON-serializable (`Date`, `Map`, `Set`, and `staticFile()` are supported).
```tsx
import { Composition } from "remotion";
import { MyComposition, MyCompositionProps } from "./MyComposition";
export const RemotionRoot = () => {
return (
<Composition
id="MyComposition"
component={MyComposition}
durationInFrames={100}
fps={30}
width={1080}
height={1080}
defaultProps={{
title: "Hello World",
color: "#ff0000",
} satisfies MyCompositionProps}
/>
);
};
```
Use `type` declarations for props rather than `interface` to ensure `defaultProps` type safety.
## Folders
Use `<Folder>` to organize compositions in the sidebar.
Folder names can only contain letters, numbers, and hyphens.
```tsx
import { Composition, Folder } from "remotion";
export const RemotionRoot = () => {
return (
<>
<Folder name="Marketing">
<Composition id="Promo" /* ... */ />
<Composition id="Ad" /* ... */ />
</Folder>
<Folder name="Social">
<Folder name="Instagram">
<Composition id="Story" /* ... */ />
<Composition id="Reel" /* ... */ />
</Folder>
</Folder>
</>
);
};
```
## Stills
Use `<Still>` for single-frame images. It does not require `durationInFrames` or `fps`.
```tsx
import { Still } from "remotion";
import { Thumbnail } from "./Thumbnail";
export const RemotionRoot = () => {
return (
<Still
id="Thumbnail"
component={Thumbnail}
width={1280}
height={720}
/>
);
};
```
## Calculate Metadata
Use `calculateMetadata` to make dimensions, duration, or props dynamic based on data.
```tsx
import { Composition, CalculateMetadataFunction } from "remotion";
import { MyComposition, MyCompositionProps } from "./MyComposition";
const calculateMetadata: CalculateMetadataFunction<MyCompositionProps> = async ({
props,
abortSignal,
}) => {
const data = await fetch(`https://api.example.com/video/${props.videoId}`, {
signal: abortSignal,
}).then((res) => res.json());
return {
durationInFrames: Math.ceil(data.duration * 30),
props: {
...props,
videoUrl: data.url,
},
};
};
export const RemotionRoot = () => {
return (
<Composition
id="MyComposition"
component={MyComposition}
durationInFrames={100} // Placeholder, will be overridden
fps={30}
width={1080}
height={1080}
defaultProps={{ videoId: "abc123" }}
calculateMetadata={calculateMetadata}
/>
);
};
```
The function can return `props`, `durationInFrames`, `width`, `height`, `fps`, and codec-related defaults. It runs once before rendering begins.

@@ -0,0 +1,126 @@
---
name: display-captions
description: Displaying captions in Remotion with TikTok-style pages and word highlighting
metadata:
tags: captions, subtitles, display, tiktok, highlight
---
# Displaying captions in Remotion
This guide explains how to display captions in Remotion, assuming you already have captions in the `Caption` format.
## Prerequisites
First, the @remotion/captions package needs to be installed.
If it is not installed, use the following command:
```bash
npx remotion add @remotion/captions # If project uses npm
bunx remotion add @remotion/captions # If project uses bun
yarn remotion add @remotion/captions # If project uses yarn
pnpm exec remotion add @remotion/captions # If project uses pnpm
```
## Creating pages
Use `createTikTokStyleCaptions()` to group captions into pages. The `combineTokensWithinMilliseconds` option controls how many words appear at once:
```tsx
import {useMemo} from 'react';
import {createTikTokStyleCaptions} from '@remotion/captions';
import type {Caption} from '@remotion/captions';
// How often captions should switch (in milliseconds)
// Higher values = more words per page
// Lower values = fewer words (more word-by-word)
const SWITCH_CAPTIONS_EVERY_MS = 1200;
const {pages} = useMemo(() => {
return createTikTokStyleCaptions({
captions,
combineTokensWithinMilliseconds: SWITCH_CAPTIONS_EVERY_MS,
});
}, [captions]);
```
## Rendering with Sequences
Map over the pages and render each one in a `<Sequence>`. Calculate the start frame and duration from the page timing:
```tsx
import {Sequence, useVideoConfig, AbsoluteFill} from 'remotion';
import type {TikTokPage} from '@remotion/captions';
const CaptionedContent: React.FC = () => {
const {fps} = useVideoConfig();
return (
<AbsoluteFill>
{pages.map((page, index) => {
const nextPage = pages[index + 1] ?? null;
const startFrame = (page.startMs / 1000) * fps;
const endFrame = Math.min(
nextPage ? (nextPage.startMs / 1000) * fps : Infinity,
startFrame + (SWITCH_CAPTIONS_EVERY_MS / 1000) * fps,
);
const durationInFrames = endFrame - startFrame;
if (durationInFrames <= 0) {
return null;
}
return (
<Sequence
key={index}
from={startFrame}
durationInFrames={durationInFrames}
>
<CaptionPage page={page} />
</Sequence>
);
})}
</AbsoluteFill>
);
};
```
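The frame arithmetic above can be isolated into a pure helper (a sketch, not part of the skill) so the page timing is testable on its own; `maxDurationMs` plays the role of `SWITCH_CAPTIONS_EVERY_MS`:

```typescript
// Sketch: compute a caption page's start frame and duration, mirroring
// the logic above. A page ends at the next page's start, capped at the
// maximum page duration; the last page has `nextStartMs: null`.
const pageTiming = ({
  startMs,
  nextStartMs,
  maxDurationMs,
  fps,
}: {
  startMs: number;
  nextStartMs: number | null;
  maxDurationMs: number;
  fps: number;
}) => {
  const startFrame = (startMs / 1000) * fps;
  const endFrame = Math.min(
    nextStartMs === null ? Infinity : (nextStartMs / 1000) * fps,
    startFrame + (maxDurationMs / 1000) * fps,
  );
  return { startFrame, durationInFrames: endFrame - startFrame };
};
```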
## Word highlighting
A caption page contains `tokens` which you can use to highlight the currently spoken word:
```tsx
import {AbsoluteFill, useCurrentFrame, useVideoConfig} from 'remotion';
import type {TikTokPage} from '@remotion/captions';
const HIGHLIGHT_COLOR = '#39E508';
const CaptionPage: React.FC<{page: TikTokPage}> = ({page}) => {
const frame = useCurrentFrame();
const {fps} = useVideoConfig();
// Current time relative to the start of the sequence
const currentTimeMs = (frame / fps) * 1000;
// Convert to absolute time by adding the page start
const absoluteTimeMs = page.startMs + currentTimeMs;
return (
<AbsoluteFill style={{justifyContent: 'center', alignItems: 'center'}}>
<div style={{fontSize: 80, fontWeight: 'bold', whiteSpace: 'pre'}}>
{page.tokens.map((token) => {
const isActive =
token.fromMs <= absoluteTimeMs && token.toMs > absoluteTimeMs;
return (
<span
key={token.fromMs}
style={{color: isActive ? HIGHLIGHT_COLOR : 'white'}}
>
{token.text}
</span>
);
})}
</div>
</AbsoluteFill>
);
};
```
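The active-token check above can be expressed as a small pure function (a sketch, not part of the skill) that is straightforward to test; all times are in milliseconds:

```typescript
// Sketch: a token is active while the absolute time lies in [fromMs, toMs).
type Token = { fromMs: number; toMs: number };

const isTokenActive = (token: Token, absoluteTimeMs: number): boolean =>
  token.fromMs <= absoluteTimeMs && token.toMs > absoluteTimeMs;
```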

@@ -0,0 +1,229 @@
---
name: extract-frames
description: Extract frames from videos at specific timestamps using Mediabunny
metadata:
tags: frames, extract, video, thumbnail, filmstrip, canvas
---
# Extracting frames from videos
Use Mediabunny to extract frames from videos at specific timestamps. This is useful for generating thumbnails, filmstrips, or processing individual frames.
## The `extractFrames()` function
This function can be copy-pasted into any project.
```tsx
import {
ALL_FORMATS,
Input,
UrlSource,
VideoSample,
VideoSampleSink,
} from "mediabunny";
type Options = {
track: { width: number; height: number };
container: string;
durationInSeconds: number | null;
};
export type ExtractFramesTimestampsInSecondsFn = (
options: Options
) => Promise<number[]> | number[];
export type ExtractFramesProps = {
src: string;
timestampsInSeconds: number[] | ExtractFramesTimestampsInSecondsFn;
onVideoSample: (sample: VideoSample) => void;
signal?: AbortSignal;
};
export async function extractFrames({
src,
timestampsInSeconds,
onVideoSample,
signal,
}: ExtractFramesProps): Promise<void> {
using input = new Input({
formats: ALL_FORMATS,
source: new UrlSource(src),
});
const [durationInSeconds, format, videoTrack] = await Promise.all([
input.computeDuration(),
input.getFormat(),
input.getPrimaryVideoTrack(),
]);
if (!videoTrack) {
throw new Error("No video track found in the input");
}
if (signal?.aborted) {
throw new Error("Aborted");
}
const timestamps =
typeof timestampsInSeconds === "function"
? await timestampsInSeconds({
track: {
width: videoTrack.displayWidth,
height: videoTrack.displayHeight,
},
container: format.name,
durationInSeconds,
})
: timestampsInSeconds;
if (timestamps.length === 0) {
return;
}
if (signal?.aborted) {
throw new Error("Aborted");
}
const sink = new VideoSampleSink(videoTrack);
for await (using videoSample of sink.samplesAtTimestamps(timestamps)) {
if (signal?.aborted) {
break;
}
if (!videoSample) {
continue;
}
onVideoSample(videoSample);
}
}
```
## Basic usage
Extract frames at specific timestamps:
```tsx
await extractFrames({
src: "https://remotion.media/video.mp4",
timestampsInSeconds: [0, 1, 2, 3, 4],
onVideoSample: (sample) => {
const canvas = document.createElement("canvas");
canvas.width = sample.displayWidth;
canvas.height = sample.displayHeight;
const ctx = canvas.getContext("2d");
sample.draw(ctx!, 0, 0);
},
});
```
## Creating a filmstrip
Use a callback function to dynamically calculate timestamps based on video metadata:
```tsx
const canvasWidth = 500;
const canvasHeight = 80;
const fromSeconds = 0;
const toSeconds = 10;
await extractFrames({
src: "https://remotion.media/video.mp4",
timestampsInSeconds: async ({ track, durationInSeconds }) => {
const aspectRatio = track.width / track.height;
const amountOfFramesFit = Math.ceil(
canvasWidth / (canvasHeight * aspectRatio)
);
const segmentDuration = toSeconds - fromSeconds;
const timestamps: number[] = [];
for (let i = 0; i < amountOfFramesFit; i++) {
timestamps.push(
fromSeconds + (segmentDuration / amountOfFramesFit) * (i + 0.5)
);
}
return timestamps;
},
onVideoSample: (sample) => {
console.log(`Frame at ${sample.timestamp}s`);
const canvas = document.createElement("canvas");
canvas.width = sample.displayWidth;
canvas.height = sample.displayHeight;
const ctx = canvas.getContext("2d");
sample.draw(ctx!, 0, 0);
},
});
```
## Cancellation with AbortSignal
Cancel frame extraction after a timeout:
```tsx
const controller = new AbortController();
setTimeout(() => controller.abort(), 5000);
try {
await extractFrames({
src: "https://remotion.media/video.mp4",
timestampsInSeconds: [0, 1, 2, 3, 4],
onVideoSample: (sample) => {
using frame = sample;
const canvas = document.createElement("canvas");
canvas.width = frame.displayWidth;
canvas.height = frame.displayHeight;
const ctx = canvas.getContext("2d");
frame.draw(ctx!, 0, 0);
},
signal: controller.signal,
});
console.log("Frame extraction complete!");
} catch (error) {
console.error("Frame extraction was aborted or failed:", error);
}
```
## Timeout with Promise.race
```tsx
const controller = new AbortController();
const timeoutPromise = new Promise<never>((_, reject) => {
const timeoutId = setTimeout(() => {
controller.abort();
reject(new Error("Frame extraction timed out after 10 seconds"));
}, 10000);
controller.signal.addEventListener("abort", () => clearTimeout(timeoutId), {
once: true,
});
});
try {
await Promise.race([
extractFrames({
src: "https://remotion.media/video.mp4",
timestampsInSeconds: [0, 1, 2, 3, 4],
onVideoSample: (sample) => {
using frame = sample;
const canvas = document.createElement("canvas");
canvas.width = frame.displayWidth;
canvas.height = frame.displayHeight;
const ctx = canvas.getContext("2d");
frame.draw(ctx!, 0, 0);
},
signal: controller.signal,
}),
timeoutPromise,
]);
console.log("Frame extraction complete!");
} catch (error) {
console.error("Frame extraction was aborted or failed:", error);
}
```

@@ -0,0 +1,152 @@
---
name: fonts
description: Loading Google Fonts and local fonts in Remotion
metadata:
tags: fonts, google-fonts, typography, text
---
# Using fonts in Remotion
## Google Fonts with @remotion/google-fonts
The recommended way to use Google Fonts. It's type-safe and automatically blocks rendering until the font is ready.
### Prerequisites
First, the @remotion/google-fonts package needs to be installed.
If it is not installed, use the following command:
```bash
npx remotion add @remotion/google-fonts # If project uses npm
bunx remotion add @remotion/google-fonts # If project uses bun
yarn remotion add @remotion/google-fonts # If project uses yarn
pnpm exec remotion add @remotion/google-fonts # If project uses pnpm
```
```tsx
import { loadFont } from "@remotion/google-fonts/Lobster";
const { fontFamily } = loadFont();
export const MyComposition = () => {
return <div style={{ fontFamily }}>Hello World</div>;
};
```
Preferably, specify only the weights and subsets you need to reduce file size:
```tsx
import { loadFont } from "@remotion/google-fonts/Roboto";
const { fontFamily } = loadFont("normal", {
weights: ["400", "700"],
subsets: ["latin"],
});
```
### Waiting for font to load
Use `waitUntilDone()` if you need to know when the font is ready:
```tsx
import { loadFont } from "@remotion/google-fonts/Lobster";
const { fontFamily, waitUntilDone } = loadFont();
await waitUntilDone();
```
## Local fonts with @remotion/fonts
For local font files, use the `@remotion/fonts` package.
### Prerequisites
First, install @remotion/fonts:
```bash
npx remotion add @remotion/fonts # If project uses npm
bunx remotion add @remotion/fonts # If project uses bun
yarn remotion add @remotion/fonts # If project uses yarn
pnpm exec remotion add @remotion/fonts # If project uses pnpm
```
### Loading a local font
Place your font file in the `public/` folder and use `loadFont()`:
```tsx
import { loadFont } from "@remotion/fonts";
import { staticFile } from "remotion";
await loadFont({
family: "MyFont",
url: staticFile("MyFont-Regular.woff2"),
});
export const MyComposition = () => {
return <div style={{ fontFamily: "MyFont" }}>Hello World</div>;
};
```
### Loading multiple weights
Load each weight separately with the same family name:
```tsx
import { loadFont } from "@remotion/fonts";
import { staticFile } from "remotion";
await Promise.all([
loadFont({
family: "Inter",
url: staticFile("Inter-Regular.woff2"),
weight: "400",
}),
loadFont({
family: "Inter",
url: staticFile("Inter-Bold.woff2"),
weight: "700",
}),
]);
```
### Available options
```tsx
loadFont({
family: "MyFont", // Required: name to use in CSS
url: staticFile("font.woff2"), // Required: font file URL
format: "woff2", // Optional: auto-detected from extension
weight: "400", // Optional: font weight
style: "normal", // Optional: normal or italic
display: "block", // Optional: font-display behavior
});
```
## Using in components
Call `loadFont()` at the top level of your component or in a separate file that's imported early:
```tsx
import { loadFont } from "@remotion/google-fonts/Montserrat";
const { fontFamily } = loadFont("normal", {
weights: ["400", "700"],
subsets: ["latin"],
});
export const Title: React.FC<{ text: string }> = ({ text }) => {
return (
<h1
style={{
fontFamily,
fontSize: 80,
fontWeight: "bold",
}}
>
{text}
</h1>
);
};
```

Some files were not shown because too many files have changed in this diff.