feat: add orchestration workflows and harness skills

Affaan Mustafa
2026-03-12 08:50:24 -07:00
parent 51eec12764
commit cb43402d7d
46 changed files with 5618 additions and 80 deletions


@@ -0,0 +1,333 @@
---
name: claude-api
description: Anthropic Claude API patterns for Python and TypeScript. Covers Messages API, streaming, tool use, vision, extended thinking, batches, prompt caching, and Claude Agent SDK. Use when building applications with the Claude API or Anthropic SDKs.
origin: ECC
---
# Claude API
Build applications with the Anthropic Claude API and SDKs.
## When to Activate
- Building applications that call the Claude API
- Code imports `anthropic` (Python) or `@anthropic-ai/sdk` (TypeScript)
- User asks about Claude API patterns, tool use, streaming, or vision
- Implementing agent workflows with Claude Agent SDK
- Optimizing API costs, token usage, or latency
## Model Selection
| Model | ID | Best For |
|-------|-----|----------|
| Opus 4.6 | `claude-opus-4-6` | Complex reasoning, architecture, research |
| Sonnet 4.6 | `claude-sonnet-4-6` | Balanced coding, most development tasks |
| Haiku 4.5 | `claude-haiku-4-5-20251001` | Fast responses, high-volume, cost-sensitive |
Default to Sonnet 4.6 unless the task requires deep reasoning (Opus) or speed/cost optimization (Haiku).
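When routing requests in code, a small helper keeps the choice explicit. A minimal sketch using the IDs from the table above; the task categories are illustrative:
```python
# Task categories are illustrative; model IDs come from the table above.
MODELS = {
    "deep_reasoning": "claude-opus-4-6",
    "coding": "claude-sonnet-4-6",
    "high_volume": "claude-haiku-4-5-20251001",
}

def pick_model(task_type: str) -> str:
    """Return the model ID for a task type, defaulting to Sonnet."""
    return MODELS.get(task_type, "claude-sonnet-4-6")
```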
## Python SDK
### Installation
```bash
pip install anthropic
```
### Basic Message
```python
import anthropic
client = anthropic.Anthropic() # reads ANTHROPIC_API_KEY from env
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain async/await in Python"}
    ]
)
print(message.content[0].text)
```
### Streaming
```python
with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about coding"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
### System Prompt
```python
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="You are a senior Python developer. Be concise.",
    messages=[{"role": "user", "content": "Review this function"}]
)
```
## TypeScript SDK
### Installation
```bash
npm install @anthropic-ai/sdk
```
### Basic Message
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic(); // reads ANTHROPIC_API_KEY from env
const message = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Explain async/await in TypeScript" }
  ],
});
console.log(message.content[0].text);
```
### Streaming
```typescript
const stream = client.messages.stream({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a haiku" }],
});
for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
```
## Tool Use
Define tools and let Claude call them:
```python
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
]
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in SF?"}]
)
# Handle tool use response
for block in message.content:
    if block.type == "tool_use":
        # Execute the tool with block.input
        result = get_weather(**block.input)
        # Send result back
        follow_up = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=1024,
            tools=tools,
            messages=[
                {"role": "user", "content": "What's the weather in SF?"},
                {"role": "assistant", "content": message.content},
                {"role": "user", "content": [
                    {"type": "tool_result", "tool_use_id": block.id, "content": str(result)}
                ]}
            ]
        )
```
## Vision
Send images for analysis:
```python
import base64
with open("diagram.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_data}},
            {"type": "text", "text": "Describe this diagram"}
        ]
    }]
)
```
## Extended Thinking
For complex reasoning tasks:
```python
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000
    },
    messages=[{"role": "user", "content": "Solve this math problem step by step..."}]
)
for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking}")
    elif block.type == "text":
        print(f"Answer: {block.text}")
```
## Prompt Caching
Cache large system prompts or context to reduce costs:
```python
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=[
        {"type": "text", "text": large_system_prompt, "cache_control": {"type": "ephemeral"}}
    ],
    messages=[{"role": "user", "content": "Question about the cached context"}]
)
# Check cache usage
print(f"Cache read: {message.usage.cache_read_input_tokens}")
print(f"Cache creation: {message.usage.cache_creation_input_tokens}")
```
## Batches API
Process large volumes asynchronously at a 50% cost reduction:
```python
import time

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"request-{i}",
            "params": {
                "model": "claude-sonnet-4-6",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}]
            }
        }
        for i, prompt in enumerate(prompts)
    ]
)
# Poll for completion
while True:
    status = client.messages.batches.retrieve(batch.id)
    if status.processing_status == "ended":
        break
    time.sleep(30)
# Get results
for result in client.messages.batches.results(batch.id):
    print(result.result.message.content[0].text)
```
## Claude Agent SDK
Build multi-step agents. The Agent SDK API surface may change (check the official docs); the sketch below shows the underlying agentic loop using the base SDK:
```python
import anthropic

# Define tools as functions
tools = [{
    "name": "search_codebase",
    "description": "Search the codebase for relevant code",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
    }
}]
# Run an agentic loop with tool use
client = anthropic.Anthropic()
messages = [{"role": "user", "content": "Review the auth module for security issues"}]
while True:
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=4096,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason == "end_turn":
        break
    # Handle tool calls and continue the loop
    messages.append({"role": "assistant", "content": response.content})
    # ... execute tools and append tool_result messages
```
## Cost Optimization
| Strategy | Savings | When to Use |
|----------|---------|-------------|
| Prompt caching | Up to 90% on cached tokens | Repeated system prompts or context |
| Batches API | 50% | Non-time-sensitive bulk processing |
| Haiku instead of Sonnet | ~75% | Simple tasks, classification, extraction |
| Shorter max_tokens | Variable | When you know output will be short |
| Streaming | None (same cost) | Better UX, same price |
## Error Handling
```python
import time

from anthropic import APIError, RateLimitError, APIConnectionError

try:
    message = client.messages.create(...)
except RateLimitError:
    # Back off and retry
    time.sleep(60)
except APIConnectionError:
    # Network issue, retry with backoff
    pass
except APIError as e:
    print(f"API error: {e}")
```
## Environment Setup
```bash
# Required
export ANTHROPIC_API_KEY="your-api-key-here"
# Optional: set default model
export ANTHROPIC_MODEL="claude-sonnet-4-6"
```
Never hardcode API keys. Always use environment variables.


@@ -0,0 +1,7 @@
interface:
  display_name: "Claude API"
  short_description: "Anthropic Claude API patterns and SDKs"
  brand_color: "#D97706"
  default_prompt: "Build applications with the Claude API using Messages, tool use, streaming, and Agent SDK"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,187 @@
---
name: crosspost
description: Multi-platform content distribution across X, LinkedIn, Threads, and Bluesky. Adapts content per platform using content-engine patterns. Never posts identical content cross-platform. Use when the user wants to distribute content across social platforms.
origin: ECC
---
# Crosspost
Distribute content across multiple social platforms with platform-native adaptation.
## When to Activate
- User wants to post content to multiple platforms
- Publishing announcements, launches, or updates across social media
- Repurposing a post from one platform to others
- User says "crosspost", "post everywhere", "share on all platforms", or "distribute this"
## Core Rules
1. **Never post identical content cross-platform.** Each platform gets a native adaptation.
2. **Primary platform first.** Post to the main platform, then adapt for others.
3. **Respect platform conventions.** Length limits, formatting, link handling all differ.
4. **One idea per post.** If the source content has multiple ideas, split across posts.
5. **Attribution matters.** If crossposting someone else's content, credit the source.
## Platform Specifications
| Platform | Max Length | Link Handling | Hashtags | Media |
|----------|-----------|---------------|----------|-------|
| X | 280 chars (4000 for Premium) | Counted in length | Minimal (1-2 max) | Images, video, GIFs |
| LinkedIn | 3000 chars | Not counted in length | 3-5 relevant | Images, video, docs, carousels |
| Threads | 500 chars | Separate link attachment | None typical | Images, video |
| Bluesky | 300 chars | Via facets (rich text) | None (use feeds) | Images |
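A small pre-flight check against these limits catches overruns before posting. A minimal sketch using the limits from the table (standard X account assumed; raw character count, so X's link shortening is not modeled):
```python
# Limits from the table above; standard (non-Premium) X account assumed.
PLATFORM_LIMITS = {"x": 280, "linkedin": 3000, "threads": 500, "bluesky": 300}

def check_length(platform: str, text: str) -> bool:
    """Return True if text fits within the platform's character limit."""
    limit = PLATFORM_LIMITS[platform.lower()]
    if len(text) > limit:
        print(f"{platform}: {len(text)} chars exceeds the {limit}-char limit")
        return False
    return True
```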
## Workflow
### Step 1: Create Source Content
Start with the core idea. Use `content-engine` skill for high-quality drafts:
- Identify the single core message
- Determine the primary platform (where the audience is biggest)
- Draft the primary platform version first
### Step 2: Identify Target Platforms
Ask the user or determine from context:
- Which platforms to target
- Priority order (primary gets the best version)
- Any platform-specific requirements (e.g., LinkedIn needs professional tone)
### Step 3: Adapt Per Platform
For each target platform, transform the content:
**X adaptation:**
- Open with a hook, not a summary
- Cut to the core insight fast
- Keep links out of main body when possible
- Use thread format for longer content
**LinkedIn adaptation:**
- Strong first line (visible before "see more")
- Short paragraphs with line breaks
- Frame around lessons, results, or professional takeaways
- More explicit context than X (LinkedIn audience needs framing)
**Threads adaptation:**
- Conversational, casual tone
- Shorter than LinkedIn, less compressed than X
- Visual-first if possible
**Bluesky adaptation:**
- Direct and concise (300 char limit)
- Community-oriented tone
- Use feeds/lists for topic targeting instead of hashtags
### Step 4: Post Primary Platform
Post to the primary platform first:
- Use `x-api` skill for X
- Use platform-specific APIs or tools for others
- Capture the post URL for cross-referencing
### Step 5: Post to Secondary Platforms
Post adapted versions to remaining platforms:
- Stagger timing (not all at once — 30-60 min gaps; see the sketch after this list)
- Include cross-platform references where appropriate ("longer thread on X" etc.)
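A sketch of the staggered loop, assuming a `post_to(platform, text)` helper you implement with each platform's API (see Manual Posting below):
```python
import random
import time

# post_to(platform, text) is a hypothetical helper built on each platform's API.
def crosspost_staggered(versions: dict[str, str], primary: str) -> None:
    post_to(primary, versions[primary])  # primary platform gets the best version first
    for platform, text in versions.items():
        if platform == primary:
            continue
        time.sleep(random.randint(30, 60) * 60)  # 30-60 minute gap between posts
        post_to(platform, text)
```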
## Content Adaptation Examples
### Source: Product Launch
**X version:**
```
We just shipped [feature].
[One specific thing it does that's impressive]
[Link]
```
**LinkedIn version:**
```
Excited to share: we just launched [feature] at [Company].
Here's why it matters:
[2-3 short paragraphs with context]
[Takeaway for the audience]
[Link]
```
**Threads version:**
```
just shipped something cool — [feature]
[casual explanation of what it does]
link in bio
```
### Source: Technical Insight
**X version:**
```
TIL: [specific technical insight]
[Why it matters in one sentence]
```
**LinkedIn version:**
```
A pattern I've been using that's made a real difference:
[Technical insight with professional framing]
[How it applies to teams/orgs]
#relevantHashtag
```
## API Integration
### Batch Crossposting Service (Example Pattern)
If using a crossposting service (e.g., Postbridge, Buffer, or a custom API), the pattern looks like:
```python
import os
import requests

resp = requests.post(
    "https://api.postbridge.io/v1/posts",
    headers={"Authorization": f"Bearer {os.environ['POSTBRIDGE_API_KEY']}"},
    json={
        "platforms": ["twitter", "linkedin", "threads"],
        "content": {
            "twitter": {"text": x_version},
            "linkedin": {"text": linkedin_version},
            "threads": {"text": threads_version}
        }
    }
)
```
### Manual Posting
Without a crossposting service, post to each platform using its native API:
- X: Use `x-api` skill patterns
- LinkedIn: LinkedIn API v2 with OAuth 2.0
- Threads: Threads API (Meta)
- Bluesky: AT Protocol API
## Quality Gate
Before posting:
- [ ] Each platform version reads naturally for that platform
- [ ] No identical content across platforms
- [ ] Length limits respected
- [ ] Links work and are placed appropriately
- [ ] Tone matches platform conventions
- [ ] Media is sized correctly for each platform
## Related Skills
- `content-engine` — Generate platform-native content
- `x-api` — X/Twitter API integration


@@ -0,0 +1,7 @@
interface:
  display_name: "Crosspost"
  short_description: "Multi-platform content distribution with native adaptation"
  brand_color: "#EC4899"
  default_prompt: "Distribute content across X, LinkedIn, Threads, and Bluesky with platform-native adaptation"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,155 @@
---
name: deep-research
description: Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.
origin: ECC
---
# Deep Research
Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.
## When to Activate
- User asks to research any topic in depth
- Competitive analysis, technology evaluation, or market sizing
- Due diligence on companies, investors, or technologies
- Any question requiring synthesis from multiple sources
- User says "research", "deep dive", "investigate", or "what's the current state of"
## MCP Requirements
At least one of:
- **firecrawl** — `firecrawl_search`, `firecrawl_scrape`, `firecrawl_crawl`
- **exa** — `web_search_exa`, `web_search_advanced_exa`, `crawling_exa`
Both together give the best coverage. Configure in `~/.claude.json` or `~/.codex/config.toml`.
## Workflow
### Step 1: Understand the Goal
Ask 1-2 quick clarifying questions:
- "What's your goal — learning, making a decision, or writing something?"
- "Any specific angle or depth you want?"
If the user says "just research it" — skip ahead with reasonable defaults.
### Step 2: Plan the Research
Break the topic into 3-5 research sub-questions. Example:
- Topic: "Impact of AI on healthcare"
- What are the main AI applications in healthcare today?
- What clinical outcomes have been measured?
- What are the regulatory challenges?
- What companies are leading this space?
- What's the market size and growth trajectory?
### Step 3: Execute Multi-Source Search
For EACH sub-question, search using available MCP tools:
**With firecrawl:**
```
firecrawl_search(query: "<sub-question keywords>", limit: 8)
```
**With exa:**
```
web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")
```
**Search strategy:**
- Use 2-3 different keyword variations per sub-question
- Mix general and news-focused queries
- Aim for 15-30 unique sources total (dedupe across query variations; sketch below)
- Prioritize: academic, official, reputable news > blogs > forums
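A sketch of the dedupe step, assuming each search returns result dicts with a `url` field (the exact shape varies by MCP tool):
```python
# Collect unique sources across all query variations; the result shape is an assumption.
def collect_unique(results_per_query: list[list[dict]]) -> list[dict]:
    seen: set[str] = set()
    unique: list[dict] = []
    for results in results_per_query:
        for result in results:
            url = result.get("url", "").rstrip("/")
            if url and url not in seen:
                seen.add(url)
                unique.append(result)
    return unique
```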
### Step 4: Deep-Read Key Sources
For the most promising URLs, fetch full content:
**With firecrawl:**
```
firecrawl_scrape(url: "<url>")
```
**With exa:**
```
crawling_exa(url: "<url>", tokensNum: 5000)
```
Read 3-5 key sources in full for depth. Do not rely only on search snippets.
### Step 5: Synthesize and Write Report
Structure the report:
```markdown
# [Topic]: Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*
## Executive Summary
[3-5 sentence overview of key findings]
## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))
## 2. [Second Major Theme]
...
## 3. [Third Major Theme]
...
## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]
## Sources
1. [Title](url) — [one-line summary]
2. ...
## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]
```
### Step 6: Deliver
- **Short topics**: Post the full report in chat
- **Long reports**: Post the executive summary + key takeaways, save full report to a file
## Parallel Research with Subagents
For broad topics, use Claude Code's Task tool to parallelize:
```
Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes
```
Each agent searches, reads sources, and returns findings. The main session synthesizes into the final report.
## Quality Rules
1. **Every claim needs a source.** No unsourced assertions.
2. **Cross-reference.** If only one source says it, flag it as unverified.
3. **Recency matters.** Prefer sources from the last 12 months.
4. **Acknowledge gaps.** If you couldn't find good info on a sub-question, say so.
5. **No hallucination.** If you don't know, say "insufficient data found."
6. **Separate fact from inference.** Label estimates, projections, and opinions clearly.
## Examples
```
"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"
```


@@ -0,0 +1,7 @@
interface:
  display_name: "Deep Research"
  short_description: "Multi-source deep research with firecrawl and exa MCPs"
  brand_color: "#6366F1"
  default_prompt: "Research the given topic using firecrawl and exa, produce a cited report"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,144 @@
---
name: dmux-workflows
description: Multi-agent orchestration using dmux (tmux pane manager for AI agents). Patterns for parallel agent workflows across Claude Code, Codex, OpenCode, and other harnesses. Use when running multiple agent sessions in parallel or coordinating multi-agent development workflows.
origin: ECC
---
# dmux Workflows
Orchestrate parallel AI agent sessions using dmux, a tmux pane manager for agent harnesses.
## When to Activate
- Running multiple agent sessions in parallel
- Coordinating work across Claude Code, Codex, and other harnesses
- Complex tasks that benefit from divide-and-conquer parallelism
- User says "run in parallel", "split this work", "use dmux", or "multi-agent"
## What is dmux
dmux is a tmux-based orchestration tool that manages AI agent panes:
- Press `n` to create a new pane with a prompt
- Press `m` to merge pane output back to the main session
- Supports: Claude Code, Codex, OpenCode, Cline, Gemini, Qwen
**Install:** `npm install -g dmux` or see [github.com/standardagents/dmux](https://github.com/standardagents/dmux)
## Quick Start
```bash
# Start dmux session
dmux
# Create agent panes (press 'n' in dmux, then type prompt)
# Pane 1: "Implement the auth middleware in src/auth/"
# Pane 2: "Write tests for the user service"
# Pane 3: "Update API documentation"
# Each pane runs its own agent session
# Press 'm' to merge results back
```
## Workflow Patterns
### Pattern 1: Research + Implement
Split research and implementation into parallel tracks:
```
Pane 1 (Research): "Research best practices for rate limiting in Node.js.
Check current libraries, compare approaches, and write findings to
/tmp/rate-limit-research.md"
Pane 2 (Implement): "Implement rate limiting middleware for our Express API.
Start with a basic token bucket, we'll refine after research completes."
# After Pane 1 completes, merge findings into Pane 2's context
```
### Pattern 2: Multi-File Feature
Parallelize work across independent files:
```
Pane 1: "Create the database schema and migrations for the billing feature"
Pane 2: "Build the billing API endpoints in src/api/billing/"
Pane 3: "Create the billing dashboard UI components"
# Merge all, then do integration in main pane
```
### Pattern 3: Test + Fix Loop
Run tests in one pane, fix in another:
```
Pane 1 (Watcher): "Run the test suite in watch mode. When tests fail,
summarize the failures."
Pane 2 (Fixer): "Fix failing tests based on the error output from pane 1"
```
### Pattern 4: Cross-Harness
Use different AI tools for different tasks:
```
Pane 1 (Claude Code): "Review the security of the auth module"
Pane 2 (Codex): "Refactor the utility functions for performance"
Pane 3 (Claude Code): "Write E2E tests for the checkout flow"
```
### Pattern 5: Code Review Pipeline
Parallel review perspectives:
```
Pane 1: "Review src/api/ for security vulnerabilities"
Pane 2: "Review src/api/ for performance issues"
Pane 3: "Review src/api/ for test coverage gaps"
# Merge all reviews into a single report
```
## Best Practices
1. **Independent tasks only.** Don't parallelize tasks that depend on each other's output.
2. **Clear boundaries.** Each pane should work on distinct files or concerns.
3. **Merge strategically.** Review pane output before merging to avoid conflicts.
4. **Use git worktrees.** For file-conflict-prone work, use separate worktrees per pane.
5. **Resource awareness.** Each pane uses API tokens — keep total panes under 5-6.
## Git Worktree Integration
For tasks that touch overlapping files:
```bash
# Create worktrees for isolation
git worktree add ../feature-auth feat/auth
git worktree add ../feature-billing feat/billing
# Run agents in separate worktrees
# Pane 1: cd ../feature-auth && claude
# Pane 2: cd ../feature-billing && claude
# Merge branches when done
git merge feat/auth
git merge feat/billing
```
## Complementary Tools
| Tool | What It Does | When to Use |
|------|-------------|-------------|
| **dmux** | tmux pane management for agents | Parallel agent sessions |
| **Superset** | Terminal IDE for 10+ parallel agents | Large-scale orchestration |
| **Claude Code Task tool** | In-process subagent spawning | Programmatic parallelism within a session |
| **Codex multi-agent** | Built-in agent roles | Codex-specific parallel work |
## Troubleshooting
- **Pane not responding:** Check if the agent session is waiting for input. Use `m` to read output.
- **Merge conflicts:** Use git worktrees to isolate file changes per pane.
- **High token usage:** Reduce number of parallel panes. Each pane is a full agent session.
- **tmux not found:** Install with `brew install tmux` (macOS) or `apt install tmux` (Linux).


@@ -0,0 +1,7 @@
interface:
  display_name: "dmux Workflows"
  short_description: "Multi-agent orchestration with dmux"
  brand_color: "#14B8A6"
  default_prompt: "Orchestrate parallel agent sessions using dmux pane manager"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,165 @@
---
name: exa-search
description: Neural search via Exa MCP for web, code, and company research. Use when the user needs web search, code examples, company intel, people lookup, or AI-powered deep research with Exa's neural search engine.
origin: ECC
---
# Exa Search
Neural search for web content, code, companies, and people via the Exa MCP server.
## When to Activate
- User needs current web information or news
- Searching for code examples, API docs, or technical references
- Researching companies, competitors, or market players
- Finding professional profiles or people in a domain
- Running background research for any development task
- User says "search for", "look up", "find", or "what's the latest on"
## MCP Requirement
Exa MCP server must be configured. Add to `~/.claude.json`:
```json
"exa-web-search": {
"command": "npx",
"args": ["-y", "exa-mcp-server"],
"env": { "EXA_API_KEY": "YOUR_EXA_API_KEY_HERE" }
}
```
Get an API key at [exa.ai](https://exa.ai).
## Core Tools
### web_search_exa
General web search for current information, news, or facts.
```
web_search_exa(query: "latest AI developments 2026", numResults: 5)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `query` | string | required | Search query |
| `numResults` | number | 8 | Number of results |
### web_search_advanced_exa
Filtered search with domain and date constraints.
```
web_search_advanced_exa(
  query: "React Server Components best practices",
  numResults: 5,
  includeDomains: ["github.com", "react.dev"],
  startPublishedDate: "2025-01-01"
)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `query` | string | required | Search query |
| `numResults` | number | 8 | Number of results |
| `includeDomains` | string[] | none | Limit to specific domains |
| `excludeDomains` | string[] | none | Exclude specific domains |
| `startPublishedDate` | string | none | ISO date filter (start) |
| `endPublishedDate` | string | none | ISO date filter (end) |
### get_code_context_exa
Find code examples and documentation from GitHub, Stack Overflow, and docs sites.
```
get_code_context_exa(query: "Python asyncio patterns", tokensNum: 3000)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `query` | string | required | Code or API search query |
| `tokensNum` | number | 5000 | Content tokens (1000-50000) |
### company_research_exa
Research companies for business intelligence and news.
```
company_research_exa(companyName: "Anthropic", numResults: 5)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `companyName` | string | required | Company name |
| `numResults` | number | 5 | Number of results |
### people_search_exa
Find professional profiles and bios.
```
people_search_exa(query: "AI safety researchers at Anthropic", numResults: 5)
```
### crawling_exa
Extract full page content from a URL.
```
crawling_exa(url: "https://example.com/article", tokensNum: 5000)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `url` | string | required | URL to extract |
| `tokensNum` | number | 5000 | Content tokens |
### deep_researcher_start / deep_researcher_check
Start an AI research agent that runs asynchronously.
```
# Start research
deep_researcher_start(query: "comprehensive analysis of AI code editors in 2026")
# Check status (returns results when complete)
deep_researcher_check(researchId: "<id from start>")
```
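In a script this becomes a simple poll loop. A sketch assuming a generic `call_tool(name, **params)` wrapper around your MCP client; the wrapper and the response field names are assumptions, not part of the Exa MCP:
```python
import time

# call_tool(name, **params) is a hypothetical MCP client wrapper.
def run_deep_research(query: str, poll_seconds: int = 15) -> dict:
    started = call_tool("deep_researcher_start", query=query)
    research_id = started["researchId"]  # field name is an assumption; check the tool output
    while True:
        result = call_tool("deep_researcher_check", researchId=research_id)
        if result.get("status") == "completed":  # status value is an assumption
            return result
        time.sleep(poll_seconds)
```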
## Usage Patterns
### Quick Lookup
```
web_search_exa(query: "Node.js 22 new features", numResults: 3)
```
### Code Research
```
get_code_context_exa(query: "Rust error handling patterns Result type", tokensNum: 3000)
```
### Company Due Diligence
```
company_research_exa(companyName: "Vercel", numResults: 5)
web_search_advanced_exa(query: "Vercel funding valuation 2026", numResults: 3)
```
### Technical Deep Dive
```
# Start async research
deep_researcher_start(query: "WebAssembly component model status and adoption")
# ... do other work ...
deep_researcher_check(researchId: "<id>")
```
## Tips
- Use `web_search_exa` for broad queries, `web_search_advanced_exa` for filtered results
- Lower `tokensNum` (1000-2000) for focused code snippets, higher (5000+) for comprehensive context
- Combine `company_research_exa` with `web_search_advanced_exa` for thorough company analysis
- Use `crawling_exa` to get full content from specific URLs found in search results
- `deep_researcher_start` is best for comprehensive topics that benefit from AI synthesis
## Related Skills
- `deep-research` — Full research workflow using firecrawl + exa together
- `market-research` — Business-oriented research with decision frameworks


@@ -0,0 +1,7 @@
interface:
  display_name: "Exa Search"
  short_description: "Neural search via Exa MCP for web, code, and companies"
  brand_color: "#8B5CF6"
  default_prompt: "Search using Exa MCP tools for web content, code, or company research"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,277 @@
---
name: fal-ai-media
description: Unified media generation via fal.ai MCP — image, video, and audio. Covers text-to-image (Nano Banana), text/image-to-video (Seedance, Kling, Veo 3), text-to-speech (CSM-1B), and video-to-audio (ThinkSound). Use when the user wants to generate images, videos, or audio with AI.
origin: ECC
---
# fal.ai Media Generation
Generate images, videos, and audio using fal.ai models via MCP.
## When to Activate
- User wants to generate images from text prompts
- Creating videos from text or images
- Generating speech, music, or sound effects
- Any media generation task
- User says "generate image", "create video", "text to speech", "make a thumbnail", or similar
## MCP Requirement
fal.ai MCP server must be configured. Add to `~/.claude.json`:
```json
"fal-ai": {
"command": "npx",
"args": ["-y", "fal-ai-mcp-server"],
"env": { "FAL_KEY": "YOUR_FAL_KEY_HERE" }
}
```
Get an API key at [fal.ai](https://fal.ai).
## MCP Tools
The fal.ai MCP provides these tools:
- `search` — Find available models by keyword
- `find` — Get model details and parameters
- `generate` — Run a model with parameters
- `result` — Check async generation status
- `status` — Check job status
- `cancel` — Cancel a running job
- `estimate_cost` — Estimate generation cost
- `models` — List popular models
- `upload` — Upload files for use as inputs
---
## Image Generation
### Nano Banana 2 (Fast)
Best for: quick iterations, drafts, text-to-image, image editing.
```
generate(
  model_name: "fal-ai/nano-banana-2",
  input: {
    "prompt": "a futuristic cityscape at sunset, cyberpunk style",
    "image_size": "landscape_16_9",
    "num_images": 1,
    "seed": 42
  }
)
```
### Nano Banana Pro (High Fidelity)
Best for: production images, realism, typography, detailed prompts.
```
generate(
  model_name: "fal-ai/nano-banana-pro",
  input: {
    "prompt": "professional product photo of wireless headphones on marble surface, studio lighting",
    "image_size": "square",
    "num_images": 1,
    "guidance_scale": 7.5
  }
)
```
### Common Image Parameters
| Param | Type | Options | Notes |
|-------|------|---------|-------|
| `prompt` | string | required | Describe what you want |
| `image_size` | string | `square`, `portrait_4_3`, `landscape_16_9`, `portrait_16_9`, `landscape_4_3` | Aspect ratio |
| `num_images` | number | 1-4 | How many to generate |
| `seed` | number | any integer | Reproducibility |
| `guidance_scale` | number | 1-20 | How closely to follow the prompt (higher = more literal) |
### Image Editing
Use Nano Banana 2 with an input image for inpainting, outpainting, or style transfer:
```
# First upload the source image
upload(file_path: "/path/to/image.png")
# Then generate with image input
generate(
  model_name: "fal-ai/nano-banana-2",
  input: {
    "prompt": "same scene but in watercolor style",
    "image_url": "<uploaded_url>",
    "image_size": "landscape_16_9"
  }
)
```
---
## Video Generation
### Seedance 1.0 Pro (ByteDance)
Best for: text-to-video, image-to-video with high motion quality.
```
generate(
  model_name: "fal-ai/seedance-1-0-pro",
  input: {
    "prompt": "a drone flyover of a mountain lake at golden hour, cinematic",
    "duration": "5s",
    "aspect_ratio": "16:9",
    "seed": 42
  }
)
```
### Kling Video v3 Pro
Best for: text/image-to-video with native audio generation.
```
generate(
  model_name: "fal-ai/kling-video/v3/pro",
  input: {
    "prompt": "ocean waves crashing on a rocky coast, dramatic clouds",
    "duration": "5s",
    "aspect_ratio": "16:9"
  }
)
```
### Veo 3 (Google DeepMind)
Best for: video with generated sound, high visual quality.
```
generate(
  model_name: "fal-ai/veo-3",
  input: {
    "prompt": "a bustling Tokyo street market at night, neon signs, crowd noise",
    "aspect_ratio": "16:9"
  }
)
```
### Image-to-Video
Start from an existing image:
```
generate(
  model_name: "fal-ai/seedance-1-0-pro",
  input: {
    "prompt": "camera slowly zooms out, gentle wind moves the trees",
    "image_url": "<uploaded_image_url>",
    "duration": "5s"
  }
)
```
### Video Parameters
| Param | Type | Options | Notes |
|-------|------|---------|-------|
| `prompt` | string | required | Describe the video |
| `duration` | string | `"5s"`, `"10s"` | Video length |
| `aspect_ratio` | string | `"16:9"`, `"9:16"`, `"1:1"` | Frame ratio |
| `seed` | number | any integer | Reproducibility |
| `image_url` | string | URL | Source image for image-to-video |
---
## Audio Generation
### CSM-1B (Conversational Speech)
Text-to-speech with natural, conversational quality.
```
generate(
  model_name: "fal-ai/csm-1b",
  input: {
    "text": "Hello, welcome to the demo. Let me show you how this works.",
    "speaker_id": 0
  }
)
```
### ThinkSound (Video-to-Audio)
Generate matching audio from video content.
```
generate(
  model_name: "fal-ai/thinksound",
  input: {
    "video_url": "<video_url>",
    "prompt": "ambient forest sounds with birds chirping"
  }
)
```
### ElevenLabs (via API, no MCP)
For professional voice synthesis, use ElevenLabs directly:
```python
import os
import requests
resp = requests.post(
    "https://api.elevenlabs.io/v1/text-to-speech/<voice_id>",
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json"
    },
    json={
        "text": "Your text here",
        "model_id": "eleven_turbo_v2_5",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75}
    }
)
with open("output.mp3", "wb") as f:
    f.write(resp.content)
```
### VideoDB Generative Audio
If VideoDB is configured, use its generative audio:
```python
# Voice generation
audio = coll.generate_voice(text="Your narration here", voice="alloy")
# Music generation
music = coll.generate_music(prompt="upbeat electronic background music", duration=30)
# Sound effects
sfx = coll.generate_sound_effect(prompt="thunder crack followed by rain")
```
---
## Cost Estimation
Before generating, check estimated cost:
```
estimate_cost(model_name: "fal-ai/nano-banana-pro", input: {...})
```
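To make the check automatic, gate generation on the estimate. A sketch assuming a generic `call_tool` wrapper around the MCP client; the budget and the response field name are assumptions:
```python
# call_tool(name, **params) is a hypothetical MCP client wrapper; the budget is illustrative.
MAX_COST_USD = 0.50

def generate_if_affordable(model_name: str, inputs: dict):
    estimate = call_tool("estimate_cost", model_name=model_name, input=inputs)
    if estimate.get("cost_usd", 0) > MAX_COST_USD:  # field name is an assumption
        raise RuntimeError(f"Estimated cost {estimate} exceeds ${MAX_COST_USD} budget")
    return call_tool("generate", model_name=model_name, input=inputs)
```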
## Model Discovery
Find models for specific tasks:
```
search(query: "text to video")
find(model_name: "fal-ai/seedance-1-0-pro")
models()
```
## Tips
- Use `seed` for reproducible results when iterating on prompts
- Start with lower-cost models (Nano Banana 2) for prompt iteration, then switch to Pro for finals
- For video, keep prompts descriptive but concise — focus on motion and scene
- Image-to-video produces more controlled results than pure text-to-video
- Check `estimate_cost` before running expensive video generations
## Related Skills
- `videodb` — Video processing, editing, and streaming
- `video-editing` — AI-powered video editing workflows
- `content-engine` — Content creation for social platforms


@@ -0,0 +1,7 @@
interface:
  display_name: "fal.ai Media"
  short_description: "AI image, video, and audio generation via fal.ai"
  brand_color: "#F43F5E"
  default_prompt: "Generate images, videos, or audio using fal.ai models"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,308 @@
---
name: video-editing
description: AI-assisted video editing workflows for cutting, structuring, and augmenting real footage. Covers the full pipeline from raw capture through FFmpeg, Remotion, ElevenLabs, fal.ai, and final polish in Descript or CapCut. Use when the user wants to edit video, cut footage, create vlogs, or build video content.
origin: ECC
---
# Video Editing
AI-assisted editing for real footage. Not generation from prompts. Editing existing video fast.
## When to Activate
- User wants to edit, cut, or structure video footage
- Turning long recordings into short-form content
- Building vlogs, tutorials, or demo videos from raw capture
- Adding overlays, subtitles, music, or voiceover to existing video
- Reframing video for different platforms (YouTube, TikTok, Instagram)
- User says "edit video", "cut this footage", "make a vlog", or "video workflow"
## Core Thesis
AI video editing is useful when you stop asking it to create the whole video and start using it to compress, structure, and augment real footage. The value is not generation. The value is compression.
## The Pipeline
```
Screen Studio / raw footage
→ Claude / Codex
→ FFmpeg
→ Remotion
→ ElevenLabs / fal.ai
→ Descript or CapCut
```
Each layer has a specific job. Do not skip layers. Do not try to make one tool do everything.
## Layer 1: Capture (Screen Studio / Raw Footage)
Collect the source material:
- **Screen Studio**: polished screen recordings for app demos, coding sessions, browser workflows
- **Raw camera footage**: vlog footage, interviews, event recordings
- **Desktop capture via VideoDB**: session recording with real-time context (see `videodb` skill)
Output: raw files ready for organization.
## Layer 2: Organization (Claude / Codex)
Use Claude Code or Codex to:
- **Transcribe and label**: generate transcript, identify topics and themes
- **Plan structure**: decide what stays, what gets cut, what order works
- **Identify dead sections**: find pauses, tangents, repeated takes
- **Generate edit decision list**: timestamps for cuts, segments to keep
- **Scaffold FFmpeg and Remotion code**: generate the commands and compositions
```
Example prompt:
"Here's the transcript of a 4-hour recording. Identify the 8 strongest segments
for a 24-minute vlog. Give me FFmpeg cut commands for each segment."
```
This layer is about structure, not final creative taste.
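The edit decision list can be as simple as the CSV consumed by the Layer 3 batch script. An illustrative cuts.txt (timestamps are made up):
```
00:02:10,00:04:45,intro
00:12:30,00:15:45,demo_setup
00:41:02,00:44:18,results
```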
## Layer 3: Deterministic Cuts (FFmpeg)
FFmpeg handles the boring but critical work: splitting, trimming, concatenating, and preprocessing.
### Extract segment by timestamp
```bash
ffmpeg -i raw.mp4 -ss 00:12:30 -to 00:15:45 -c copy segment_01.mp4
```
### Batch cut from edit decision list
```bash
#!/bin/bash
# cuts.txt: start,end,label
while IFS=, read -r start end label; do
ffmpeg -i raw.mp4 -ss "$start" -to "$end" -c copy "segments/${label}.mp4"
done < cuts.txt
```
### Concatenate segments
```bash
# Create file list
for f in segments/*.mp4; do echo "file '$f'"; done > concat.txt
ffmpeg -f concat -safe 0 -i concat.txt -c copy assembled.mp4
```
### Create proxy for faster editing
```bash
ffmpeg -i raw.mp4 -vf "scale=960:-2" -c:v libx264 -preset ultrafast -crf 28 proxy.mp4
```
### Extract audio for transcription
```bash
ffmpeg -i raw.mp4 -vn -acodec pcm_s16le -ar 16000 audio.wav
```
### Normalize audio levels
```bash
ffmpeg -i segment.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 -c:v copy normalized.mp4
```
## Layer 4: Programmable Composition (Remotion)
Remotion turns editing problems into composable code. Use it for things that traditional editors make painful:
### When to use Remotion
- Overlays: text, images, branding, lower thirds
- Data visualizations: charts, stats, animated numbers
- Motion graphics: transitions, explainer animations
- Composable scenes: reusable templates across videos
- Product demos: annotated screenshots, UI highlights
### Basic Remotion composition
```tsx
import React from "react";
import { AbsoluteFill, Sequence, Video, useCurrentFrame } from "remotion";

export const VlogComposition: React.FC = () => {
  const frame = useCurrentFrame(); // composition-level frame, used for the title fade
  return (
    <AbsoluteFill>
      {/* Main footage */}
      <Sequence from={0} durationInFrames={300}>
        <Video src="/segments/intro.mp4" />
      </Sequence>
      {/* Title overlay, fading in over its first 30 frames */}
      <Sequence from={30} durationInFrames={90}>
        <AbsoluteFill style={{
          justifyContent: "center",
          alignItems: "center",
          opacity: Math.min(1, (frame - 30) / 30),
        }}>
          <h1 style={{
            fontSize: 72,
            color: "white",
            textShadow: "2px 2px 8px rgba(0,0,0,0.8)",
          }}>
            The AI Editing Stack
          </h1>
        </AbsoluteFill>
      </Sequence>
      {/* Next segment */}
      <Sequence from={300} durationInFrames={450}>
        <Video src="/segments/demo.mp4" />
      </Sequence>
    </AbsoluteFill>
  );
};
```
### Render output
```bash
npx remotion render src/index.ts VlogComposition output.mp4
```
See the [Remotion docs](https://www.remotion.dev/docs) for detailed patterns and API reference.
## Layer 5: Generated Assets (ElevenLabs / fal.ai)
Generate only what you need. Do not generate the whole video.
### Voiceover with ElevenLabs
```python
import os
import requests
resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",  # voice_id: your chosen voice
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json"
    },
    json={
        "text": "Your narration text here",
        "model_id": "eleven_turbo_v2_5",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75}
    }
)
with open("voiceover.mp3", "wb") as f:
    f.write(resp.content)
```
### Music and SFX with fal.ai
Use the `fal-ai-media` skill for:
- Background music generation
- Sound effects (ThinkSound model for video-to-audio)
- Transition sounds
### Generated visuals with fal.ai
Use for insert shots, thumbnails, or b-roll that doesn't exist:
```
generate(model_name: "fal-ai/nano-banana-pro", input: {
"prompt": "professional thumbnail for tech vlog, dark background, code on screen",
"image_size": "landscape_16_9"
})
```
### VideoDB generative audio
If VideoDB is configured:
```python
voiceover = coll.generate_voice(text="Narration here", voice="alloy")
music = coll.generate_music(prompt="lo-fi background for coding vlog", duration=120)
sfx = coll.generate_sound_effect(prompt="subtle whoosh transition")
```
## Layer 6: Final Polish (Descript / CapCut)
The last layer is human. Use a traditional editor for:
- **Pacing**: adjust cuts that feel too fast or slow
- **Captions**: auto-generated, then manually cleaned
- **Color grading**: basic correction and mood
- **Final audio mix**: balance voice, music, and SFX levels
- **Export**: platform-specific formats and quality settings
This is where taste lives. AI clears the repetitive work. You make the final calls.
## Social Media Reframing
Different platforms need different aspect ratios:
| Platform | Aspect Ratio | Resolution |
|----------|-------------|------------|
| YouTube | 16:9 | 1920x1080 |
| TikTok / Reels | 9:16 | 1080x1920 |
| Instagram Feed | 1:1 | 1080x1080 |
| X / Twitter | 16:9 or 1:1 | 1280x720 or 720x720 |
### Reframe with FFmpeg
```bash
# 16:9 to 9:16 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" vertical.mp4
# 16:9 to 1:1 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih:ih,scale=1080:1080" square.mp4
```
### Reframe with VideoDB
```python
# Smart reframe (AI-guided subject tracking)
reframed = video.reframe(start=0, end=60, target="vertical", mode=ReframeMode.smart)
```
## Scene Detection and Auto-Cut
### FFmpeg scene detection
```bash
# Detect scene changes (threshold 0.3 = moderate sensitivity)
ffmpeg -i input.mp4 -vf "select='gt(scene,0.3)',showinfo" -vsync vfr -f null - 2>&1 | grep showinfo
```
### Silence detection for auto-cut
```bash
# Find silent segments (useful for cutting dead air)
ffmpeg -i input.mp4 -af silencedetect=noise=-30dB:d=2 -f null - 2>&1 | grep silence
```
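The silencedetect log can be inverted into keep segments and fed to the Layer 3 batch-cut script. A minimal sketch; parsing is based on ffmpeg's standard `silence_start:` / `silence_end:` log lines:
```python
import re
import subprocess

def detect_keep_segments(path: str, noise: str = "-30dB", min_silence: float = 2.0):
    """Return (start, end) spans of non-silent footage, in seconds."""
    cmd = ["ffmpeg", "-i", path, "-af",
           f"silencedetect=noise={noise}:d={min_silence}", "-f", "null", "-"]
    log = subprocess.run(cmd, capture_output=True, text=True).stderr
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", log)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", log)]
    segments, cursor = [], 0.0
    for start, end in zip(starts, ends):
        if start > cursor:
            segments.append((cursor, start))  # keep the non-silent span
        cursor = end
    return segments  # add a final (cursor, total_duration) span if footage remains
```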
### Highlight extraction
Use Claude to analyze transcript + scene timestamps:
```
"Given this transcript with timestamps and these scene change points,
identify the 5 most engaging 30-second clips for social media."
```
## What Each Tool Does Best
| Tool | Strength | Weakness |
|------|----------|----------|
| Claude / Codex | Organization, planning, code generation | Not the creative taste layer |
| FFmpeg | Deterministic cuts, batch processing, format conversion | No visual editing UI |
| Remotion | Programmable overlays, composable scenes, reusable templates | Learning curve for non-devs |
| Screen Studio | Polished screen recordings immediately | Only screen capture |
| ElevenLabs | Voice, narration, music, SFX | Not the center of the workflow |
| Descript / CapCut | Final pacing, captions, polish | Manual, not automatable |
## Key Principles
1. **Edit, don't generate.** This workflow is for cutting real footage, not creating from prompts.
2. **Structure before style.** Get the story right in Layer 2 before touching anything visual.
3. **FFmpeg is the backbone.** Boring but critical. Where long footage becomes manageable.
4. **Remotion for repeatability.** If you'll do it more than once, make it a Remotion component.
5. **Generate selectively.** Only use AI generation for assets that don't exist, not for everything.
6. **Taste is the last layer.** AI clears repetitive work. You make the final creative calls.
## Related Skills
- `fal-ai-media` — AI image, video, and audio generation
- `videodb` — Server-side video processing, indexing, and streaming
- `content-engine` — Platform-native content distribution


@@ -0,0 +1,7 @@
interface:
  display_name: "Video Editing"
  short_description: "AI-assisted video editing for real footage"
  brand_color: "#EF4444"
  default_prompt: "Edit video using AI-assisted pipeline: organize, cut, compose, generate assets, polish"
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,211 @@
---
name: x-api
description: X/Twitter API integration for posting tweets, threads, reading timelines, search, and analytics. Covers OAuth auth patterns, rate limits, and platform-native content posting. Use when the user wants to interact with X programmatically.
origin: ECC
---
# X API
Programmatic interaction with X (Twitter) for posting, reading, searching, and analytics.
## When to Activate
- User wants to post tweets or threads programmatically
- Reading timeline, mentions, or user data from X
- Searching X for content, trends, or conversations
- Building X integrations or bots
- Analytics and engagement tracking
- User says "post to X", "tweet", "X API", or "Twitter API"
## Authentication
### OAuth 2.0 (App-Only / User Context)
Best for: read-heavy operations, search, public data.
```bash
# Environment setup
export X_BEARER_TOKEN="your-bearer-token"
```
```python
import os
import requests
bearer = os.environ["X_BEARER_TOKEN"]
headers = {"Authorization": f"Bearer {bearer}"}
# Search recent tweets
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={"query": "claude code", "max_results": 10}
)
tweets = resp.json()
```
### OAuth 1.0a (User Context)
Required for: posting tweets, managing account, DMs.
```bash
# Environment setup — source before use
export X_API_KEY="your-api-key"
export X_API_SECRET="your-api-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_SECRET="your-access-secret"
```
```python
import os
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(
    os.environ["X_API_KEY"],
    client_secret=os.environ["X_API_SECRET"],
    resource_owner_key=os.environ["X_ACCESS_TOKEN"],
    resource_owner_secret=os.environ["X_ACCESS_SECRET"],
)
```
## Core Operations
### Post a Tweet
```python
resp = oauth.post(
    "https://api.x.com/2/tweets",
    json={"text": "Hello from Claude Code"}
)
resp.raise_for_status()
tweet_id = resp.json()["data"]["id"]
```
### Post a Thread
```python
def post_thread(oauth, tweets: list[str]) -> list[str]:
    ids = []
    reply_to = None
    for text in tweets:
        payload = {"text": text}
        if reply_to:
            payload["reply"] = {"in_reply_to_tweet_id": reply_to}
        resp = oauth.post("https://api.x.com/2/tweets", json=payload)
        resp.raise_for_status()  # fail fast rather than posting a broken thread
        tweet_id = resp.json()["data"]["id"]
        ids.append(tweet_id)
        reply_to = tweet_id
    return ids
```
### Read User Timeline
```python
resp = requests.get(
    f"https://api.x.com/2/users/{user_id}/tweets",
    headers=headers,
    params={
        "max_results": 10,
        "tweet.fields": "created_at,public_metrics",
    }
)
```
### Search Tweets
```python
resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    headers=headers,
    params={
        "query": "from:affaanmustafa -is:retweet",
        "max_results": 10,
        "tweet.fields": "public_metrics,created_at",
    }
)
```
### Get User by Username
```python
resp = requests.get(
    "https://api.x.com/2/users/by/username/affaanmustafa",
    headers=headers,
    params={"user.fields": "public_metrics,description,created_at"}
)
```
### Upload Media and Post
```python
# Media upload uses the v1.1 endpoint
# Step 1: Upload media
with open("image.png", "rb") as f:
    media_resp = oauth.post(
        "https://upload.twitter.com/1.1/media/upload.json",
        files={"media": f}
    )
media_id = media_resp.json()["media_id_string"]
# Step 2: Post with media
resp = oauth.post(
    "https://api.x.com/2/tweets",
    json={"text": "Check this out", "media": {"media_ids": [media_id]}}
)
```
## Rate Limits Reference
| Endpoint | Limit | Window |
|----------|-------|--------|
| POST /2/tweets | 200 | 15 min |
| GET /2/tweets/search/recent | 450 | 15 min |
| GET /2/users/:id/tweets | 1500 | 15 min |
| GET /2/users/by/username | 300 | 15 min |
| POST media/upload | 415 | 15 min |
Always check `x-rate-limit-remaining` and `x-rate-limit-reset` headers.
```python
import time

remaining = int(resp.headers.get("x-rate-limit-remaining", 0))
if remaining < 5:
    reset = int(resp.headers.get("x-rate-limit-reset", 0))
    wait = max(0, reset - int(time.time()))
    print(f"Rate limit approaching. Resets in {wait}s")
```
## Error Handling
```python
resp = oauth.post("https://api.x.com/2/tweets", json={"text": content})
if resp.status_code == 201:
return resp.json()["data"]["id"]
elif resp.status_code == 429:
reset = int(resp.headers["x-rate-limit-reset"])
raise Exception(f"Rate limited. Resets at {reset}")
elif resp.status_code == 403:
raise Exception(f"Forbidden: {resp.json().get('detail', 'check permissions')}")
else:
raise Exception(f"X API error {resp.status_code}: {resp.text}")
```
## Security
- **Never hardcode tokens.** Use environment variables or `.env` files.
- **Never commit `.env` files.** Add to `.gitignore`.
- **Rotate tokens** if exposed. Regenerate at developer.x.com.
- **Use read-only tokens** when write access is not needed.
- **Store OAuth secrets securely** — not in source code or logs.
## Integration with Content Engine
Use `content-engine` skill to generate platform-native content, then post via X API:
1. Generate content with content-engine (X platform format)
2. Validate length (280 chars for a single tweet; see the splitting sketch below)
3. Post via X API using patterns above
4. Track engagement via public_metrics
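A sketch tying steps 2 and 3 together: split an over-length draft into tweet-sized chunks at word boundaries, then post with the `post_thread` helper above:
```python
def split_for_thread(text: str, limit: int = 280) -> list[str]:
    """Split text into <=limit-char chunks at word boundaries."""
    tweets, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            tweets.append(current)
            current = word
    if current:
        tweets.append(current)
    return tweets

# Usage: post_thread(oauth, split_for_thread(draft))
```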
## Related Skills
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms


@@ -0,0 +1,7 @@
interface:
  display_name: "X API"
  short_description: "X/Twitter API integration for posting, threads, and analytics"
  brand_color: "#000000"
  default_prompt: "Use X API to post tweets, threads, or retrieve timeline and search data"
policy:
  allow_implicit_invocation: true