docs: add sponsorship playbook and monthly metrics automation

This commit is contained in:
Affaan Mustafa
2026-03-04 16:17:12 -08:00
parent c4a5a69dbd
commit 5fe40f4a63
5 changed files with 296 additions and 1 deletion


@@ -70,3 +70,5 @@ Use this on calls:
> We track adoption through npm distribution, GitHub App installs, and repository growth.
> Claude plugin installs are structurally undercounted publicly, so we use a blended metrics model.
> The project supports Claude Code, Cursor, OpenCode, and Codex app/CLI with production-grade hook reliability and a large passing test suite.
For launch-ready social copy snippets, see [`social-launch-copy.md`](./social-launch-copy.md).


@@ -0,0 +1,62 @@
# Social Launch Copy (X + LinkedIn)
Use these templates as launch-ready starting points. Replace placeholders before posting.
## X Post: Release Announcement
```text
ECC v1.8.0 is live.
We moved from “config pack” to an agent harness performance system:
- hook reliability fixes
- new harness commands
- cross-tool parity (Claude Code, Cursor, OpenCode, Codex)
Start here: <repo-link>
```
## X Post: Proof + Metrics
```text
If you evaluate agent tooling, use blended distribution metrics:
- npm installs (`ecc-universal`, `ecc-agentshield`)
- GitHub App installs
- repo adoption (stars/forks/contributors)
We now track this monthly in-repo for sponsor transparency.
```
## X Quote Tweet: Eval Skills Article
```text
Strong point on eval discipline.
In ECC we turned this into production checks via:
- /harness-audit
- /quality-gate
- Stop-phase session summaries
This is where harness performance compounds over time.
```
## X Quote Tweet: Plankton / deslop workflow
```text
This workflow direction is right: optimize the harness, not just prompts.
Our v1.8.0 focus was reliability + parity + measurable quality gates across toolchains.
```
## LinkedIn Post: Partner-Friendly Summary
```text
We shipped ECC v1.8.0 with one objective: improve agent harness performance in production.
Highlights:
- more reliable hook lifecycle behavior
- new harness-level quality commands
- parity across Claude Code, Cursor, OpenCode, and Codex
- stronger sponsor-facing metrics tracking
If your team runs AI coding agents daily, this is designed for operational use.
```