feat: add 6 gap-closing skills — browser QA, design system, product lens, canary watch, benchmark, safety guard

Closes competitive gaps with gstack:
- browser-qa: automated visual testing via browser MCP
- design-system: generate, audit, and detect AI slop in UI
- product-lens: product diagnostic, founder review, feature prioritization
- canary-watch: post-deploy monitoring with alert thresholds
- benchmark: performance baseline and regression detection
- safety-guard: prevent destructive operations in autonomous sessions
# Product Lens — Think Before You Build
## When to Use
- Before starting any feature — validate the "why"
- Weekly product review — are we building the right thing?
- When stuck choosing between features
- Before a launch — sanity check the user journey
- When converting a vague idea into a spec
## How It Works
### Mode 1: Product Diagnostic
Like YC office hours but automated. Asks the hard questions:
```
1. Who is this for? (specific person, not "developers")
2. What's the pain? (quantify: how often, how bad, what do they do today?)
3. Why now? (what changed that makes this possible/necessary?)
4. What's the 10-star version? (if money/time were unlimited)
5. What's the MVP? (smallest thing that proves the thesis)
6. What's the anti-goal? (what are you explicitly NOT building?)
7. How do you know it's working? (metric, not vibes)
```
Output: a `PRODUCT-BRIEF.md` with answers, risks, and a go/no-go recommendation.
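A minimal sketch of what that output step could look like (the `DiagnosticAnswers` shape and `writeBrief` helper are illustrative assumptions, not part of the skill, which is prompt-driven rather than scripted):
```ts
// Illustrative only: shapes and helper names here are assumptions.
import { writeFileSync } from "node:fs";

interface DiagnosticAnswers {
  who: string;      // specific person, not "developers"
  pain: string;     // quantified: frequency, severity, current workaround
  whyNow: string;
  tenStar: string;
  mvp: string;
  antiGoal: string;
  metric: string;   // how you know it's working
}

function writeBrief(a: DiagnosticAnswers, risks: string[], go: boolean): void {
  const body = [
    "# PRODUCT-BRIEF",
    `- **Who:** ${a.who}`,
    `- **Pain:** ${a.pain}`,
    `- **Why now:** ${a.whyNow}`,
    `- **10-star version:** ${a.tenStar}`,
    `- **MVP:** ${a.mvp}`,
    `- **Anti-goal:** ${a.antiGoal}`,
    `- **Metric:** ${a.metric}`,
    "## Risks",
    ...risks.map((r) => `- ${r}`),
    `## Recommendation: ${go ? "GO" : "NO-GO"}`,
  ].join("\n");
  writeFileSync("PRODUCT-BRIEF.md", body + "\n");
}
```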
### Mode 2: Founder Review
Reviews your current project through a founder lens:
```
1. Read README, CLAUDE.md, package.json, recent commits
2. Infer: what is this trying to be?
3. Score: product-market fit signals (0-10)
- Usage growth trajectory
- Retention indicators (repeat contributors, return users)
- Revenue signals (pricing page, billing code, Stripe integration)
- Competitive moat (what's hard to copy?)
4. Identify: the one thing that would 10x this
5. Flag: things you're building that don't matter
```
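A hedged sketch of the step-3 scoring, assuming equal weights across the four signals (the `PmfSignals` shape and the weighting are assumptions to make the 0-10 scale concrete):
```ts
// Hypothetical scoring model; signal names and equal weights are assumptions.
interface PmfSignals {
  usageGrowth: number; // 0-10: trajectory of installs/stars/traffic
  retention: number;   // 0-10: repeat contributors, return users
  revenue: number;     // 0-10: pricing page, billing code, Stripe integration
  moat: number;        // 0-10: how hard is this to copy?
}

function pmfScore(s: PmfSignals): number {
  // Equal weights as a starting assumption; tune per project.
  const values = [s.usageGrowth, s.retention, s.revenue, s.moat];
  const avg = values.reduce((sum, v) => sum + v, 0) / values.length;
  return Math.round(avg * 10) / 10;
}

// e.g. pmfScore({ usageGrowth: 6, retention: 4, revenue: 2, moat: 5 }) → 4.3
```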
### Mode 3: User Journey Audit
Maps the actual user experience:
```
1. Clone/install the product as a new user
2. Document every friction point (confusing steps, errors, missing docs)
3. Time each step
4. Compare to competitor onboarding
5. Score: time-to-value (how long until the user gets their first win?)
6. Recommend: top 3 fixes for onboarding
```
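A small sketch of the timing and friction ranking in steps 3 and 6, with an assumed `JourneyStep` record per onboarding step:
```ts
// Illustrative data model; the skill collects these values manually.
interface JourneyStep {
  name: string;
  seconds: number;    // measured wall-clock time for a new user
  friction: string[]; // confusing steps, errors, missing docs
}

function timeToValue(steps: JourneyStep[]): number {
  // Total seconds until the user's first win (end of the last step).
  return steps.reduce((total, s) => total + s.seconds, 0);
}

function worstFriction(steps: JourneyStep[], n = 3): string[] {
  // Steps with the most friction points become the top onboarding fixes.
  return [...steps]
    .sort((a, b) => b.friction.length - a.friction.length)
    .slice(0, n)
    .map((s) => s.name);
}
```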
### Mode 4: Feature Prioritization
When you have 10 ideas and need to pick 2:
```
1. List all candidate features
2. Score each on: impact (1-5) × confidence (1-5) ÷ effort (1-5)
3. Rank by ICE score
4. Apply constraints: runway, team size, dependencies
5. Output: prioritized roadmap with rationale
```
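The ICE arithmetic in steps 2-3 is simple enough to show directly; this sketch assumes a flat `Feature` record and leaves the step-4 constraints (runway, team size, dependencies) to human judgment:
```ts
// ICE = impact × confidence ÷ effort; the Feature shape is illustrative.
interface Feature {
  name: string;
  impact: number;     // 1-5
  confidence: number; // 1-5
  effort: number;     // 1-5
}

const ice = (f: Feature): number => (f.impact * f.confidence) / f.effort;

function prioritize(features: Feature[]): Feature[] {
  // Highest ICE score first; ties keep input order.
  return [...features].sort((a, b) => ice(b) - ice(a));
}

// e.g. { impact: 5, confidence: 4, effort: 2 } scores 10.0 and outranks
//      { impact: 4, confidence: 5, effort: 4 }, which scores 5.0.
```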
## Output
All modes output actionable docs, not essays. Every recommendation has a specific next step.
## Integration
Pair with:
- `/browser-qa` to verify the user journey audit findings
- `/design-system audit` for visual polish assessment
- `/canary-watch` for post-launch monitoring