docs(zh-CN): translate code block(plain text) (#753)

Co-authored-by: neo <neo.dowithless@gmail.com>
This commit is contained in:
zdoc.app
2026-03-23 06:39:24 +08:00
committed by GitHub
parent fd2a8edb53
commit 4f6f587700
118 changed files with 1807 additions and 1835 deletions


@@ -21,25 +21,25 @@ origin: ECC
When AI writes code and then reviews its own work, it brings the same assumptions to both steps. This creates a predictable failure mode:
```
AI writes fix → AI reviews fix → AI says "looks correct" → Bug still exists
```
**Real example** (observed in production):
```
Fix 1: Added notification_settings to API response
Forgot to add it to the SELECT query
→ AI reviewed and missed it (same blind spot)
Fix 2: Added it to SELECT query
→ TypeScript build error (column not in generated types)
→ AI reviewed Fix 1 but didn't catch the SELECT issue
Fix 3: Changed to SELECT *
Fixed production path, forgot sandbox path
→ AI reviewed and missed it AGAIN (4th occurrence)
Fix 4: Test caught it instantly on first run
```
The pattern: **sandbox/production path divergence** is the #1 source of AI-introduced regressions.
@@ -228,18 +228,18 @@ describe("GET /api/user/messages (conversation list)", () => {
User: "バグチェックして" ("check for bugs") (or "/bug-check")
├─ Step 1: npm run test
│ ├─ FAIL → Bug found mechanically (no AI judgment needed)
│ └─ PASS → Continue
├─ Step 2: npm run build
│ ├─ FAIL → Type error found mechanically
│ └─ PASS → Continue
├─ Step 3: AI code review (with known blind spots in mind)
│ └─ Findings reported
└─ Step 4: For each fix, write a regression test
└─ Next bug-check catches if fix breaks
```
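A minimal sketch of this flow, assuming the steps are supplied as callables (the name `bug_check` and the wired-in npm commands are illustrative, not part of the original workflow):

```python
from typing import Callable

def bug_check(steps: list[tuple[str, Callable[[], bool]]]) -> str:
    """Run mechanical checks in order; AI review happens only if they all pass."""
    for name, check in steps:
        if not check():
            return f"FAIL at {name}: bug found mechanically, no AI judgment needed"
    return "PASS: all mechanical checks passed; proceed to AI review, then add regression tests"

# Hypothetical wiring; real callables would shell out to `npm run test` / `npm run build`.
result = bug_check([
    ("npm run test", lambda: True),
    ("npm run build", lambda: False),
])
print(result)  # → FAIL at npm run build: bug found mechanically, no AI judgment needed
```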
## Common AI regression patterns
@@ -345,10 +345,10 @@ const handleRemove = async (id: string) => {
Don't chase 100% coverage. Instead:
```
Bug found in /api/user/profile → Write test for profile API
Bug found in /api/user/messages → Write test for messages API
Bug found in /api/user/favorites → Write test for favorites API
No bug in /api/user/notifications → Don't write test (yet)
```
**Why this works for AI development:**


@@ -22,13 +22,13 @@ origin: ECC
```
project/
├── app/ # Android entry point, DI wiring, Application class
├── core/ # Shared utilities, base classes, error types
├── domain/ # UseCases, domain models, repository interfaces (pure Kotlin)
├── data/ # Repository implementations, DataSources, DB, network
├── presentation/ # Screens, ViewModels, UI models, navigation
├── design-system/ # Reusable Compose components, theme, typography
└── feature/ # Feature modules (optional, for larger projects)
├── auth/
├── settings/
└── profile/
@@ -37,11 +37,11 @@ project/
### Dependency rules
```
app → presentation, domain, data, core
presentation → domain, design-system, core
data → domain, core
domain → core (or no dependencies)
core → (nothing)
```
**Key rule**: `domain` must never depend on `data`, `presentation`, or any framework. It contains pure Kotlin only.


@@ -22,7 +22,7 @@ origin: ECC
### URL structure
```
# Resources are nouns, plural, lowercase, kebab-case
GET /api/v1/users
GET /api/v1/users/:id
POST /api/v1/users
@@ -30,11 +30,11 @@ PUT /api/v1/users/:id
PATCH /api/v1/users/:id
DELETE /api/v1/users/:id
# Sub-resources for relationships
GET /api/v1/users/:id/orders
POST /api/v1/users/:id/orders
# Actions that don't map to CRUD (use verbs sparingly)
POST /api/v1/orders/:id/cancel
POST /api/v1/auth/login
POST /api/v1/auth/refresh
@@ -43,16 +43,16 @@ POST /api/v1/auth/refresh
### Naming rules
```
# GOOD
/api/v1/team-members # kebab-case for multi-word resources
/api/v1/orders?status=active # query params for filtering
/api/v1/users/123/orders # nested resources for ownership
# BAD
/api/v1/getUsers # verb in URL
/api/v1/user # singular (use plural)
/api/v1/team_members # snake_case in URLs
/api/v1/users/123/getOrders # verb in nested resource
```
## HTTP methods and status codes
@@ -72,41 +72,41 @@ POST /api/v1/auth/refresh
### Status code reference
```
# Success
200 OK — GET, PUT, PATCH (with response body)
201 Created — POST (include Location header)
204 No Content — DELETE, PUT (no response body)
# Client Errors
400 Bad Request — Validation failure, malformed JSON
401 Unauthorized — Missing or invalid authentication
403 Forbidden — Authenticated but not authorized
404 Not Found — Resource doesn't exist
409 Conflict — Duplicate entry, state conflict
422 Unprocessable Entity — Semantically invalid (valid JSON, bad data)
429 Too Many Requests — Rate limit exceeded
# Server Errors
500 Internal Server Error — Unexpected failure (never expose details)
502 Bad Gateway — Upstream service failed
503 Service Unavailable — Temporary overload, include Retry-After
```
### Common mistakes
```
# BAD: 200 for everything
{ "status": 200, "success": false, "error": "Not found" }
# GOOD: Use HTTP status codes semantically
HTTP/1.1 404 Not Found
{ "error": { "code": "not_found", "message": "User not found" } }
# BAD: 500 for validation errors
# GOOD: 400 or 422 with field-level details
# BAD: 200 for created resources
# GOOD: 201 with Location header
HTTP/1.1 201 Created
Location: /api/v1/users/abc-123
```
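A framework-agnostic sketch of the 201-with-Location rule; `created_response` is a hypothetical helper, not any particular framework's API:

```python
def created_response(resource_path: str, body: dict) -> tuple[int, dict, dict]:
    """201 Created responses should carry a Location header for the new resource."""
    return 201, {"Location": resource_path}, body

status, headers, body = created_response("/api/v1/users/abc-123", {"id": "abc-123"})
# status == 201, headers["Location"] == "/api/v1/users/abc-123"
```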
@@ -202,7 +202,7 @@ interface ApiError {
```
GET /api/v1/users?page=2&per_page=20
# Implementation
SELECT * FROM users
ORDER BY created_at DESC
LIMIT 20 OFFSET 20;
@@ -216,11 +216,11 @@ LIMIT 20 OFFSET 20;
```
GET /api/v1/users?cursor=eyJpZCI6MTIzfQ&limit=20
# Implementation
SELECT * FROM users
WHERE id > :cursor_id
ORDER BY id ASC
LIMIT 21; -- fetch one extra to determine has_next
```
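The fetch-one-extra trick can be made concrete with SQLite (table name, contents, and page size are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1, 51)])

def page(cursor_id: int, limit: int = 20):
    # Fetch limit + 1 rows; the extra row only signals that a next page exists.
    rows = conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id ASC LIMIT ?",
        (cursor_id, limit + 1),
    ).fetchall()
    has_next = len(rows) > limit
    rows = rows[:limit]
    next_cursor = rows[-1][0] if has_next else None
    return rows, has_next, next_cursor

rows, has_next, next_cursor = page(cursor_id=20)
# Returns ids 21..40, has_next=True, next_cursor=40
```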
```json
@@ -250,44 +250,44 @@ LIMIT 21; -- fetch one extra to determine has_next
### Filtering
```
# Simple equality
GET /api/v1/orders?status=active&customer_id=abc-123
# Comparison operators (use bracket notation)
GET /api/v1/products?price[gte]=10&price[lte]=100
GET /api/v1/orders?created_at[after]=2025-01-01
# Multiple values (comma-separated)
GET /api/v1/products?category=electronics,clothing
# Nested fields (dot notation)
GET /api/v1/orders?customer.country=US
```
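One way to parse the bracket notation into (field, operator, value) triples; a sketch, not any particular framework's parser:

```python
import re
from urllib.parse import parse_qsl

def parse_filters(query: str) -> list[tuple[str, str, str]]:
    """Turn price[gte]=10&status=active into (field, operator, value) triples."""
    filters = []
    for key, value in parse_qsl(query):
        m = re.fullmatch(r"([\w.]+)\[(\w+)\]", key)
        if m:
            filters.append((m.group(1), m.group(2), value))
        else:
            filters.append((key, "eq", value))  # a bare key means equality
    return filters

print(parse_filters("price[gte]=10&price[lte]=100&status=active"))
# → [('price', 'gte', '10'), ('price', 'lte', '100'), ('status', 'eq', 'active')]
```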
### Sorting
```
# Single field (prefix - for descending)
GET /api/v1/products?sort=-created_at
# Multiple fields (comma-separated)
GET /api/v1/products?sort=-featured,price,-created_at
```
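The sort syntax maps mechanically onto SQL; this sketch assumes field names are whitelisted elsewhere before being interpolated:

```python
def parse_sort(sort_param: str) -> str:
    """Translate sort=-featured,price,-created_at into a SQL ORDER BY fragment.

    Real code must validate field names against a whitelist before use.
    """
    parts = []
    for field in sort_param.split(","):
        desc = field.startswith("-")
        parts.append(f"{field.lstrip('-')} {'DESC' if desc else 'ASC'}")
    return "ORDER BY " + ", ".join(parts)

print(parse_sort("-featured,price,-created_at"))
# → ORDER BY featured DESC, price ASC, created_at DESC
```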
### Full-text search
```
# Search query parameter
GET /api/v1/products?q=wireless+headphones
# Field-specific search
GET /api/v1/users?email=alice
```
### Sparse fieldsets
```
# Return only specified fields (reduces payload)
GET /api/v1/users?fields=id,name,email
GET /api/v1/orders?fields=id,total,status&include=customer.name
```
@@ -334,7 +334,7 @@ X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640000000
# When exceeded
HTTP/1.1 429 Too Many Requests
Retry-After: 60
{
@@ -379,21 +379,21 @@ Accept: application/vnd.myapp.v2+json
### Versioning strategy
```
1. Start with /api/v1/ — don't version until you need to
2. Maintain at most 2 active versions (current + previous)
3. Deprecation timeline:
- Announce deprecation (6 months notice for public APIs)
- Add Sunset header: Sunset: Thu, 01 Jan 2026 00:00:00 GMT
- Return 410 Gone after sunset date
4. Non-breaking changes don't need a new version:
- Adding new fields to responses
- Adding new optional query parameters
- Adding new endpoints
5. Breaking changes require a new version:
- Removing or renaming fields
- Changing field types
- Changing URL structure
- Changing authentication method
```
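Emitting the Sunset header from step 3 might look like this in Python; treating `Deprecation: true` as a companion header follows the IETF deprecation drafts and is an assumption, not part of the original list:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def deprecation_headers(sunset: datetime) -> dict:
    """Headers for responses from a deprecated API version (step 3 above)."""
    return {
        "Deprecation": "true",  # companion header, per IETF deprecation drafts
        "Sunset": format_datetime(sunset, usegmt=True),
    }

print(deprecation_headers(datetime(2026, 1, 1, tzinfo=timezone.utc)))
# → {'Deprecation': 'true', 'Sunset': 'Thu, 01 Jan 2026 00:00:00 GMT'}
```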
## Implementation patterns


@@ -96,11 +96,11 @@ origin: ECC
```
docs/
└── adr/
├── README.md ← index of all ADRs
├── 0001-use-nextjs.md
├── 0002-postgres-over-mongo.md
├── 0003-rest-over-graphql.md
└── template.md ← blank template for manual use
```
### ADR index format


@@ -147,13 +147,13 @@ CLAW_SESSION=my-project CLAW_SKILLS=tdd-workflow,security-review node scripts/cl
### Architecture: the two-prompt system
```
PROMPT 1 (Orchestrator) PROMPT 2 (Sub-Agents)
┌─────────────────────┐ ┌──────────────────────┐
Parse spec file │ │ Receive full context
Scan output dir │ deploys │ Read assigned number
Plan iteration │────────────│ Follow spec exactly
Assign creative dirs │ N agents │ Generate unique output
Manage waves │ │ Save to output dir
└─────────────────────┘ └──────────────────────┘
```
@@ -218,20 +218,20 @@ PROMPT 1 (Orchestrator) PROMPT 2 (Sub-Agents)
```
┌─────────────────────────────────────────────────────┐
CONTINUOUS CLAUDE ITERATION
│ │
│ 1. Create branch (continuous-claude/iteration-N) │
│ 2. Run claude -p with enhanced prompt
│ 3. (Optional) Reviewer pass — separate claude -p │
│ 4. Commit changes (claude generates message)
│ 5. Push + create PR (gh pr create) │
│ 6. Wait for CI checks (poll gh pr checks) │
│ 7. CI failure? → Auto-fix pass (claude -p) │
│ 8. Merge PR (squash/merge/rebase) │
│ 9. Return to main → repeat
│ │
Limit by: --max-runs N | --max-cost $X
│ --max-duration 2h | completion signal
└─────────────────────────────────────────────────────┘
```
@@ -390,27 +390,27 @@ done
### Architecture overview
```
RFC/PRD Document
DECOMPOSITION (AI)
Break RFC into work units with dependency DAG
┌──────────────────────────────────────────────────────┐
│ RALPH LOOP (up to 3 passes)
│ │
For each DAG layer (sequential, by dependency):
│ │
│ ┌── Quality Pipelines (parallel per unit) ───────┐
│ │ Each unit in its own worktree:
│ │ Research → Plan → Implement → Test → Review │
│ │ (depth varies by complexity tier)
│ └────────────────────────────────────────────────┘ │
│ │
│ ┌── Merge Queue ─────────────────────────────────┐ │
│ │ Rebase onto main → Run tests → Land or evict │
│ │ Evicted units re-enter with conflict context │
│ └────────────────────────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────┘
@@ -442,9 +442,9 @@ interface WorkUnit {
The dependency DAG determines execution order:
```
Layer 0: [unit-a, unit-b] ← no deps, run in parallel
Layer 1: [unit-c] ← depends on unit-a
Layer 2: [unit-d, unit-e] ← depend on unit-c
```
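Computing those layers from the per-unit dependency lists is a short topological sort; a sketch:

```python
def dag_layers(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group work units into layers; a layer runs once all its deps have landed."""
    layers: list[list[str]] = []
    placed: set[str] = set()
    while len(placed) < len(deps):
        layer = sorted(u for u, ds in deps.items()
                       if u not in placed and all(d in placed for d in ds))
        if not layer:
            raise ValueError("dependency cycle detected")
        layers.append(layer)
        placed.update(layer)
    return layers

print(dag_layers({"unit-a": [], "unit-b": [],
                  "unit-c": ["unit-a"],
                  "unit-d": ["unit-c"], "unit-e": ["unit-c"]}))
# → [['unit-a', 'unit-b'], ['unit-c'], ['unit-d', 'unit-e']]
```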
### Complexity tiers
@@ -484,13 +484,13 @@ Layer 2: [unit-d, unit-e] ← depend on unit-c
```
Unit branch
├─ Rebase onto main
│ └─ Conflict? → EVICT (capture conflict context)
├─ Run build + tests
│ └─ Fail? → EVICT (capture test output)
└─ Pass → Fast-forward main, push, delete branch
```
**File-overlap intelligence:**
@@ -513,13 +513,13 @@ Unit branch
### Data flow between phases
```
research.contextFilePath ──────────────────→ plan
plan.implementationSteps ──────────────────→ implement
implement.{filesCreated, whatWasDone} ─────→ test, reviews
test.failingSummary ───────────────────────→ reviews, implement (next pass)
reviews.{feedback, issues} ────────────────→ review-fix → implement (next pass)
final-review.reasoning ────────────────────→ implement (next pass)
evictionContext ───────────────────────────→ implement (after merge conflict)
```
### Worktree isolation
@@ -560,15 +560,15 @@ evictionContext ─────────────────────
### Decision matrix
```
Is the task a single focused change?
├─ Yes → Sequential Pipeline or NanoClaw
└─ NoIs there a written spec/RFC?
├─ Yes → Do you need parallel implementation?
│ ├─ Yes → Ralphinho (DAG orchestration)
│ └─ No → Continuous Claude (iterative PR loop)
└─ NoDo you need many variations of the same thing?
├─ Yes → Infinite Agentic Loop (spec-driven generation)
└─ NoSequential Pipeline with de-sloppify
```
### Pattern combinations


@@ -33,7 +33,7 @@ Blueprint auto-detects git/gh availability. With git + GitHub CLI it will
### Basic usage
```
/blueprint myapp "migrate database to PostgreSQL"
```
Generates `plans/myapp-migrate-database-to-postgresql.md` with steps similar to:
@@ -47,7 +47,7 @@ Blueprint auto-detects git/gh availability. With git + GitHub CLI it will
### Multi-agent projects
```
/blueprint chatbot "extract LLM providers into a plugin system"
```
Generates a plan with as many parallel steps as possible (for example, "implement the Anthropic plugin" and "implement the OpenAI plugin" can run in parallel once the plugin-interface step completes), assigns model tiers (the strongest model for interface-design steps, the default model for implementation steps), and verifies invariants after each step (e.g. "all existing tests pass", "no provider imports in core modules").


@@ -19,21 +19,21 @@ claude mcp add devfleet --transport http http://localhost:18801/mcp
## How it works
```
User → "Build a REST API with auth and tests"
plan_project(prompt) → project_id + mission DAG
Show plan to user → get approval
dispatch_mission(M1) → Agent 1 spawns in worktree
M1 completes → auto-merge → auto-dispatch M2 (depends_on M1)
M2 completes → auto-merge
get_report(M2) → files_changed, what_done, errors, next_steps
Report back to user
```
### Tools


@@ -23,28 +23,28 @@ origin: ECC
Gather raw information about the project without reading every file. Run the following checks in parallel:
```
1. Package manifest detection
→ package.json, go.mod, Cargo.toml, pyproject.toml, pom.xml, build.gradle,
Gemfile, composer.json, mix.exs, pubspec.yaml
2. Framework fingerprinting
→ next.config.*, nuxt.config.*, angular.json, vite.config.*,
django settings, flask app factory, fastapi main, rails config
3. Entry point identification
→ main.*, index.*, app.*, server.*, cmd/, src/main/
4. Directory structure snapshot
Top 2 levels of the directory tree, ignoring node_modules, vendor,
.git, dist, build, __pycache__, .next
5. Config and tooling detection
→ .eslintrc*, .prettierrc*, tsconfig.json, Makefile, Dockerfile,
docker-compose*, .github/workflows/, .env.example, CI configs
6. Test structure detection
→ tests/, test/, __tests__/, *_test.go, *.spec.ts, *.test.js,
pytest.ini, jest.config.*, vitest.config.*
```
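Check 1 can be sketched in a few lines; the manifest list mirrors the one above:

```python
from pathlib import Path

MANIFESTS = [
    "package.json", "go.mod", "Cargo.toml", "pyproject.toml", "pom.xml",
    "build.gradle", "Gemfile", "composer.json", "mix.exs", "pubspec.yaml",
]

def detect_manifests(root: str) -> list[str]:
    """Check 1 only: report which package manifests exist at the project root."""
    base = Path(root)
    return [m for m in MANIFESTS if (base / m).is_file()]
```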
### Phase 2: Architecture mapping
@@ -71,12 +71,12 @@ origin: ECC
<!-- Example for a React project — replace with detected directories -->
```
src/components/ → React UI components
src/api/ → API route handlers
src/lib/ → Shared utilities
src/db/ → Database models and migrations
tests/ → Test suites
scripts/ → Build and deployment scripts
```
**Data flow**


@@ -244,14 +244,14 @@ setCount(count + 1) // Can be stale in async scenarios
### REST API conventions
```
GET /api/markets # List all markets
GET /api/markets/:id # Get specific market
POST /api/markets # Create new market
PUT /api/markets/:id # Update market (full)
PATCH /api/markets/:id # Update market (partial)
DELETE /api/markets/:id # Delete market
# Query parameters for filtering
GET /api/markets?status=active&limit=10&offset=0
```
@@ -341,10 +341,10 @@ src/
### File naming
```
components/Button.tsx # PascalCase for components
hooks/useAuth.ts # camelCase with 'use' prefix
lib/formatDate.ts # camelCase for utilities
types/market.types.ts # camelCase with .types suffix
```
## Comments and documentation


@@ -44,11 +44,11 @@ git clone https://github.com/affaan-m/everything-claude-code.git /tmp/everything
Use `AskUserQuestion` to ask the user where to install:
```
Question: "Where should ECC components be installed?"
Options:
- "User-level (~/.claude/)" — "Applies to all your Claude Code projects"
- "Project-level (.claude/)" — "Applies only to the current project"
- "Both" — "Common/shared items user-level, project-specific items project-level"
```
Store the choice as `INSTALL_LEVEL`. Set the target directories:
@@ -74,12 +74,12 @@ mkdir -p $TARGET/skills $TARGET/rules
Use `AskUserQuestion` (single-select):
```
Question: "Install core skills only, or include niche/framework packs?"
Options:
- "Core only (recommended)" — "tdd, e2e, evals, verification, research-first, security, frontend patterns, compacting, cross-functional Anthropic skills"
- "Core + selected niche" — "Add framework/domain-specific skills after core"
- "Niche only" — "Skip core, install specific framework/domain skills"
Default: Core only
```
If the user chose niche or core + niche, continue with the category selection below and include only the niche skills they selected.
@@ -89,16 +89,16 @@ Default: Core only
There are 7 optional category groups below. The detailed confirmation lists that follow cover 45 skills across 8 categories plus 1 standalone template. Use `AskUserQuestion` with `multiSelect: true`:
```
Question: "Which skill categories do you want to install?"
Options:
- "Framework & Language""Django, Laravel, Spring Boot, Go, Python, Java, Frontend, Backend patterns"
- "Database""PostgreSQL, ClickHouse, JPA/Hibernate patterns"
- "Workflow & Quality""TDD, verification, learning, security review, compaction"
- "Research & APIs""Deep research, Exa search, Claude API patterns"
- "Social & Content Distribution""X/Twitter API, crossposting alongside content-engine"
- "Media Generation""fal.ai image/video/audio alongside VideoDB"
- "Orchestration""dmux multi-agent workflows"
- "All skills" — "Install every available skill"
```
### 2c: Confirm individual skills
@@ -213,12 +213,12 @@ cp -r $ECC_ROOT/skills/<skill-name> $TARGET/skills/
Use `AskUserQuestion` with `multiSelect: true`:
```
Question: "Which rule sets do you want to install?"
Options:
- "Common rules (Recommended)" — "Language-agnostic principles: coding style, git workflow, testing, security, etc. (8 files)"
- "TypeScript/JavaScript" — "TS/JS patterns, hooks, testing with Playwright (5 files)"
- "Python" — "Python patterns, pytest, black/ruff formatting (5 files)"
- "Go" — "Go patterns, table-driven tests, gofmt/staticcheck (5 files)"
```
Run the installation:
@@ -300,12 +300,12 @@ grep -rn "skills/" $TARGET/skills/
Use `AskUserQuestion`:
```
Question: "Would you like to optimize the installed files for your project?"
Options:
- "Optimize skills" — "Remove irrelevant sections, adjust paths, tailor to your tech stack"
- "Optimize rules" — "Adjust coverage targets, add project-specific patterns, customize tool configs"
- "Optimize both" — "Full optimization of all installed files"
- "Skip" — "Keep everything as-is"
```
### If optimizing skills:
@@ -341,26 +341,26 @@ rm -rf /tmp/everything-claude-code
Then print a summary report:
```
## ECC Installation Complete
### Installation Target
- Level: [user-level / project-level / both]
- Path: [target path]
### Skills Installed ([count])
- skill-1, skill-2, skill-3, ...
### Rules Installed ([count])
- common (8 files)
- typescript (5 files)
- ...
### Verification Results
- [count] issues found, [count] fixed
- [list any remaining issues]
### Optimizations Applied
- [list changes made, or "None"]
```
***


@@ -76,16 +76,16 @@ origin: ECC
Generate a context budget report:
```
Context Budget Report
═══════════════════════════════════════
Total estimated overhead: ~XX,XXX tokens
Context model: Claude Sonnet (200K window)
Effective available context: ~XXX,XXX tokens (XX%)
Component Breakdown:
┌─────────────────┬────────┬───────────┐
Component │ Count │ Tokens
├─────────────────┼────────┼───────────┤
│ Agents │ N │ ~X,XXX │
│ Skills │ N │ ~X,XXX │
@@ -94,15 +94,15 @@ Component Breakdown:
│ CLAUDE.md │ N │ ~X,XXX │
└─────────────────┴────────┴───────────┘
Issues Found (N):
[ranked by token savings]
Top 3 Optimizations:
1. [action] → save ~X,XXX tokens
2. [action] → save ~X,XXX tokens
3. [action] → save ~X,XXX tokens
Potential savings: ~XX,XXX tokens (XX% of current overhead)
```
In verbose mode, additionally output per-file token counts, a line-level breakdown of the heaviest files, the specific redundant lines shared between overlapping components, and the MCP tool list with an estimated schema size per tool.
@@ -112,26 +112,26 @@ Potential savings: ~XX,XXX tokens (XX% of current overhead)
**Basic audit**
```
User: /context-budget
Skill: Scans setup → 16 agents (12,400 tokens), 28 skills (6,200), 87 MCP tools (43,500), 2 CLAUDE.md (1,200)
Flags: 3 heavy agents, 14 MCP servers (3 CLI-replaceable)
Top saving: remove 3 MCP servers → -27,500 tokens (47% overhead reduction)
```
**Verbose mode**
```
User: /context-budget --verbose
Skill: Full report + per-file breakdown showing planner.md (213 lines, 1,840 tokens),
MCP tool list with per-tool sizes, duplicated rule lines side by side
```
**Check before scaling up**
```
User: I want to add 5 more MCP servers, do I have room?
Skill: Current overhead 33% → adding 5 servers (~50 tools) would add ~25,000 tokens → pushes to 45% overhead
Recommendation: remove 2 CLI-replaceable servers first to stay under 40%
```
## Best practices


@@ -13,11 +13,11 @@ origin: ECC
```text
Start
|
+-- Need strict CI/PR control? -- yes --> continuous-pr
|
+-- Need RFC decomposition? -- yes --> rfc-dag
|
+-- Need exploratory parallel generation? -- yes --> infinite
|
+-- default --> sequential
```


@@ -82,43 +82,43 @@ Use functional patterns over classes when appropriate.
## How it works
```
Session Activity (in a git repo)
|
| Hooks capture prompts + tool use (100% reliable)
| + detect project context (git remote / repo path)
v
+---------------------------------------------+
| projects/<project-hash>/observations.jsonl |
| (prompts, tool calls, outcomes, project) |
+---------------------------------------------+
|
| Observer agent reads (background, Haiku)
v
+---------------------------------------------+
| PATTERN DETECTION |
| * User corrections -> instinct |
| * Error resolutions -> instinct |
| * Repeated workflows -> instinct |
| * Scope decision: project or global? |
+---------------------------------------------+
|
| Creates/updates
v
+---------------------------------------------+
| projects/<project-hash>/instincts/personal/ |
| * prefer-functional.yaml (0.7) [project] |
| * use-react-hooks.yaml (0.9) [project] |
+---------------------------------------------+
| instincts/personal/ (GLOBAL) |
| * always-validate-input.yaml (0.85) [global]|
| * grep-before-edit.yaml (0.6) [global] |
+---------------------------------------------+
|
| /evolve clusters + /promote
v
+---------------------------------------------+
| projects/<hash>/evolved/ (project-scoped) |
| evolved/ (global) |
| * commands/new-feature.md |
| * skills/testing-workflow.md |
| * agents/refactor-specialist.md |
@@ -248,29 +248,29 @@ mkdir -p ~/.claude/homunculus/{instincts/{personal,inherited},evolved/{agents,sk
```
~/.claude/homunculus/
+-- identity.json # Your profile, technical level
+-- projects.json # Registry: project hash -> name/path/remote
+-- observations.jsonl # Global observations (fallback)
+-- instincts/
| +-- personal/ # Global auto-learned instincts
| +-- inherited/ # Global imported instincts
+-- evolved/
| +-- agents/ # Global generated agents
| +-- skills/ # Global generated skills
| +-- commands/ # Global generated commands
+-- projects/
+-- a1b2c3d4e5f6/ # Project hash (from git remote URL)
| +-- project.json # Per-project metadata mirror (id/name/root/remote)
| +-- observations.jsonl
| +-- observations.archive/
| +-- instincts/
| | +-- personal/ # Project-specific auto-learned
| | +-- inherited/ # Project-specific imported
| +-- evolved/
| +-- skills/
| +-- commands/
| +-- agents/
+-- f6e5d4c3b2a1/ # Another project
+-- ...
```


@@ -102,35 +102,35 @@ origin: ECC
**X version:**
```
We just shipped [feature].
[One specific thing it does that's impressive]
[Link]
```
**LinkedIn version:**
```
Excited to share: we just launched [feature] at [Company].
Here's why it matters:
[2-3 short paragraphs with context]
[Takeaway for the audience]
[Link]
```
**Threads version:**
```
just shipped something cool — [feature]
[casual explanation of what it does]
link in bio
```
### Source content: a technical insight
@@ -138,21 +138,21 @@ link in bio
**X 版本:**
```
TIL: [specific technical insight]
[Why it matters in one sentence]
```
**LinkedIn version:**
```
A pattern I've been using that's made a real difference:
[Technical insight with professional framing]
[How it applies to teams/orgs]
#relevantHashtag
```
## API integration


@@ -101,34 +101,34 @@ for batch in chunks(items, size=5):
```
my-agent/
├── config.yaml # User customises this (keywords, filters, preferences)
├── profile/
│ └── context.md # User context the AI uses (resume, interests, criteria)
├── scraper/
│ ├── __init__.py
│ ├── main.py # Orchestrator: scrape → enrich → store
│ ├── filters.py # Rule-based pre-filter (fast, before AI)
│ └── sources/
│ ├── __init__.py
│ └── source_name.py # One file per data source
├── ai/
│ ├── __init__.py
│ ├── client.py # Gemini REST client with model fallback
│ ├── pipeline.py # Batch AI analysis
│ ├── jd_fetcher.py # Fetch full content from URLs (optional)
│ └── memory.py # Learn from user feedback
├── storage/
│ ├── __init__.py
│ └── notion_sync.py # Or sheets_sync.py / supabase_sync.py
│ └── notion_sync.py # sheets_sync.py / supabase_sync.py
├── data/
│ └── feedback.json # User decision history (auto-updated)
├── .env.example
├── setup.py # One-time DB/schema creation
├── enrich_existing.py # Backfill AI scores on old rows
├── requirements.txt
└── .github/
└── workflows/
└── scraper.yml # GitHub Actions schedule
```
***
@@ -725,8 +725,8 @@ beautifulsoup4==4.12.3
lxml==5.1.0
python-dotenv==1.0.1
pyyaml==6.0.2
notion-client==2.2.1 # if using Notion
# playwright==1.40.0 # uncomment for JS-rendered sites
notion-client==2.2.1 # 如需使用 Notion
# playwright==1.40.0 # 针对 JS 渲染的站点,请取消注释
```
***
@@ -753,14 +753,14 @@ notion-client==2.2.1 # if using Notion
## 真实世界示例
```
"Build me an agent that monitors Hacker News for AI startup funding news"
"Scrape product prices from 3 e-commerce sites and alert when they drop"
"Track new GitHub repos tagged with 'llm' or 'agents' — summarise each one"
"Collect Chief of Staff job listings from LinkedIn and Cutshort into Notion"
"Monitor a subreddit for posts mentioning my company — classify sentiment"
"Scrape new academic papers from arXiv on a topic I care about daily"
"Track sports fixture results and keep a running table in Google Sheets"
"Build a real estate listing watcher — alert on new properties under ₹1 Cr"
"为我构建一个监控 Hacker News 上 AI 初创公司融资新闻的智能体"
"从 3 家电商网站抓取产品价格并在降价时发出提醒"
"追踪标记有 'llm' 'agents' 的新 GitHub 仓库——并为每个仓库生成摘要"
" LinkedIn Cutshort 上的首席运营官职位列表收集到 Notion"
"监控某个 subreddit 中提到我公司的帖子——并进行情感分类"
"每日从 arXiv 抓取我关注主题的新学术论文"
"追踪体育赛事结果并在 Google Sheets 中维护动态更新的表格"
"构建一个房地产房源监控器——在新房源价格低于 1 千万卢比时发出提醒"
```
***

View File

@@ -300,27 +300,27 @@ ALTER TABLE users DROP COLUMN IF EXISTS avatar_url;
```
Phase 1: EXPAND
- Add new column/table (nullable or with default)
- Deploy: app writes to BOTH old and new
- Backfill existing data
- 添加新列/表(可为空或带有默认值)
- 部署:应用同时写入旧数据和新数据
- 回填现有数据
Phase 2: MIGRATE
- Deploy: app reads from NEW, writes to BOTH
- Verify data consistency
- 部署:应用读取新数据,同时写入新旧数据
- 验证数据一致性
Phase 3: CONTRACT
- Deploy: app only uses NEW
- Drop old column/table in separate migration
- 部署:应用仅使用新数据
- 在单独迁移中删除旧列/表
```
### 时间线示例
```
Day 1: Migration adds new_status column (nullable)
Day 1: Deploy app v2 — writes to both status and new_status
Day 2: Run backfill migration for existing rows
Day 3: Deploy app v3 — reads from new_status only
Day 7: Migration drops old status column
Day 1:迁移添加新的 `new_status` 列(可空)
Day 1:部署应用 v2 —— 同时写入 `status` 和 `new_status`
Day 2:运行针对现有行的回填迁移
Day 3:部署应用 v3 —— 仅从 `new_status` 读取
Day 7:迁移删除旧的 `status` 列
```
## 反模式

View File

@@ -60,8 +60,8 @@ firecrawl_search(query: "<sub-question keywords>", limit: 8)
**使用 exa**
```
web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")
web_search_exa(query: "<子问题关键词>", numResults: 8)
web_search_advanced_exa(query: "<关键词>", numResults: 5, startPublishedDate: "2025-01-01")
```
**搜索策略:**
@@ -135,10 +135,10 @@ crawling_exa(url: "<url>", tokensNum: 5000)
对于广泛的主题,使用 Claude Code 的 Task 工具进行并行处理:
```
Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes
并行启动 3 个研究代理:
1. 代理 1:研究子问题 1-2
2. 代理 2:研究子问题 3-4
3. 代理 3:研究子问题 5 + 交叉主题
```
每个代理负责搜索、阅读来源并返回发现结果。主会话将其综合成最终报告。
@@ -155,9 +155,9 @@ Launch 3 research agents in parallel:
## 示例
```
"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"
"研究核聚变能源的当前现状"
"深入探讨 2026 年 Rust Go 在后端服务中的对比"
"研究自举 SaaS 业务的最佳策略"
"美国房地产市场目前情况如何?"
"调查 AI 代码编辑器的竞争格局"
```

View File

@@ -24,17 +24,17 @@ origin: ECC
逐步替换实例——在发布过程中,新旧版本同时运行。
```
Instance 1: v1 → v2 (update first)
Instance 2: v1 (still running v1)
Instance 3: v1 (still running v1)
实例 1: v1 → v2 (最先更新)
实例 2: v1 (仍在运行 v1)
实例 3: v1 (仍在运行 v1)
Instance 1: v2
Instance 2: v1 → v2 (update second)
Instance 3: v1
实例 1: v2
实例 2: v1 → v2 (随后更新)
实例 3: v1
Instance 1: v2
Instance 2: v2
Instance 3: v1 → v2 (update last)
实例 1: v2
实例 2: v2
实例 3: v1 → v2 (最后更新)
```
**优点:** 零停机时间,渐进式发布
@@ -46,12 +46,12 @@ Instance 3: v1 → v2 (update last)
运行两个相同的环境。原子化地切换流量。
```
Blue (v1) ← traffic
Green (v2) idle, running new version
Blue (v1) ← 流量
Green (v2) 空闲,运行新版本
# After verification:
Blue (v1) idle (becomes standby)
Green (v2) ← traffic
# 验证后:
Blue (v1) 空闲(转为备用状态)
Green (v2) ← 流量
```
**优点:** 即时回滚(切换回蓝色环境),切换干净利落
@@ -63,15 +63,15 @@ Green (v2) ← traffic
首先将一小部分流量路由到新版本。
```
v1: 95% of traffic
v2: 5% of traffic (canary)
v1:95% 的流量
v2:5% 的流量(金丝雀)
# If metrics look good:
v1: 50% of traffic
v2: 50% of traffic
# 如果指标表现良好:
v1:50% 的流量
v2:50% 的流量
# Final:
v2: 100% of traffic
# 最终:
v2:100% 的流量
```
**优点:** 在全量发布前,通过真实流量发现问题
@@ -168,21 +168,21 @@ CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000", "--workers
### Docker 最佳实践
```
# GOOD practices
- Use specific version tags (node:22-alpine, not node:latest)
- Multi-stage builds to minimize image size
- Run as non-root user
- Copy dependency files first (layer caching)
- Use .dockerignore to exclude node_modules, .git, tests
- Add HEALTHCHECK instruction
- Set resource limits in docker-compose or k8s
# 良好实践
- 使用特定版本标签(node:22-alpine,而非 node:latest
- 采用多阶段构建以最小化镜像体积
- 以非 root 用户身份运行
- 优先复制依赖文件(利用分层缓存)
- 使用 .dockerignore 排除 node_modules、.git、tests 等文件
- 添加 HEALTHCHECK 指令
- docker-compose k8s 中设置资源限制
# BAD practices
- Running as root
- Using :latest tags
- Copying entire repo in one COPY layer
- Installing dev dependencies in production image
- Storing secrets in image (use env vars or secrets manager)
# 不良实践
- 以 root 身份运行
- 使用 :latest 标签
- 在单个 COPY 层中复制整个仓库
- 在生产镜像中安装开发依赖
- 在镜像中存储密钥(应使用环境变量或密钥管理器)
```
## CI/CD 流水线
@@ -254,11 +254,11 @@ jobs:
### 流水线阶段
```
PR opened:
lint → typecheck → unit tests → integration tests → preview deploy
PR 已开启:
lint → typecheck → 单元测试 → 集成测试 → 预览部署
Merged to main:
lint → typecheck → unit tests → integration tests → build image → deploy staging → smoke tests → deploy production
合并到 main
lint → typecheck → 单元测试 → 集成测试 → 构建镜像 → 部署到 staging → 冒烟测试 → 部署到 production
```
## 健康检查

View File

@@ -26,10 +26,10 @@ myproject/
│ ├── __init__.py
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py # Base settings
│ │ ├── development.py # Dev settings
│ │ ├── production.py # Production settings
│ │ └── test.py # Test settings
│ │ ├── base.py # 基础设置
│ │ ├── development.py # 开发环境设置
│ │ ├── production.py # 生产环境设置
│ │ └── test.py # 测试环境设置
│ ├── urls.py
│ ├── wsgi.py
│ └── asgi.py

View File

@@ -289,86 +289,86 @@ git diff | grep "import pdb" # Debugger
## 输出模板
```
DJANGO VERIFICATION REPORT
DJANGO 验证报告
==========================
Phase 1: Environment Check
阶段 1:环境检查
✓ Python 3.11.5
Virtual environment active
All environment variables set
虚拟环境已激活
所有环境变量已设置
Phase 2: Code Quality
✓ mypy: No type errors
✗ ruff: 3 issues found (auto-fixed)
✓ black: No formatting issues
✓ isort: Imports properly sorted
✓ manage.py check: No issues
阶段 2:代码质量
✓ mypy: 无类型错误
✗ ruff: 发现 3 个问题(已自动修复)
✓ black: 无格式问题
✓ isort: 导入已正确排序
✓ manage.py check: 无问题
Phase 3: Migrations
No unapplied migrations
No migration conflicts
All models have migrations
阶段 3:数据库迁移
无未应用的迁移
无迁移冲突
所有模型均有对应的迁移文件
Phase 4: Tests + Coverage
Tests: 247 passed, 0 failed, 5 skipped
Coverage:
Overall: 87%
阶段 4:测试与覆盖率
测试:247 通过,0 失败,5 跳过
覆盖率:
总计:87%
users: 92%
products: 89%
orders: 85%
payments: 91%
Phase 5: Security Scan
✗ pip-audit: 2 vulnerabilities found (fix required)
✓ safety check: No issues
✓ bandit: No security issues
No secrets detected
阶段 5:安全扫描
✗ pip-audit: 发现 2 个漏洞(需要修复)
✓ safety check: 无问题
✓ bandit: 无安全问题
未检测到密钥泄露
✓ DEBUG = False
Phase 6: Django Commands
✓ collectstatic completed
Database integrity OK
Cache backend reachable
阶段 6:Django 命令
✓ collectstatic 完成
数据库完整性正常
缓存后端可访问
Phase 7: Performance
No N+1 queries detected
Database indexes configured
Query count acceptable
阶段 7:性能
未检测到 N+1 查询
数据库索引已配置
查询数量可接受
Phase 8: Static Assets
✓ npm audit: No vulnerabilities
Assets built successfully
Static files collected
阶段 8:静态资源
✓ npm audit: 无漏洞
资源构建成功
静态文件已收集
Phase 9: Configuration
阶段 9:配置
✓ DEBUG = False
✓ SECRET_KEY configured
✓ ALLOWED_HOSTS set
✓ HTTPS enabled
✓ HSTS enabled
Database configured
✓ SECRET_KEY 已配置
✓ ALLOWED_HOSTS 已设置
✓ HTTPS 已启用
✓ HSTS 已启用
数据库已配置
Phase 10: Logging
Logging configured
Log files writable
阶段 10:日志
日志配置完成
日志文件可写入
Phase 11: API Documentation
Schema generated
✓ Swagger UI accessible
阶段 11:API 文档
Schema 已生成
✓ Swagger UI 可访问
Phase 12: Diff Review
Files changed: 12
+450, -120 lines
No debug statements
No hardcoded secrets
Migrations included
阶段 12:差异审查
文件变更:12
行数变化:+450, -120
无调试语句
无硬编码密钥
包含迁移文件
RECOMMENDATION: ⚠️ Fix pip-audit vulnerabilities before deploying
建议:⚠️ 部署前修复 pip-audit 发现的漏洞
NEXT STEPS:
1. Update vulnerable dependencies
2. Re-run security scan
3. Deploy to staging for final testing
后续步骤:
1. 更新存在漏洞的依赖项
2. 重新运行安全扫描
3. 部署到预发布环境进行最终测试
```
## 预部署检查清单

View File

@@ -47,14 +47,14 @@ dmux
将研究和实现拆分为并行轨道:
```
Pane 1 (Research): "Research best practices for rate limiting in Node.js.
Check current libraries, compare approaches, and write findings to
Pane 1 (Research): "研究 Node.js 中速率限制的最佳实践。
检查当前可用的库,比较不同方法,并将研究结果写入
/tmp/rate-limit-research.md"
Pane 2 (Implement): "Implement rate limiting middleware for our Express API.
Start with a basic token bucket, we'll refine after research completes."
Pane 2 (Implement): "为我们的 Express API 实现速率限制中间件。
先从基本的令牌桶算法开始,研究完成后我们将进一步优化。"
# After Pane 1 completes, merge findings into Pane 2's context
# Pane 1 完成后,将研究结果合并到 Pane 2 的上下文中
```
### 模式 2多文件功能
@@ -62,11 +62,11 @@ Pane 2 (Implement): "Implement rate limiting middleware for our Express API.
在独立文件间并行工作:
```
Pane 1: "Create the database schema and migrations for the billing feature"
Pane 2: "Build the billing API endpoints in src/api/billing/"
Pane 3: "Create the billing dashboard UI components"
Pane 1: "创建计费功能的数据库模式和迁移"
Pane 2: " src/api/billing/ 中构建计费 API 端点"
Pane 3: "创建计费仪表板 UI 组件"
# Merge all, then do integration in main pane
# 合并所有内容,然后在主面板中进行集成
```
### 模式 3测试 + 修复循环
@@ -74,10 +74,10 @@ Pane 3: "Create the billing dashboard UI components"
在一个窗格中运行测试,在另一个窗格中修复:
```
Pane 1 (Watcher): "Run the test suite in watch mode. When tests fail,
summarize the failures."
窗格 1(观察者):“在监视模式下运行测试套件。当测试失败时,
总结失败原因。”
Pane 2 (Fixer): "Fix failing tests based on the error output from pane 1"
窗格 2(修复者):“根据窗格 1 的错误输出修复失败的测试”
```
### 模式 4跨套件
@@ -95,11 +95,11 @@ Pane 3 (Claude Code): "Write E2E tests for the checkout flow"
并行审查视角:
```
Pane 1: "Review src/api/ for security vulnerabilities"
Pane 2: "Review src/api/ for performance issues"
Pane 3: "Review src/api/ for test coverage gaps"
Pane 1: "审查 src/api/ 中的安全漏洞"
Pane 2: "审查 src/api/ 中的性能问题"
Pane 3: "审查 src/api/ 中的测试覆盖缺口"
# Merge all reviews into a single report
# 将所有审查合并为一份报告
```
## 最佳实践

View File

@@ -156,9 +156,9 @@ docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
同一 Compose 网络中的服务可通过服务名解析:
```
# From "app" container:
postgres://postgres:postgres@db:5432/app_dev # "db" resolves to the db container
redis://redis:6379/0 # "redis" resolves to the redis container
# 从 "app" 容器:
postgres://postgres:postgres@db:5432/app_dev # "db" 解析到 db 容器
redis://redis:6379/0 # "redis" 解析到 redis 容器
```
### 自定义网络
@@ -345,21 +345,21 @@ docker network inspect <project>_default
## 反模式
```
# BAD: Using docker compose in production without orchestration
# Use Kubernetes, ECS, or Docker Swarm for production multi-container workloads
# 错误做法:在生产环境中使用 docker compose 而不进行编排
# 生产环境多容器工作负载应使用 KubernetesECS Docker Swarm
# BAD: Storing data in containers without volumes
# Containers are ephemeral -- all data lost on restart without volumes
# 错误做法:在容器内存储数据而不使用卷
# 容器是临时性的——不使用卷时,重启会导致所有数据丢失
# BAD: Running as root
# Always create and use a non-root user
# 错误做法:以 root 用户身份运行
# 始终创建并使用非 root 用户
# BAD: Using :latest tag
# Pin to specific versions for reproducible builds
# 错误做法:使用 :latest 标签
# 固定到特定版本以实现可复现的构建
# BAD: One giant container with all services
# Separate concerns: one process per container
# 错误做法:将所有服务放入一个巨型容器
# 关注点分离:每个容器运行一个进程
# BAD: Putting secrets in docker-compose.yml
# Use .env files (gitignored) or Docker secrets
# 错误做法:将密钥放入 docker-compose.yml
# 使用 .env 文件(在 git 中忽略)或 Docker secrets
```

View File

@@ -208,7 +208,7 @@ npm test -- --testPathPattern="existing"
### 实施后
```
/eval report feature-name
/eval report 功能名称
```
生成完整的评估报告
@@ -220,9 +220,9 @@ npm test -- --testPathPattern="existing"
```
.claude/
evals/
feature-xyz.md # Eval definition
feature-xyz.log # Eval run history
baseline.json # Regression baselines
feature-xyz.md # Eval 定义
feature-xyz.log # Eval 运行历史
baseline.json # 回归基线
```
## 最佳实践

View File

@@ -40,7 +40,7 @@ origin: ECC
用于当前信息、新闻或事实的通用网页搜索。
```
web_search_exa(query: "latest AI developments 2026", numResults: 5)
web_search_exa(query: "2026年最新人工智能发展", numResults: 5)
```
**参数:**
@@ -73,27 +73,27 @@ get_code_context_exa(query: "Python asyncio patterns", tokensNum: 3000)
### 快速查找
```
web_search_exa(query: "Node.js 22 new features", numResults: 3)
web_search_exa(query: "Node.js 22 新功能", numResults: 3)
```
### 代码研究
```
get_code_context_exa(query: "Rust error handling patterns Result type", tokensNum: 3000)
get_code_context_exa(query: "Rust 错误处理模式 Result 类型", tokensNum: 3000)
```
### 公司或人物研究
```
web_search_exa(query: "Vercel funding valuation 2026", numResults: 3, category: "company")
web_search_exa(query: "site:linkedin.com/in AI safety researchers Anthropic", numResults: 5)
web_search_exa(query: "Vercel 2026年融资估值", numResults: 3, category: "company")
web_search_exa(query: "site:linkedin.com/in Anthropic AI安全研究员", numResults: 5)
```
### 技术深度研究
```
web_search_exa(query: "WebAssembly component model status and adoption", numResults: 5)
get_code_context_exa(query: "WebAssembly component model examples", tokensNum: 4000)
web_search_exa(query: "WebAssembly 组件模型状态与采用情况", numResults: 5)
get_code_context_exa(query: "WebAssembly 组件模型示例", tokensNum: 4000)
```
## 提示

View File

@@ -56,7 +56,7 @@ fal.ai MCP 提供以下工具:
generate(
app_id: "fal-ai/nano-banana-2",
input_data: {
"prompt": "a futuristic cityscape at sunset, cyberpunk style",
"prompt": "未来主义日落城市景观,赛博朋克风格",
"image_size": "landscape_16_9",
"num_images": 1,
"seed": 42
@@ -72,7 +72,7 @@ generate(
generate(
app_id: "fal-ai/nano-banana-pro",
input_data: {
"prompt": "professional product photo of wireless headphones on marble surface, studio lighting",
"prompt": "专业产品照片,无线耳机置于大理石表面,影棚灯光",
"image_size": "square",
"num_images": 1,
"guidance_scale": 7.5
@@ -95,10 +95,10 @@ generate(
使用 Nano Banana 2 并输入图像进行修复、扩展或风格迁移:
```
# First upload the source image
# 首先上传源图像
upload(file_path: "/path/to/image.png")
# Then generate with image input
# 然后使用图像输入进行生成
generate(
app_id: "fal-ai/nano-banana-2",
input_data: {
@@ -137,7 +137,7 @@ generate(
generate(
app_id: "fal-ai/kling-video/v3/pro",
input_data: {
"prompt": "ocean waves crashing on a rocky coast, dramatic clouds",
"prompt": "海浪拍打着岩石海岸,乌云密布",
"duration": "5s",
"aspect_ratio": "16:9"
}
@@ -152,7 +152,7 @@ generate(
generate(
app_id: "fal-ai/veo-3",
input_data: {
"prompt": "a bustling Tokyo street market at night, neon signs, crowd noise",
"prompt": "夜晚熙熙攘攘的东京街头市场,霓虹灯招牌,人群喧嚣",
"aspect_ratio": "16:9"
}
)

View File

@@ -17,9 +17,9 @@
### 黄金法则
```text
Each slide = exactly one viewport height.
Too much content = split into more slides.
Never scroll inside a slide.
每张幻灯片 = 恰好一个视口高度。
内容过多 = 分割成更多幻灯片。
切勿在幻灯片内部滚动。
```
### 内容密度限制

View File

@@ -369,17 +369,17 @@ func WriteAndFlush(w io.Writer, data []byte) error {
myproject/
├── cmd/
│ └── myapp/
│ └── main.go # Entry point
│ └── main.go # 入口点
├── internal/
│ ├── handler/ # HTTP handlers
│ ├── service/ # Business logic
│ ├── repository/ # Data access
│ └── config/ # Configuration
│ ├── handler/ # HTTP 处理器
│ ├── service/ # 业务逻辑
│ ├── repository/ # 数据访问
│ └── config/ # 配置
├── pkg/
│ └── client/ # Public API client
│ └── client/ # 公共 API 客户端
├── api/
│ └── v1/ # API definitions (proto, OpenAPI)
├── testdata/ # Test fixtures
│ └── v1/ # API 定义(proto, OpenAPI
├── testdata/ # 测试夹具
├── go.mod
├── go.sum
└── Makefile

View File

@@ -21,10 +21,10 @@ origin: ECC
### 红-绿-重构循环
```
RED → Write a failing test first
GREEN → Write minimal code to pass the test
REFACTOR → Improve code while keeping tests green
REPEAT → Continue with next requirement
RED → 首先编写一个失败的测试
GREEN → 编写最少的代码来通过测试
REFACTOR → 改进代码,同时保持测试通过
REPEAT → 继续处理下一个需求
```
### Go 中的分步 TDD

View File

@@ -38,15 +38,15 @@ origin: ECC
┌─────────────────────────────────────────────┐
│ │
│ ┌──────────┐ ┌──────────┐ │
│ │ DISPATCH │─────▶│ EVALUATE │ │
│ │ 调度 │─────▶│ 评估 │ │
│ └──────────┘ └──────────┘ │
│ ▲ │ │
│ │ ▼ │
│ ┌──────────┐ ┌──────────┐ │
│ │ LOOP │◀─────│ REFINE │ │
│ │ 循环 │◀─────│ 优化 │ │
│ └──────────┘ └──────────┘ │
│ │
Max 3 cycles, then proceed
最多 3 次循环,然后继续
└─────────────────────────────────────────────┘
```
@@ -148,42 +148,42 @@ async function iterativeRetrieve(task, maxCycles = 3) {
### 示例 1错误修复上下文
```
Task: "Fix the authentication token expiry bug"
任务:"修复身份验证令牌过期错误"
Cycle 1:
DISPATCH: Search for "token", "auth", "expiry" in src/**
EVALUATE: Found auth.ts (0.9), tokens.ts (0.8), user.ts (0.3)
REFINE: Add "refresh", "jwt" keywords; exclude user.ts
循环 1:
分发:在 src/** 中搜索 "token"、"auth"、"expiry"
评估:找到 auth.ts (0.9)、tokens.ts (0.8)、user.ts (0.3)
优化:添加 "refresh"、"jwt" 关键词;排除 user.ts
Cycle 2:
DISPATCH: Search refined terms
EVALUATE: Found session-manager.ts (0.95), jwt-utils.ts (0.85)
REFINE: Sufficient context (2 high-relevance files)
循环 2:
分发:搜索优化后的关键词
评估:找到 session-manager.ts (0.95)、jwt-utils.ts (0.85)
优化:上下文已充分(2 个高相关文件)
Result: auth.ts, tokens.ts, session-manager.ts, jwt-utils.ts
结果:auth.ts、tokens.ts、session-manager.ts、jwt-utils.ts
```
### 示例 2功能实现
```
Task: "Add rate limiting to API endpoints"
任务:"为API端点添加速率限制"
Cycle 1:
DISPATCH: Search "rate", "limit", "api" in routes/**
EVALUATE: No matches - codebase uses "throttle" terminology
REFINE: Add "throttle", "middleware" keywords
循环 1:
分发:在 routes/** 中搜索 "rate"、"limit"、"api"
评估:无匹配项 - 代码库使用 "throttle" 术语
优化:添加 "throttle"、"middleware" 关键词
Cycle 2:
DISPATCH: Search refined terms
EVALUATE: Found throttle.ts (0.9), middleware/index.ts (0.7)
REFINE: Need router patterns
循环 2:
分发:搜索优化后的术语
评估:找到 throttle.ts (0.9)、middleware/index.ts (0.7)
优化:需要路由模式
Cycle 3:
DISPATCH: Search "router", "express" patterns
EVALUATE: Found router-setup.ts (0.8)
REFINE: Sufficient context
循环 3:
分发:搜索 "router"、"express" 模式
评估:找到 router-setup.ts (0.8)
优化:上下文已足够
Result: throttle.ts, middleware/index.ts, router-setup.ts
结果:throttle.ts、middleware/index.ts、router-setup.ts
```
## 与智能体集成

View File

@@ -23,9 +23,9 @@ origin: ECC
```
Application
└── viewModelScope (ViewModel)
└── coroutineScope { } (structured child)
├── async { } (concurrent task)
└── async { } (concurrent task)
└── coroutineScope { } (结构化子作用域)
├── async { } (并发任务)
└── async { } (并发任务)
```
始终使用结构化并发——绝不使用 `GlobalScope`

View File

@@ -24,28 +24,28 @@ origin: ECC
```text
src/main/kotlin/
├── com/example/
│ ├── Application.kt # Entry point, module configuration
│ ├── Application.kt # 入口点,模块配置
│ ├── plugins/
│ │ ├── Routing.kt # Route definitions
│ │ ├── Serialization.kt # Content negotiation setup
│ │ ├── Authentication.kt # Auth configuration
│ │ ├── StatusPages.kt # Error handling
│ │ └── CORS.kt # CORS configuration
│ │ ├── Routing.kt # 路由定义
│ │ ├── Serialization.kt # 内容协商设置
│ │ ├── Authentication.kt # 认证配置
│ │ ├── StatusPages.kt # 错误处理
│ │ └── CORS.kt # CORS 配置
│ ├── routes/
│ │ ├── UserRoutes.kt # /users endpoints
│ │ ├── AuthRoutes.kt # /auth endpoints
│ │ └── HealthRoutes.kt # /health endpoints
│ │ ├── UserRoutes.kt # /users 端点
│ │ ├── AuthRoutes.kt # /auth 端点
│ │ └── HealthRoutes.kt # /health 端点
│ ├── models/
│ │ ├── User.kt # Domain models
│ │ └── ApiResponse.kt # Response envelopes
│ │ ├── User.kt # 领域模型
│ │ └── ApiResponse.kt # 响应封装
│ ├── services/
│ │ ├── UserService.kt # Business logic
│ │ └── AuthService.kt # Auth logic
│ │ ├── UserService.kt # 业务逻辑
│ │ └── AuthService.kt # 认证逻辑
│ ├── repositories/
│ │ ├── UserRepository.kt # Data access interface
│ │ ├── UserRepository.kt # 数据访问接口
│ │ └── ExposedUserRepository.kt
│ └── di/
│ └── AppModule.kt # Koin modules
│ └── AppModule.kt # Koin 模块
src/test/kotlin/
├── com/example/
│ ├── routes/

View File

@@ -43,10 +43,10 @@ origin: ECC
#### RED-GREEN-REFACTOR 周期
```
RED -> Write a failing test first
GREEN -> Write minimal code to pass the test
REFACTOR -> Improve code while keeping tests green
REPEAT -> Continue with next requirement
RED -> 首先编写一个失败的测试
GREEN -> 编写最少的代码使测试通过
REFACTOR -> 改进代码同时保持测试通过
REPEAT -> 继续下一个需求
```
#### Kotlin 中逐步进行 TDD

View File

@@ -34,20 +34,20 @@ origin: ECC
```
app/
├── Actions/ # Single-purpose use cases
├── Actions/ # 单一用途的用例
├── Console/
├── Events/
├── Exceptions/
├── Http/
│ ├── Controllers/
│ ├── Middleware/
│ ├── Requests/ # Form request validation
│ └── Resources/ # API resources
│ ├── Requests/ # 表单请求验证
│ └── Resources/ # API 资源
├── Jobs/
├── Models/
├── Policies/
├── Providers/
├── Services/ # Coordinating domain services
├── Services/ # 协调领域服务
└── Support/
config/
database/

View File

@@ -372,19 +372,19 @@ for my $child (path('src')->children(qr/\.pl$/)) {
MyApp/
├── lib/
│ └── MyApp/
│ ├── App.pm # Main module
│ ├── Config.pm # Configuration
│ ├── DB.pm # Database layer
│ └── Util.pm # Utilities
│ ├── App.pm # 主模块
│ ├── Config.pm # 配置
│ ├── DB.pm # 数据库层
│ └── Util.pm # 工具集
├── bin/
│ └── myapp # Entry-point script
│ └── myapp # 入口脚本
├── t/
│ ├── 00-load.t # Compilation tests
│ ├── unit/ # Unit tests
│ └── integration/ # Integration tests
├── cpanfile # Dependencies
├── Makefile.PL # Build system
└── .perlcriticrc # Linting config
│ ├── 00-load.t # 编译测试
│ ├── unit/ # 单元测试
│ └── integration/ # 集成测试
├── cpanfile # 依赖项
├── Makefile.PL # 构建系统
└── .perlcriticrc # 代码检查配置
```
### 导出器模式
@@ -407,12 +407,12 @@ sub trim($str) { $str =~ s/^\s+|\s+$//gr }
### perltidy 配置 (.perltidyrc)
```text
-i=4 # 4-space indent
-l=100 # 100-char line length
-ci=4 # continuation indent
-ce # cuddled else
-bar # opening brace on same line
-nolq # don't outdent long quoted strings
-i=4 # 4 空格缩进
-l=100 # 100 字符行宽
-ci=4 # 续行缩进
-ce # else 与右花括号同行
-bar # 左花括号与语句同行
-nolq # 不对长引用字符串进行反向缩进
```
### perlcritic 配置 (.perlcriticrc)

View File

@@ -229,19 +229,19 @@ done_testing;
```text
t/
├── 00-load.t # Verify modules compile
├── 01-basic.t # Core functionality
├── 00-load.t # 验证模块编译
├── 01-basic.t # 核心功能
├── unit/
│ ├── config.t # Unit tests by module
│ ├── config.t # 按模块划分的单元测试
│ ├── user.t
│ └── util.t
├── integration/
│ ├── database.t
│ └── api.t
├── lib/
│ └── TestHelper.pm # Shared test utilities
│ └── TestHelper.pm # 共享测试工具
└── fixtures/
├── config.json # Test data files
├── config.json # 测试数据文件
└── users.csv
```

View File

@@ -22,24 +22,24 @@ Plankton作者@alxfazio的集成参考这是一个用于 Claude Code
每次 Claude Code 编辑或写入文件时Plankton 的 `multi_linter.sh` PostToolUse 钩子都会运行:
```
Phase 1: Auto-Format (Silent)
├─ Runs formatters (ruff format, biome, shfmt, taplo, markdownlint)
├─ Fixes 40-50% of issues silently
└─ No output to main agent
阶段 1:自动格式化(静默)
├─ 运行格式化工具(ruff format、biome、shfmt、taplo、markdownlint)
├─ 静默修复 40-50% 的问题
└─ 无输出至主代理
Phase 2: Collect Violations (JSON)
├─ Runs linters and collects unfixable violations
├─ Returns structured JSON: {line, column, code, message, linter}
└─ Still no output to main agent
阶段 2:收集违规项(JSON)
├─ 运行 linter 并收集无法修复的违规项
├─ 返回结构化 JSON:{line, column, code, message, linter}
└─ 仍无输出至主代理
Phase 3: Delegate + Verify
├─ Spawns claude -p subprocess with violations JSON
├─ Routes to model tier based on violation complexity:
│ ├─ Haiku: formatting, imports, style (E/W/F codes) — 120s timeout
│ ├─ Sonnet: complexity, refactoring (C901, PLR codes) — 300s timeout
│ └─ Opus: type system, deep reasoning (unresolved-attribute) — 600s timeout
├─ Re-runs Phase 1+2 to verify fixes
└─ Exit 0 if clean, Exit 2 if violations remain (reported to main agent)
阶段 3:委托 + 验证
├─ 生成带有违规项 JSON 的 claude -p 子进程
├─ 根据违规项复杂度路由至模型层级:
│ ├─ Haiku:格式化、导入、样式(E/W/F 代码)—— 120 秒超时
│ ├─ Sonnet:复杂度、重构(C901、PLR 代码)—— 300 秒超时
│ └─ Opus:类型系统、深度推理(unresolved-attribute)—— 600 秒超时
├─ 重新运行阶段 1+2 以验证修复
└─ 若清理完毕则退出码 0,若违规项仍存在则退出码 2(报告至主代理)
```
### 主代理看到的内容

View File

@@ -37,23 +37,23 @@ origin: ECC
```
┌─────────────────────────────────────────────────────────────┐
Frontend
前端
│ Next.js 15 + TypeScript + TailwindCSS │
Deployed: Vercel / Cloud Run │
部署平台:Vercel / Cloud Run │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
Backend
后端
│ FastAPI + Python 3.11 + Pydantic │
Deployed: Cloud Run │
部署平台:Cloud Run │
└─────────────────────────────────────────────────────────────┘
┌───────────────┼───────────────┐
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Supabase │ │ Claude │ │ Redis │
Database │ │ API │ │ Cache
数据库 │ │ API │ │ 缓存
└──────────┘ └──────────┘ └──────────┘
```
@@ -65,31 +65,31 @@ origin: ECC
project/
├── frontend/
│ └── src/
│ ├── app/ # Next.js app router pages
│ │ ├── api/ # API routes
│ │ ├── (auth)/ # Auth-protected routes
│ │ └── workspace/ # Main app workspace
│ ├── components/ # React components
│ │ ├── ui/ # Base UI components
│ │ ├── forms/ # Form components
│ │ └── layouts/ # Layout components
│ ├── hooks/ # Custom React hooks
│ ├── lib/ # Utilities
│ ├── types/ # TypeScript definitions
│ └── config/ # Configuration
│ ├── app/ # Next.js 应用路由页面
│ │ ├── api/ # API 路由
│ │ ├── (auth)/ # 受身份验证保护的路由
│ │ └── workspace/ # 主应用工作区
│ ├── components/ # React 组件
│ │ ├── ui/ # 基础 UI 组件
│ │ ├── forms/ # 表单组件
│ │ └── layouts/ # 布局组件
│ ├── hooks/ # 自定义 React 钩子
│ ├── lib/ # 工具库
│ ├── types/ # TypeScript 类型定义
│ └── config/ # 配置文件
├── backend/
│ ├── routers/ # FastAPI route handlers
│ ├── models.py # Pydantic models
│ ├── main.py # FastAPI app entry
│ ├── auth_system.py # Authentication
│ ├── database.py # Database operations
│ ├── services/ # Business logic
│ └── tests/ # pytest tests
│ ├── routers/ # FastAPI 路由处理器
│ ├── models.py # Pydantic 模型
│ ├── main.py # FastAPI 应用入口
│ ├── auth_system.py # 身份验证模块
│ ├── database.py # 数据库操作
│ ├── services/ # 业务逻辑层
│ └── tests/ # pytest 测试
├── deploy/ # Deployment configs
├── docs/ # Documentation
└── scripts/ # Utility scripts
├── deploy/ # 部署配置
├── docs/ # 文档
└── scripts/ # 工具脚本
```
***

View File

@@ -296,7 +296,7 @@ Research → Plan → Implement (TDD) → Review → Verify → Commit
**用户输入:**
```
Add a REST API endpoint for user profile updates with validation
添加一个用于用户资料更新的 REST API 端点,并包含验证功能
```
**阶段 0 检测到:** `go.mod`,使用 Go 1.22, Chi router
@@ -304,30 +304,30 @@ Add a REST API endpoint for user profile updates with validation
**优化提示(完整):**
```
Add a REST API endpoint for user profile updates (PATCH /api/users/:id).
添加用于用户资料更新的 REST API 端点(PATCH /api/users/:id)。
Tech stack: Go 1.22 + Chi router (detected from project)
技术栈:Go 1.22 + Chi 路由器(根据项目检测得出)
Requirements:
- PATCH /api/users/:id — partial update of user profile
- Input validation for fields: name, email, avatar_url, bio
- Auth: require valid token, users can only update own profile
- Return 200 with updated user on success
- Return 400 with validation errors on invalid input
- Return 401/403 for auth failures
- Follow existing API patterns in the codebase
要求:
- PATCH /api/users/:id — 部分更新用户资料
- 对字段进行输入验证:name、email、avatar_url、bio
- 身份验证:需要有效令牌,用户只能更新自己的资料
- 成功时返回 200 及更新后的用户信息
- 输入无效时返回 400 及验证错误信息
- 身份验证失败时返回 401/403
- 遵循代码库中现有的 API 模式
Workflow:
1. /plan the endpoint structure, middleware chain, and validation logic
2. /tdd — write table-driven tests for success, validation failure, auth failure, not-found
3. Implement following existing handler patterns
工作流程:
1. /plan 端点结构、中间件链和验证逻辑
2. /tdd — 为成功、验证失败、身份验证失败、未找到等情况编写表格驱动测试
3. 按照现有处理器模式实现
4. /go-review
5. /verify — run full test suite, confirm no regressions
5. /verify — 运行完整的测试套件,确认没有回归问题
Do not:
- Modify existing endpoints
- Change the database schema (use existing user table)
- Add new dependencies without checking existing ones first (use search-first skill)
禁止事项:
- 修改现有端点
- 更改数据库模式(使用现有的 user 表)
- 在未先检查现有依赖项的情况下添加新依赖(使用搜索优先技能)
```
### 示例 3史诗级项目
@@ -335,33 +335,33 @@ Do not:
**用户输入:**
```
Migrate our monolith to microservices
将我们的单体架构迁移到微服务
```
**优化提示(完整):**
```
Use the blueprint skill to plan: "Migrate monolith to microservices architecture"
使用蓝图技能规划:“将单体架构迁移至微服务架构”
Before executing, answer these questions in the blueprint:
1. Which domain boundaries exist in the current monolith?
2. Which service should be extracted first (lowest coupling)?
3. Communication pattern: REST APIs, gRPC, or event-driven (Kafka/RabbitMQ)?
4. Database strategy: shared DB initially or database-per-service from start?
5. Deployment target: Kubernetes, Docker Compose, or serverless?
在执行前,请在蓝图中回答以下问题:
1. 当前单体架构中存在哪些领域边界?
2. 应该首先提取哪个服务(耦合度最低)?
3. 通信模式:REST API、gRPC 还是事件驱动(Kafka/RabbitMQ)?
4. 数据库策略:初期使用共享数据库,还是一开始就采用“每个服务一个数据库”?
5. 部署目标:Kubernetes、Docker Compose 还是无服务器?
The blueprint should produce phases like:
- Phase 1: Identify service boundaries and create domain map
- Phase 2: Set up infrastructure (API gateway, service mesh, CI/CD per service)
- Phase 3: Extract first service (strangler fig pattern)
- Phase 4: Verify with integration tests, then extract next service
- Phase N: Decommission monolith
蓝图应生成如下阶段:
- 阶段 1识别服务边界并创建领域映射
- 阶段 2搭建基础设施API 网关、服务网格、每个服务的 CI/CD
- 阶段 3提取第一个服务采用绞杀者模式
- 阶段 4通过集成测试验证然后提取下一个服务
- 阶段 N停用单体架构
Each phase = 1 PR, with /verify gates between phases.
Use /save-session between phases. Use /resume-session to continue.
Use git worktrees for parallel service extraction when dependencies allow.
每个阶段 = 1 个 PR阶段之间设置 /verify 检查点。
阶段之间使用 /save-session。使用 /resume-session 继续。
在依赖关系允许时,使用 git worktrees 进行并行服务提取。
Recommended: Opus 4.6 for blueprint planning, Sonnet 4.6 for phase execution.
推荐:使用 Opus 4.6 进行蓝图规划,使用 Sonnet 4.6 执行各阶段。
```
***

View File

@@ -595,18 +595,18 @@ def test_with_tmpdir(tmpdir):
```
tests/
├── conftest.py # Shared fixtures
├── conftest.py # 共享 fixtures
├── __init__.py
├── unit/ # Unit tests
├── unit/ # 单元测试
│ ├── __init__.py
│ ├── test_models.py
│ ├── test_utils.py
│ └── test_services.py
├── integration/ # Integration tests
├── integration/ # 集成测试
│ ├── __init__.py
│ ├── test_api.py
│ └── test_database.py
└── e2e/ # End-to-end tests
└── e2e/ # 端到端测试
├── __init__.py
└── test_user_flow.py
```

View File

@@ -18,30 +18,27 @@ origin: ECC
## 决策框架
```
Is the text format consistent and repeating?
├── Yes (>90% follows a pattern) → Start with Regex
│ ├── Regex handles 95%+ → Done, no LLM needed
│ └── Regex handles <95% → Add LLM for edge cases only
└── No (free-form, highly variable) → Use LLM directly
文本格式是否一致且重复?
├── (>90% 遵循某种模式) → 从正则表达式开始
│ ├── 正则表达式处理 95%+ → 完成,无需 LLM
│ └── 正则表达式处理 <95% → 仅为边缘情况添加 LLM
└── 否 (自由格式,高度可变) → 直接使用 LLM
```
## 架构模式
```
Source Text
[正则表达式解析器] ─── 提取结构95-98% 准确率)
[Regex Parser] ─── Extracts structure (95-98% accuracy)
[文本清理器] ─── 去除噪声(标记、页码、伪影)
[Text Cleaner] ─── Removes noise (markers, page numbers, artifacts)
[置信度评分器] ─── 标记低置信度提取项
[Confidence Scorer] ─── Flags low-confidence extractions
├── 高置信度≥0.95)→ 直接输出
├── High confidence (≥0.95) → Direct output
└── Low confidence (<0.95) → [LLM Validator] → Output
└── 低置信度(<0.95)→ [LLM 验证器] → 输出
```
## 实现

View File

@@ -37,12 +37,12 @@ bash ~/.claude/skills/rules-distill/scripts/scan-rules.sh
#### 1c. 呈现给用户
```
Rules Distillation — Phase 1: Inventory
规则提炼 — 第一阶段:清点
────────────────────────────────────────
Skills: {N} files scanned
Rules: {M} files ({K} headings indexed)
技能:已扫描 {N} 个文件
规则:{M} 个文件(已索引 {K} 个标题)
Proceeding to cross-read analysis...
正在进行交叉阅读分析...
```
### 阶段 2通读、匹配与裁决LLM判断
@@ -65,56 +65,56 @@ Proceeding to cross-read analysis...
使用以下提示启动通用智能体:
````
You are an analyst who cross-reads skills to extract principles that should be promoted to rules.
你是一位分析师,负责交叉阅读多项技能,提取应当提升为规则的原则。
## Input
- Skills: {full text of skills in this batch}
- Existing rules: {full text of all rule files}
## 输入
- 技能:{本批次技能的全部文本}
- 现有规则:{所有规则文件的全部文本}
## Extraction Criteria
## 提取标准
Include a candidate ONLY if ALL of these are true:
**仅当**满足以下**所有**条件时,才包含一个候选原则:
1. **Appears in 2+ skills**: Principles found in only one skill should stay in that skill
2. **Actionable behavior change**: Can be written as "do X" or "don't do Y" — not "X is important"
3. **Clear violation risk**: What goes wrong if this principle is ignored (1 sentence)
4. **Not already in rules**: Check the full rules text — including concepts expressed in different words
1. **出现在 2+ 项技能中**:仅出现在一项技能中的原则应保留在该技能中
2. **可操作的行为改变**:可以写成“做 X”或“不要做 Y”的形式——而不是“X 很重要”
3. **明确的违规风险**如果忽略此原则会出什么问题1 句话)
4. **尚未存在于规则中**:检查全部规则文本——包括以不同措辞表达的概念
## Matching & Verdict
## 匹配与裁决
For each candidate, compare against the full rules text and assign a verdict:
对于每个候选原则,对照全部规则文本进行比较并给出裁决:
- **Append**: Add to an existing section of an existing rule file
- **Revise**: Existing rule content is inaccurate or insufficient — propose a correction
- **New Section**: Add a new section to an existing rule file
- **New File**: Create a new rule file
- **Already Covered**: Sufficiently covered in existing rules (even if worded differently)
- **Too Specific**: Should remain at the skill level
- **追加**:添加到现有规则文件的现有章节
- **修订**:现有规则内容不准确或不充分——提出修正建议
- **新章节**:在现有规则文件中添加新章节
- **新文件**:创建新的规则文件
- **已涵盖**:现有规则已充分涵盖(即使措辞不同)
- **过于具体**:应保留在技能层面
## Output Format (per candidate)
## 输出格式(每个候选原则)
```json
{
"principle": "1-2 sentences in 'do X' / 'don't do Y' form",
"evidence": ["skill-name: §Section", "skill-name: §Section"],
"violation_risk": "1 sentence",
"verdict": "Append / Revise / New Section / New File / Already Covered / Too Specific",
"target_rule": "filename §Section, or 'new'",
"confidence": "high / medium / low",
"draft": "Draft text for Append/New Section/New File verdicts",
"principle": "1-2 句话,采用 '做 X' / '不要做 Y' 的形式",
"evidence": ["技能名称: §章节", "技能名称: §章节"],
"violation_risk": "1 句话",
"verdict": "追加 / 修订 / 新章节 / 新文件 / 已涵盖 / 过于具体",
"target_rule": "文件名 §章节,或 '新建'",
"confidence": "高 / 中 / 低",
"draft": "针对'追加'/'新章节'/'新文件'裁决的草案文本",
"revision": {
"reason": "Why the existing content is inaccurate or insufficient (Revise only)",
"before": "Current text to be replaced (Revise only)",
"after": "Proposed replacement text (Revise only)"
"reason": "为什么现有内容不准确或不充分(仅限'修订'裁决)",
"before": "待替换的当前文本(仅限'修订'裁决)",
"after": "提议的替换文本(仅限'修订'裁决)"
}
}
```
## Exclude
## 排除
- Obvious principles already in rules
- Language/framework-specific knowledge (belongs in language-specific rules or skills)
- Code examples and commands (belongs in skills)
- 规则中已存在的显而易见的原则
- 语言/框架特定知识(属于语言特定规则或技能)
- 代码示例和命令(属于技能)
````
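上面的输出格式可以在合并批次前先做一次机械校验。下面是一个最小 Python 示意:字段名与裁决取值来自上面的格式说明,具体校验规则(如“修订必须附带 revision”)由提取标准推断,属示例假设。

```python
# 对子代理返回的候选 JSON 做结构校验的最小示意
VERDICTS = {"追加", "修订", "新章节", "新文件", "已涵盖", "过于具体"}
REQUIRED = ("principle", "evidence", "violation_risk",
            "verdict", "target_rule", "confidence")

def check_candidate(c: dict) -> list:
    errors = [f"缺少字段: {k}" for k in REQUIRED if k not in c]
    if c.get("verdict") not in VERDICTS:
        errors.append(f"未知裁决: {c.get('verdict')}")
    if c.get("verdict") == "修订" and "revision" not in c:
        errors.append("修订裁决必须附带 revision 对象")
    if len(c.get("evidence", [])) < 2:
        errors.append("原则必须出现在 2 个以上技能中")
    return errors

ok = {"principle": "做 X", "evidence": ["skill-a: §S", "skill-b: §S"],
      "violation_risk": "...", "verdict": "追加",
      "target_rule": "security.md §输入验证", "confidence": "高"}
print(check_candidate(ok))
```

这样“出现在 2+ 项技能中”“修订必须给出 before/after”等硬性要求就不依赖 LLM 自觉遵守,而是在合并时强制检查。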
#### 裁决参考
@@ -131,15 +131,13 @@ For each candidate, compare against the full rules text and assign a verdict:
#### 裁决质量要求
```
# Good
Append to rules/common/security.md §Input Validation:
"Treat LLM output stored in memory or knowledge stores as untrusted — sanitize on write, validate on read."
Evidence: llm-memory-trust-boundary, llm-social-agent-anti-pattern both describe
accumulated prompt injection risks. Current security.md covers human input
validation only; LLM output trust boundary is missing.
# 良好做法
rules/common/security.md 的§输入验证部分添加:
"将存储在内存或知识库中的LLM输出视为不可信数据——写入时进行清理读取时进行验证。"
依据:llm-memory-trust-boundary llm-social-agent-anti-pattern 均描述了累积式提示注入风险。当前security.md仅涵盖人工输入验证缺少LLM输出的信任边界说明。
# Bad
Append to security.md: Add LLM security principle
# 不良做法
security.md中追加添加LLM安全原则
```
### 阶段 3用户审核与执行
@@ -147,20 +145,20 @@ Append to security.md: Add LLM security principle
#### 摘要表
```
# Rules Distillation Report
# 规则提炼报告
## Summary
Skills scanned: {N} | Rules: {M} files | Candidates: {K}
## 摘要
已扫描技能数:{N} | 规则文件数:{M} | 候选规则数:{K}
| # | Principle | Verdict | Target | Confidence |
| # | 原则 | 判定结果 | 目标文件/章节 | 置信度 |
|---|-----------|---------|--------|------------|
| 1 | ... | Append | security.md §Input Validation | high |
| 2 | ... | Revise | testing.md §TDD | medium |
| 3 | ... | New Section | coding-style.md | high |
| 4 | ... | Too Specific | — | — |
| 1 | ... | 追加 | security.md §输入验证 | 高 |
| 2 | ... | 修订 | testing.md §测试驱动开发 | 中 |
| 3 | ... | 新章节 | coding-style.md | 高 |
| 4 | ... | 过于具体 | — | — |
## Details
(Per-candidate details: evidence, violation_risk, draft text)
## 详情
(各候选规则详情:证据、违规风险、草拟文本)
```
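这张摘要表本身可以从候选 JSON 机械生成,避免手工誊写出错。下面是一个最小 Python 示意(列名沿用上表,数据为假设示例):

```python
# 由候选列表渲染摘要表的最小示意
def summary_table(cands: list) -> str:
    rows = ["| # | 原则 | 判定结果 | 目标文件/章节 | 置信度 |",
            "|---|------|---------|--------------|--------|"]
    for i, c in enumerate(cands, 1):
        rows.append(f"| {i} | {c['principle']} | {c['verdict']} "
                    f"| {c['target_rule']} | {c['confidence']} |")
    return "\n".join(rows)

print(summary_table([
    {"principle": "...", "verdict": "追加",
     "target_rule": "security.md §输入验证", "confidence": "高"},
    {"principle": "...", "verdict": "过于具体",
     "target_rule": "—", "confidence": "—"},
]))
```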
#### 用户操作
@@ -211,51 +209,51 @@ Skills scanned: {N} | Rules: {M} files | Candidates: {K}
```
$ /rules-distill
Rules Distillation — Phase 1: Inventory
规则提炼 — 第一阶段:清点
────────────────────────────────────────
Skills: 56 files scanned
Rules: 22 files (75 headings indexed)
技能:已扫描 56 个文件
规则22 个文件(已索引 75 个标题)
Proceeding to cross-read analysis...
正在进行交叉阅读分析...
[Subagent analysis: Batch 1 (agent/meta skills) ...]
[Subagent analysis: Batch 2 (coding/pattern skills) ...]
[Cross-batch merge: 2 duplicates removed, 1 cross-batch candidate promoted]
[子代理分析:批次 1 (agent/meta skills) ...]
[子代理分析:批次 2 (coding/pattern skills) ...]
[跨批次合并:已移除 2 个重复项1 个跨批次候选被提升]
# Rules Distillation Report
# 规则提炼报告
## Summary
Skills scanned: 56 | Rules: 22 files | Candidates: 4
## 摘要
已扫描技能56 | 规则22 个文件 | 候选:4
| # | Principle | Verdict | Target | Confidence |
| # | 原则 | 判定 | 目标 | 置信度 |
|---|-----------|---------|--------|------------|
| 1 | LLM output: normalize, type-check, sanitize before reuse | New Section | coding-style.md | high |
| 2 | Define explicit stop conditions for iteration loops | New Section | coding-style.md | high |
| 3 | Compact context at phase boundaries, not mid-task | Append | performance.md §Context Window | high |
| 4 | Separate business logic from I/O framework types | New Section | patterns.md | high |
| 1 | LLM 输出:重用前进行规范化、类型检查、清理 | 新章节 | coding-style.md | |
| 2 | 为迭代循环定义明确的停止条件 | 新章节 | coding-style.md | |
| 3 | 在阶段边界压缩上下文,而非任务中途 | 追加 | performance.md §Context Window | |
| 4 | 将业务逻辑与 I/O 框架类型分离 | 新章节 | patterns.md | |
## Details
## 详情
### 1. LLM Output Validation
Verdict: New Section in coding-style.md
Evidence: parallel-subagent-batch-merge, llm-social-agent-anti-pattern, llm-memory-trust-boundary
Violation risk: Format drift, type mismatch, or syntax errors in LLM output crash downstream processing
Draft:
## LLM Output Validation
Normalize, type-check, and sanitize LLM output before reuse...
See skill: parallel-subagent-batch-merge, llm-memory-trust-boundary
### 1. LLM 输出验证
判定:在 coding-style.md 中新建章节
证据:parallel-subagent-batch-merge, llm-social-agent-anti-pattern, llm-memory-trust-boundary
违规风险LLM 输出的格式漂移、类型不匹配或语法错误导致下游处理崩溃
草案:
## LLM 输出验证
在重用 LLM 输出前,请进行规范化、类型检查和清理...
参见技能:parallel-subagent-batch-merge, llm-memory-trust-boundary
[... details for candidates 2-4 ...]
[... 候选 2-4 的详情 ...]
Approve, modify, or skip each candidate by number:
> User: Approve 1, 3. Skip 2, 4.
按编号批准、修改或跳过每个候选:
> 用户:批准 1, 3。跳过 2, 4
Applied: coding-style.md §LLM Output Validation
Applied: performance.md §Context Window Management
Skipped: Iteration Bounds
Skipped: Boundary Type Conversion
已应用:coding-style.md §LLM 输出验证
已应用:performance.md §上下文窗口管理
已跳过:迭代边界
已跳过:边界类型转换
Results saved to results.json
结果已保存至 results.json
```
## 设计原则

View File

@@ -397,19 +397,19 @@ my_app/
├── src/
│ ├── main.rs
│ ├── lib.rs
│ ├── auth/ # Domain module
│ ├── auth/ # 领域模块
│ │ ├── mod.rs
│ │ ├── token.rs
│ │ └── middleware.rs
│ ├── orders/ # Domain module
│ ├── orders/ # 领域模块
│ │ ├── mod.rs
│ │ ├── model.rs
│ │ └── service.rs
│ └── db/ # Infrastructure
│ └── db/ # 基础设施
│ ├── mod.rs
│ └── pool.rs
├── tests/ # Integration tests
├── benches/ # Benchmarks
├── tests/ # 集成测试
├── benches/ # 基准测试
└── Cargo.toml
```

View File

@@ -31,10 +31,10 @@ origin: ECC
### RED-GREEN-REFACTOR 循环
```
RED → Write a failing test first
GREEN → Write minimal code to pass the test
REFACTOR → Improve code while keeping tests green
REPEAT → Continue with next requirement
RED → 先写一个失败的测试
GREEN → 编写最少代码使测试通过
REFACTOR → 重构代码,同时保持测试通过
REPEAT → 继续下一个需求
```
### Rust 中的分步 TDD
@@ -161,10 +161,10 @@ fn panics_with_specific_message() {
my_crate/
├── src/
│ └── lib.rs
├── tests/ # Integration tests
│ ├── api_test.rs # Each file is a separate test binary
├── tests/ # 集成测试
│ ├── api_test.rs # 每个文件都是一个独立的测试二进制文件
│ ├── db_test.rs
│ └── common/ # Shared test utilities
│ └── common/ # 共享测试工具
│ └── mod.rs
```

View File

@@ -21,29 +21,29 @@ origin: ECC
```
┌─────────────────────────────────────────────┐
│ 1. NEED ANALYSIS
Define what functionality is needed
Identify language/framework constraints
│ 1. 需求分析
确定所需功能
识别语言/框架限制
├─────────────────────────────────────────────┤
│ 2. PARALLEL SEARCH (researcher agent)
│ 2. 并行搜索(研究员代理)
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ npm / │ │ MCP / │ │ GitHub / │ │
│ │ PyPI │ │ Skills │ │ Web │ │
│ │ PyPI │ │ 技能 │ │ 网络 │ │
│ └──────────┘ └──────────┘ └──────────┘ │
├─────────────────────────────────────────────┤
│ 3. EVALUATE
Score candidates (functionality, maint,
community, docs, license, deps)
│ 3. 评估
对候选方案进行评分(功能、维护、
社区、文档、许可证、依赖)
├─────────────────────────────────────────────┤
│ 4. DECIDE
│ 4. 决策
│ ┌─────────┐ ┌──────────┐ ┌─────────┐ │
│ │ Adopt │ │ Extend │ │ Build │ │
│ │ as-is │ │ /Wrap │ │ Custom │ │
│ │ 采用 │ │ 扩展 │ │ 构建 │ │
│ │ 原样 │ │ /包装 │ │ 定制 │ │
│ └─────────┘ └──────────┘ └─────────┘ │
├─────────────────────────────────────────────┤
│ 5. IMPLEMENT
Install package / Configure MCP /
Write minimal custom code
│ 5. 实施
安装包 / 配置 MCP /
编写最小化自定义代码
└─────────────────────────────────────────────┘
```
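其中第 3 步“评估”可以写成一个简单的加权打分。下面是一个最小 Python 示意:六个维度取自上图,具体权重与分值为示例假设:

```python
# 候选库加权打分的最小示意(每个维度 0-10 分,权重为假设)
WEIGHTS = {"functionality": 3, "maintenance": 2, "community": 1,
           "docs": 1, "license": 2, "deps": 1}

def score(candidate: dict) -> float:
    # 按权重求加权平均,满分 10
    total = sum(WEIGHTS.values())
    return sum(candidate[k] * w for k, w in WEIGHTS.items()) / total

pkg = {"functionality": 9, "maintenance": 8, "community": 7,
       "docs": 8, "license": 10, "deps": 6}
print(round(score(pkg), 1))  # → 8.4
```

把“功能”和“许可证”权重设高,是因为功能不符或许可证冲突会直接否决候选;权重应按项目约束调整。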
@@ -73,13 +73,13 @@ origin: ECC
对于非平凡的功能,启动研究员代理:
```
Task(subagent_type="general-purpose", prompt="
Research existing tools for: [DESCRIPTION]
Language/framework: [LANG]
Constraints: [ANY]
Task(subagent_type="general-purpose", prompt="
研究现有工具用于:[描述]
语言/框架:[语言]
约束:[如有]
Search: npm/PyPI, MCP servers, Claude Code skills, GitHub
Return: Structured comparison with recommendation
搜索:npm/PyPIMCP 服务器、Claude Code 技能、GitHub
返回:结构化对比与推荐
")
```
@@ -140,31 +140,31 @@ Task(subagent_type="general-purpose", prompt="
### 示例 1“添加死链检查”
```
Need: Check markdown files for broken links
Search: npm "markdown dead link checker"
Found: textlint-rule-no-dead-link (score: 9/10)
Action: ADOPT — npm install textlint-rule-no-dead-link
Result: Zero custom code, battle-tested solution
需求:检查 Markdown 文件中的失效链接
搜索:npm "markdown dead link checker"
发现:textlint-rule-no-dead-link(评分:9/10
行动:采用 — npm install textlint-rule-no-dead-link
结果:无需自定义代码,经过实战检验的解决方案
```
### 示例 2“添加 HTTP 客户端包装器”
```
Need: Resilient HTTP client with retries and timeout handling
Search: npm "http client retry", PyPI "httpx retry"
Found: got (Node) with retry plugin, httpx (Python) with built-in retry
Action: ADOPT — use got/httpx directly with retry config
Result: Zero custom code, production-proven libraries
需求:具备重试和超时处理能力的弹性 HTTP 客户端
搜索:npm "http client retry"PyPI "httpx retry"
发现:gotNode)带重试插件、httpxPython)带内置重试功能
行动:采用 — 直接使用 got/httpx 并配置重试
结果:零自定义代码,使用经过生产验证的库
```
### 示例 3“添加配置文件 linter”
```
Need: Validate project config files against a schema
Search: npm "config linter schema", "json schema validator cli"
Found: ajv-cli (score: 8/10)
Action: ADOPT + EXTEND — install ajv-cli, write project-specific schema
Result: 1 package + 1 schema file, no custom validation logic
需求:根据模式验证项目配置文件
搜索:npm "config linter schema""json schema validator cli"
发现ajv-cli评分8/10
行动:采用 + 扩展 — 安装 ajv-cli,编写项目特定的模式
结果1 个包 + 1 个模式文件,无需自定义验证逻辑
```
## 反模式

View File

@@ -62,9 +62,9 @@ cd ~/path/to/my-project
从脚本输出中呈现扫描摘要和清单表:
```
Scanning:
✓ ~/.claude/skills/ (17 files)
✗ {cwd}/.claude/skills/ (not found — global skills only)
扫描中:
✓ ~/.claude/skills/ (17 个文件)
✗ {cwd}/.claude/skills/ (未找到 — 仅限全局技能)
```
| 技能 | 7天使用 | 30天使用 | 描述 |
@@ -78,13 +78,13 @@ Scanning:
Agent(
subagent_type="general-purpose",
prompt="
Evaluate the following skill inventory against the checklist.
根据检查清单评估以下技能清单。
[INVENTORY]
[CHECKLIST]
Return JSON for each skill:
为每项技能返回 JSON
{ \"verdict\": \"Keep\"|\"Improve\"|\"Update\"|\"Retire\"|\"Merge into [X]\", \"reason\": \"...\" }
"
)
@@ -103,10 +103,10 @@ Return JSON for each skill:
每个技能都根据此检查清单进行评估:
```
- [ ] Content overlap with other skills checked
- [ ] Overlap with MEMORY.md / CLAUDE.md checked
- [ ] Freshness of technical references verified (use WebSearch if tool names / CLI flags / APIs are present)
- [ ] Usage frequency considered
- [ ] 已检查与其他技能的内容重叠情况
- [ ] 已检查与 MEMORY.md / CLAUDE.md 的重叠情况
- [ ] 已验证技术引用的时效性(如果存在工具名称 / CLI 参数 / API,请使用 WebSearch 进行验证)
- [ ] 已考虑使用频率
```
判定标准:

View File

@@ -178,13 +178,13 @@ git secrets --scan # if configured
### 常见安全发现
```
# Check for System.out.println (use logger instead)
# 检查 System.out.println(应使用日志记录器)
grep -rn "System\.out\.print" src/main/ --include="*.java"
# Check for raw exception messages in responses
# 检查响应中的原始异常消息
grep -rn "e\.getMessage()" src/main/ --include="*.java"
# Check for wildcard CORS
# 检查通配符 CORS 配置
grep -rn "allowedOrigins.*\*" src/main/ --include="*.java"
```
@@ -212,17 +212,17 @@ git diff
## 输出模板
```
VERIFICATION REPORT
验证报告
===================
Build: [PASS/FAIL]
Static: [PASS/FAIL] (spotbugs/pmd/checkstyle)
Tests: [PASS/FAIL] (X/Y passed, Z% coverage)
Security: [PASS/FAIL] (CVE findings: N)
Diff: [X files changed]
构建: [通过/失败]
静态分析: [通过/失败] (spotbugs/pmd/checkstyle)
测试: [通过/失败] (X/Y 通过, Z% 覆盖率)
安全性: [通过/失败] (CVE 发现数: N)
差异: [X 个文件变更]
Overall: [READY / NOT READY]
总体: [就绪 / 未就绪]
Issues to Fix:
待修复问题:
1. ...
2. ...
```
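上面的报告模板可以由各阶段结果机械渲染,保证“总体”结论始终与各项一致。下面是一个最小 Python 示意(字段与示例数据均为假设):

```python
# 由各阶段结果渲染验证报告的最小示意
def render_report(r: dict) -> str:
    ready = all(v["ok"] for v in r.values())  # 任一阶段失败即“未就绪”
    lines = ["验证报告", "==================="]
    for name, v in r.items():
        status = "通过" if v["ok"] else "失败"
        lines.append(f"{name}: [{status}] {v.get('note', '')}".rstrip())
    lines.append(f"总体: [{'就绪' if ready else '未就绪'}]")
    return "\n".join(lines)

report = render_report({
    "构建": {"ok": True},
    "测试": {"ok": True, "note": "(120/120 通过, 83% 覆盖率)"},
    "安全性": {"ok": False, "note": "(CVE 发现数: 1)"},
})
print(report)
```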

View File

@@ -57,11 +57,11 @@ origin: ECC
### 步骤 1: 编写用户旅程
```
As a [role], I want to [action], so that [benefit]
作为一个[角色],我希望能够[行动],以便[获得收益]
Example:
As a user, I want to search for markets semantically,
so that I can find relevant markets even without exact keywords.
示例:
作为一个用户,我希望能够对市场进行语义搜索,
这样即使没有精确的关键词,我也能找到相关的市场。
```
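这一模板可以机械展开成测试用例骨架,作为下一步生成测试用例的起点。下面是一个最小 Python 示意(正常/异常/边界三类划分为假设的示例约定):

```python
# 从用户旅程模板派生测试用例骨架的最小示意
def journey_to_cases(role: str, action: str, benefit: str) -> list:
    story = f"作为{role},我希望{action},以便{benefit}"
    return [
        (story, f"正常路径:{action} 成功"),
        (story, f"异常路径:{action} 失败时给出可理解的错误"),
        (story, f"边界情况:{action} 无结果时的空状态"),
    ]

for story, case in journey_to_cases(
        "用户", "对市场进行语义搜索", "无需精确关键词也能找到相关市场"):
    print(case)
```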
### 步骤 2: 生成测试用例
@@ -252,7 +252,7 @@ src/
├── components/
│ ├── Button/
│ │ ├── Button.tsx
│ │ ├── Button.test.tsx # Unit tests
│ │ ├── Button.test.tsx # 单元测试
│ │ └── Button.stories.tsx # Storybook
│ └── MarketCard/
│ ├── MarketCard.tsx
@@ -261,9 +261,9 @@ src/
│ └── api/
│ └── markets/
│ ├── route.ts
│ └── route.test.ts # Integration tests
│ └── route.test.ts # 集成测试
└── e2e/
├── markets.spec.ts # E2E tests
├── markets.spec.ts # 端到端测试
├── trading.spec.ts
└── auth.spec.ts
```

View File

@@ -70,12 +70,12 @@ agents/
### 步骤 2呈现领域菜单
```
Available agent domains:
1. Engineering — Software Architect, Security Engineer
2. Marketing — SEO Specialist
3. Sales — Discovery Coach, Outbound Strategist
可用的代理领域:
1. 工程领域 — 软件架构师、安全工程师
2. 市场营销 — SEO专家
3. 销售领域 — 发现教练、外拓策略师
Pick domains or name specific agents (e.g., "1,3" or "security + seo"):
请选择领域或指定具体代理(例如:"1,3" "security + seo"
```
* 跳过智能体数量为零的领域(空目录)
@@ -94,8 +94,8 @@ Pick domains or name specific agents (e.g., "1,3" or "security + seo"):
确认选择:
```
Selected: Security Engineer + SEO Specialist
What should they work on? (describe the task):
选定:安全工程师 + SEO专家
他们应该专注于什么任务?(描述任务)
```
### 步骤 4并行启动智能体
@@ -133,33 +133,33 @@ What should they work on? (describe the task):
User: team builder
Claude:
Available agent domains:
1. Engineering (2) — Software Architect, Security Engineer
2. Marketing (1) — SEO Specialist
3. Sales (4) — Discovery Coach, Outbound Strategist, Proposal Strategist, Sales Engineer
4. Support (1) — Executive Summary
可用代理领域:
1. 工程 (2) — 软件架构师、安全工程师
2. 市场营销 (1) — SEO专家
3. 销售 (4) — 发现教练、外拓策略师、提案策略师、销售工程师
4. 支持 (1) — 执行摘要
Pick domains or name specific agents:
选择领域或指定具体代理:
User: security + seo
Claude:
Selected: Security Engineer + SEO Specialist
What should they work on?
已选择:安全工程师 + SEO专家
他们应该处理什么任务?
User: Review my Next.js e-commerce site before launch
User: 在发布前审查我的 Next.js 电商网站
[Both agents spawn in parallel, each applying their specialty to the codebase]
[两个代理并行启动,各自将他们的专长应用于代码库]
Claude:
## Security Engineer Findings
- [findings...]
## 安全工程师发现
- [发现内容...]
## SEO Specialist Findings
- [findings...]
## SEO专家发现
- [发现内容...]
## Synthesis
Both agents agree on: [...]
Tension: Security recommends CSP that blocks inline styles, SEO needs inline schema markup. Resolution: [...]
Next steps: [...]
## 综合分析
双方代理均同意:[...]
冲突点:安全方面建议的 CSP 会阻止内联样式,而 SEO 需要内联 schema 标记。解决方案:[...]
后续步骤:[...]
```

View File

@@ -99,19 +99,19 @@ git diff HEAD~1 --name-only
运行所有阶段后,生成验证报告:
```
VERIFICATION REPORT
验证报告
==================
Build: [PASS/FAIL]
Types: [PASS/FAIL] (X errors)
Lint: [PASS/FAIL] (X warnings)
Tests: [PASS/FAIL] (X/Y passed, Z% coverage)
Security: [PASS/FAIL] (X issues)
Diff: [X files changed]
构建: [通过/失败]
类型: [通过/失败] (X 处错误)
代码检查: [通过/失败] (X 条警告)
测试: [通过/失败] (X/Y 通过,覆盖率 Z%)
安全: [通过/失败] (X 个问题)
差异: [X 个文件被修改]
Overall: [READY/NOT READY] for PR
总体: 提交 PR [就绪/未就绪]
Issues to Fix:
待修复问题:
1. ...
2. ...
```

View File

@@ -24,12 +24,12 @@ origin: ECC
## 处理流程
```
Screen Studio / raw footage
Screen Studio / 原始素材
→ Claude / Codex
→ FFmpeg
→ Remotion
→ ElevenLabs / fal.ai
→ Descript or CapCut
→ Descript CapCut
```
每个层级都有特定的工作。不要跳过层级。不要试图让一个工具完成所有事情。
@@ -55,9 +55,9 @@ Screen Studio / raw footage
* **搭建FFmpeg和Remotion代码**:生成命令和合成
```
Example prompt:
"Here's the transcript of a 4-hour recording. Identify the 8 strongest segments
for a 24-minute vlog. Give me FFmpeg cut commands for each segment."
示例提示词:
"这是一份4小时录音的文字记录。找出最适合制作24分钟vlog的8个精彩片段。
为每个片段提供FFmpeg剪辑命令。"
```
此层级关乎结构,而非最终的创意品味。
@@ -208,7 +208,7 @@ with open("voiceover.mp3", "wb") as f:
```
generate(app_id: "fal-ai/nano-banana-pro", input_data: {
"prompt": "professional thumbnail for tech vlog, dark background, code on screen",
"prompt": "专业科技视频缩略图,深色背景,屏幕上显示代码",
"image_size": "landscape_16_9"
})
```
@@ -286,8 +286,7 @@ ffmpeg -i input.mp4 -af silencedetect=noise=-30dB:d=2 -f null - 2>&1 | grep sile
使用Claude分析转录稿 + 场景时间戳:
```
"Given this transcript with timestamps and these scene change points,
identify the 5 most engaging 30-second clips for social media."
"根据这份带时间戳的转录稿和这些场景转换点,找出最适合社交媒体的 5 段最吸引人的 30 秒剪辑片段。"
```
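silencedetect 的输出可以机械转换为“非静音片段”列表,再连同转录稿一起交给 Claude 挑选片段。下面是一个最小 Python 示意(日志行为假设的示例数据):

```python
import re

# 假设的 ffmpeg silencedetect 输出片段
log = """
[silencedetect @ 0x1] silence_start: 12.5
[silencedetect @ 0x1] silence_end: 15.0 | silence_duration: 2.5
[silencedetect @ 0x1] silence_start: 80.0
[silencedetect @ 0x1] silence_end: 84.0 | silence_duration: 4.0
"""

def speech_segments(log: str, total: float) -> list:
    # 取出每段静音的起止时间,静音之间即为发声片段
    starts = [float(x) for x in re.findall(r"silence_start: ([\d.]+)", log)]
    ends = [float(x) for x in re.findall(r"silence_end: ([\d.]+)", log)]
    segments, prev_end = [], 0.0
    for s, e in zip(starts, ends):
        if s > prev_end:
            segments.append((prev_end, s))
        prev_end = e
    if total > prev_end:
        segments.append((prev_end, total))
    return segments

print(speech_segments(log, total=120.0))
# → [(0.0, 12.5), (15.0, 80.0), (84.0, 120.0)]
```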
## 每个工具最擅长什么