mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-03-30 13:43:26 +08:00
Add Kiro skills (18 SKILL.md files) (#811)
Co-authored-by: Sungmin Hong <hsungmin@amazon.com>
135
.kiro/skills/agentic-engineering/SKILL.md
Normal file
---
name: agentic-engineering
description: >
  Operate as an agentic engineer using eval-first execution, decomposition,
  and cost-aware model routing. Use when AI agents perform most implementation
  work and humans enforce quality and risk controls.
metadata:
  origin: ECC
---

# Agentic Engineering

Use this skill for engineering workflows where AI agents perform most implementation work and humans enforce quality and risk controls.

## Operating Principles

1. Define completion criteria before execution.
2. Decompose work into agent-sized units.
3. Route model tiers by task complexity.
4. Measure with evals and regression checks.

## Eval-First Loop

1. Define capability eval and regression eval.
2. Run baseline and capture failure signatures.
3. Execute implementation.
4. Re-run evals and compare deltas.

**Example workflow:**

```
1. Write test that captures desired behavior (eval)
2. Run test → capture baseline failures
3. Implement feature
4. Re-run test → verify improvements
5. Check for regressions in other tests
```
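
Step 4 (comparing deltas) reduces to a set diff over failing test IDs. A minimal sketch; the `compareEvalRuns` helper and its types are illustrative, not part of any tool:

```typescript
// Illustrative helper: diff baseline eval failures against post-change failures.
type EvalDelta = {
  fixed: string[];        // failed before, passes now
  regressions: string[];  // passed before, fails now
  stillFailing: string[]; // failed in both runs
};

function compareEvalRuns(baselineFailures: string[], currentFailures: string[]): EvalDelta {
  const base = new Set(baselineFailures);
  const curr = new Set(currentFailures);
  return {
    fixed: [...base].filter(t => !curr.has(t)),
    regressions: [...curr].filter(t => !base.has(t)),
    stillFailing: [...base].filter(t => curr.has(t)),
  };
}
```

A non-empty `regressions` list is the signal to stop and fix before continuing.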

## Task Decomposition

Apply the 15-minute unit rule:
- Each unit should be independently verifiable
- Each unit should have a single dominant risk
- Each unit should expose a clear done condition

**Good decomposition:**

```
Task: Add user authentication
├─ Unit 1: Add password hashing (15 min, security risk)
├─ Unit 2: Create login endpoint (15 min, API contract risk)
├─ Unit 3: Add session management (15 min, state risk)
└─ Unit 4: Protect routes with middleware (15 min, auth logic risk)
```

**Bad decomposition:**

```
Task: Add user authentication (2 hours, multiple risks)
```

## Model Routing

Choose model tier based on task complexity:

- **Haiku**: Classification, boilerplate transforms, narrow edits
  - Example: Rename variable, add type annotation, format code

- **Sonnet**: Implementation and refactors
  - Example: Implement feature, refactor module, write tests

- **Opus**: Architecture, root-cause analysis, multi-file invariants
  - Example: Design system, debug complex issue, review architecture

**Cost discipline:** Escalate model tier only when the lower tier fails with a clear reasoning gap.

## Session Strategy

- **Continue session** for closely-coupled units
  - Example: Implementing related functions in same module

- **Start fresh session** after major phase transitions
  - Example: Moving from implementation to testing

- **Compact after milestone completion**, not during active debugging
  - Example: After feature complete, before starting next feature

## Review Focus for AI-Generated Code

Prioritize:
- Invariants and edge cases
- Error boundaries
- Security and auth assumptions
- Hidden coupling and rollout risk

Do not waste review cycles on style-only disagreements when automated format/lint already enforce style.

**Review checklist:**
- [ ] Edge cases handled (null, empty, boundary values)
- [ ] Error handling comprehensive
- [ ] Security assumptions validated
- [ ] No hidden coupling between modules
- [ ] Rollout risk assessed (breaking changes, migrations)

## Cost Discipline

Track per task:
- Model tier used
- Token estimate
- Retries needed
- Wall-clock time
- Success/failure outcome

**Example tracking:**

```
Task: Implement user login
Model: Sonnet
Tokens: ~5k input, ~2k output
Retries: 1 (initial implementation had auth bug)
Time: 8 minutes
Outcome: Success
```
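
The tracked fields above map naturally onto a small record type plus an aggregator. A hypothetical sketch; the type and function names are not from any existing tracker:

```typescript
// Hypothetical per-task cost record mirroring the tracked fields above.
interface TaskRecord {
  task: string;
  model: "haiku" | "sonnet" | "opus";
  inputTokens: number;
  outputTokens: number;
  retries: number;
  minutes: number;
  success: boolean;
}

// Summarize a batch of records: total tokens, total retries, and success rate.
function summarize(records: TaskRecord[]) {
  const totalTokens = records.reduce((n, r) => n + r.inputTokens + r.outputTokens, 0);
  const totalRetries = records.reduce((n, r) => n + r.retries, 0);
  const successRate = records.filter(r => r.success).length / records.length;
  return { totalTokens, totalRetries, successRate };
}
```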

## When to Use This Skill

- Managing AI-driven development workflows
- Planning agent task decomposition
- Optimizing model tier selection
- Implementing eval-first development
- Reviewing AI-generated code
- Tracking development costs

## Integration with Other Skills

- **tdd-workflow**: Combine with eval-first loop for test-driven development
- **verification-loop**: Use for continuous validation during implementation
- **search-first**: Apply before implementation to find existing solutions
- **coding-standards**: Reference during code review phase
525
.kiro/skills/api-design/SKILL.md
Normal file
---
name: api-design
description: >
  REST API design patterns including resource naming, status codes, pagination, filtering, error responses, versioning, and rate limiting for production APIs.
metadata:
  origin: ECC
---

# API Design Patterns

Conventions and best practices for designing consistent, developer-friendly REST APIs.

## When to Activate

- Designing new API endpoints
- Reviewing existing API contracts
- Adding pagination, filtering, or sorting
- Implementing error handling for APIs
- Planning API versioning strategy
- Building public or partner-facing APIs

## Resource Design

### URL Structure

```
# Resources are nouns, plural, lowercase, kebab-case
GET    /api/v1/users
GET    /api/v1/users/:id
POST   /api/v1/users
PUT    /api/v1/users/:id
PATCH  /api/v1/users/:id
DELETE /api/v1/users/:id

# Sub-resources for relationships
GET  /api/v1/users/:id/orders
POST /api/v1/users/:id/orders

# Actions that don't map to CRUD (use verbs sparingly)
POST /api/v1/orders/:id/cancel
POST /api/v1/auth/login
POST /api/v1/auth/refresh
```

### Naming Rules

```
# GOOD
/api/v1/team-members          # kebab-case for multi-word resources
/api/v1/orders?status=active  # query params for filtering
/api/v1/users/123/orders      # nested resources for ownership

# BAD
/api/v1/getUsers              # verb in URL
/api/v1/user                  # singular (use plural)
/api/v1/team_members          # snake_case in URLs
/api/v1/users/123/getOrders   # verb in nested resource
```

## HTTP Methods and Status Codes

### Method Semantics

| Method | Idempotent | Safe | Use For |
|--------|------------|------|---------|
| GET    | Yes        | Yes  | Retrieve resources |
| POST   | No         | No   | Create resources, trigger actions |
| PUT    | Yes        | No   | Full replacement of a resource |
| PATCH  | No*        | No   | Partial update of a resource |
| DELETE | Yes        | No   | Remove a resource |

*PATCH can be made idempotent with proper implementation

### Status Code Reference

```
# Success
200 OK                    — GET, PUT, PATCH (with response body)
201 Created               — POST (include Location header)
204 No Content            — DELETE, PUT (no response body)

# Client Errors
400 Bad Request           — Validation failure, malformed JSON
401 Unauthorized          — Missing or invalid authentication
403 Forbidden             — Authenticated but not authorized
404 Not Found             — Resource doesn't exist
409 Conflict              — Duplicate entry, state conflict
422 Unprocessable Entity  — Semantically invalid (valid JSON, bad data)
429 Too Many Requests     — Rate limit exceeded

# Server Errors
500 Internal Server Error — Unexpected failure (never expose details)
502 Bad Gateway           — Upstream service failed
503 Service Unavailable   — Temporary overload, include Retry-After
```

### Common Mistakes

```
# BAD: 200 for everything
{ "status": 200, "success": false, "error": "Not found" }

# GOOD: Use HTTP status codes semantically
HTTP/1.1 404 Not Found
{ "error": { "code": "not_found", "message": "User not found" } }

# BAD: 500 for validation errors
# GOOD: 400 or 422 with field-level details

# BAD: 200 for created resources
# GOOD: 201 with Location header
HTTP/1.1 201 Created
Location: /api/v1/users/abc-123
```

## Response Format

### Success Response

```json
{
  "data": {
    "id": "abc-123",
    "email": "alice@example.com",
    "name": "Alice",
    "created_at": "2025-01-15T10:30:00Z"
  }
}
```

### Collection Response (with Pagination)

```json
{
  "data": [
    { "id": "abc-123", "name": "Alice" },
    { "id": "def-456", "name": "Bob" }
  ],
  "meta": {
    "total": 142,
    "page": 1,
    "per_page": 20,
    "total_pages": 8
  },
  "links": {
    "self": "/api/v1/users?page=1&per_page=20",
    "next": "/api/v1/users?page=2&per_page=20",
    "last": "/api/v1/users?page=8&per_page=20"
  }
}
```
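
The `meta` and `links` blocks above follow directly from `total`, `page`, and `per_page`. A sketch of the computation; the function name is assumed, not a library API:

```typescript
interface PaginationMeta { total: number; page: number; per_page: number; total_pages: number }
interface PaginationLinks { self: string; next?: string; last: string }

// Build the meta/links envelope for a collection response.
function buildPagination(base: string, total: number, page: number, perPage: number):
    { meta: PaginationMeta; links: PaginationLinks } {
  const totalPages = Math.max(1, Math.ceil(total / perPage));
  const url = (p: number) => `${base}?page=${p}&per_page=${perPage}`;
  return {
    meta: { total, page, per_page: perPage, total_pages: totalPages },
    links: {
      self: url(page),
      ...(page < totalPages ? { next: url(page + 1) } : {}), // omit next on the last page
      last: url(totalPages),
    },
  };
}
```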

### Error Response

```json
{
  "error": {
    "code": "validation_error",
    "message": "Request validation failed",
    "details": [
      {
        "field": "email",
        "message": "Must be a valid email address",
        "code": "invalid_format"
      },
      {
        "field": "age",
        "message": "Must be between 0 and 150",
        "code": "out_of_range"
      }
    ]
  }
}
```

### Response Envelope Variants

```typescript
// Option A: Envelope with data wrapper (recommended for public APIs)
interface ApiResponse<T> {
  data: T;
  meta?: PaginationMeta;
  links?: PaginationLinks;
}

interface ApiError {
  error: {
    code: string;
    message: string;
    details?: FieldError[];
  };
}

// Option B: Flat response (simpler, common for internal APIs)
// Success: just return the resource directly
// Error: return error object
// Distinguish by HTTP status code
```

## Pagination

### Offset-Based (Simple)

```
GET /api/v1/users?page=2&per_page=20

# Implementation
SELECT * FROM users
ORDER BY created_at DESC
LIMIT 20 OFFSET 20;
```

**Pros:** Easy to implement, supports "jump to page N"
**Cons:** Slow on large offsets (OFFSET 100000), inconsistent with concurrent inserts

### Cursor-Based (Scalable)

```
GET /api/v1/users?cursor=eyJpZCI6MTIzfQ&limit=20

# Implementation
SELECT * FROM users
WHERE id > :cursor_id
ORDER BY id ASC
LIMIT 21; -- fetch one extra to determine has_next
```

```json
{
  "data": [...],
  "meta": {
    "has_next": true,
    "next_cursor": "eyJpZCI6MTQzfQ"
  }
}
```

**Pros:** Consistent performance regardless of position, stable with concurrent inserts
**Cons:** Cannot jump to arbitrary page, cursor is opaque
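
The example cursors above are base64url-encoded JSON (`eyJpZCI6MTIzfQ` decodes to `{"id":123}`). One workable scheme, sketched with assumed helper names, pairs that encoding with the fetch-one-extra trick from the SQL:

```typescript
// Opaque cursor as base64url-encoded JSON (one illustrative encoding, not a standard).
function encodeCursor(payload: { id: number }): string {
  return Buffer.from(JSON.stringify(payload)).toString("base64url");
}

function decodeCursor(cursor: string): { id: number } {
  return JSON.parse(Buffer.from(cursor, "base64url").toString("utf8"));
}

// Shape a page: given limit+1 fetched rows, trim the extra row and
// derive has_next / next_cursor from what remains.
function shapePage<T extends { id: number }>(rows: T[], limit: number) {
  const hasNext = rows.length > limit;
  const data = hasNext ? rows.slice(0, limit) : rows;
  return {
    data,
    meta: {
      has_next: hasNext,
      next_cursor: hasNext ? encodeCursor({ id: data[data.length - 1].id }) : null,
    },
  };
}
```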

### When to Use Which

| Use Case | Pagination Type |
|----------|-----------------|
| Admin dashboards, small datasets (<10K) | Offset |
| Infinite scroll, feeds, large datasets | Cursor |
| Public APIs | Cursor (default) with offset (optional) |
| Search results | Offset (users expect page numbers) |

## Filtering, Sorting, and Search

### Filtering

```
# Simple equality
GET /api/v1/orders?status=active&customer_id=abc-123

# Comparison operators (use bracket notation)
GET /api/v1/products?price[gte]=10&price[lte]=100
GET /api/v1/orders?created_at[after]=2025-01-01

# Multiple values (comma-separated)
GET /api/v1/products?category=electronics,clothing

# Nested fields (dot notation)
GET /api/v1/orders?customer.country=US
```
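
A minimal sketch of extracting bracket keys from a raw query string into normalized triples; the `Filter` shape is an assumption, not a standard:

```typescript
type Filter = { field: string; op: string; value: string };

// Parse params like price[gte]=10 into { field, op, value } triples;
// plain keys fall back to an equality filter.
function parseFilters(params: URLSearchParams): Filter[] {
  const filters: Filter[] = [];
  for (const [key, value] of params) {
    const m = key.match(/^([\w.]+)\[(\w+)\]$/);
    if (m) filters.push({ field: m[1], op: m[2], value });
    else filters.push({ field: key, op: "eq", value });
  }
  return filters;
}
```

In practice the allowed `op` values per field should be whitelisted before they reach the query builder.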

### Sorting

```
# Single field (prefix - for descending)
GET /api/v1/products?sort=-created_at

# Multiple fields (comma-separated)
GET /api/v1/products?sort=-featured,price,-created_at
```
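
The `-` prefix convention parses into a list of field/direction pairs; a minimal sketch:

```typescript
type SortSpec = { field: string; direction: "asc" | "desc" };

// Parse "-featured,price" into ordered sort specs; "-" means descending.
function parseSort(sort: string): SortSpec[] {
  return sort
    .split(",")
    .filter(Boolean)
    .map(part =>
      part.startsWith("-")
        ? { field: part.slice(1), direction: "desc" as const }
        : { field: part, direction: "asc" as const }
    );
}
```

As with filters, sortable fields should be whitelisted to avoid sorting on unindexed or private columns.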

### Full-Text Search

```
# Search query parameter
GET /api/v1/products?q=wireless+headphones

# Field-specific search
GET /api/v1/users?email=alice
```

### Sparse Fieldsets

```
# Return only specified fields (reduces payload)
GET /api/v1/users?fields=id,name,email
GET /api/v1/orders?fields=id,total,status&include=customer.name
```
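
Serving `?fields=` comes down to projecting each row onto the requested keys. A sketch that ignores the `include` parameter for related resources; the helper name is illustrative:

```typescript
// Project a row onto the comma-separated field list from ?fields=.
function pickFields<T extends Record<string, unknown>>(row: T, fields: string): Partial<T> {
  const wanted = new Set(fields.split(",").map(f => f.trim()));
  return Object.fromEntries(
    Object.entries(row).filter(([key]) => wanted.has(key))
  ) as Partial<T>;
}
```

Note this also acts as an allowlist: fields the client did not request (or should never see) are simply dropped.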

## Authentication and Authorization

### Token-Based Auth

```
# Bearer token in Authorization header
GET /api/v1/users
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...

# API key (for server-to-server)
GET /api/v1/data
X-API-Key: sk_live_abc123
```

### Authorization Patterns

```typescript
// Resource-level: check ownership
app.get("/api/v1/orders/:id", async (req, res) => {
  const order = await Order.findById(req.params.id);
  if (!order) return res.status(404).json({ error: { code: "not_found" } });
  if (order.userId !== req.user.id) return res.status(403).json({ error: { code: "forbidden" } });
  return res.json({ data: order });
});

// Role-based: check permissions
app.delete("/api/v1/users/:id", requireRole("admin"), async (req, res) => {
  await User.delete(req.params.id);
  return res.status(204).send();
});
```

## Rate Limiting

### Headers

```
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640000000

# When exceeded
HTTP/1.1 429 Too Many Requests
Retry-After: 60

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Try again in 60 seconds."
  }
}
```

### Rate Limit Tiers

| Tier | Limit | Window | Use Case |
|------|-------|--------|----------|
| Anonymous | 30/min | Per IP | Public endpoints |
| Authenticated | 100/min | Per user | Standard API access |
| Premium | 1000/min | Per API key | Paid API plans |
| Internal | 10000/min | Per service | Service-to-service |
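
The tiers above can be enforced with a counter per key per window. A minimal in-memory fixed-window sketch; production systems usually back this with Redis and often prefer sliding windows, and the class name is illustrative:

```typescript
// Fixed-window rate limiter: at most `limit` requests per `windowMs` per key.
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed; `now` is injectable for testing.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, count: 1 }); // new window
      return true;
    }
    if (entry.count < this.limit) {
      entry.count++;
      return true;
    }
    return false; // caller should respond 429 with Retry-After
  }
}
```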

## Versioning

### URL Path Versioning (Recommended)

```
/api/v1/users
/api/v2/users
```

**Pros:** Explicit, easy to route, cacheable
**Cons:** URL changes between versions

### Header Versioning

```
GET /api/users
Accept: application/vnd.myapp.v2+json
```

**Pros:** Clean URLs
**Cons:** Harder to test, easy to forget

### Versioning Strategy

```
1. Start with /api/v1/ — don't version until you need to
2. Maintain at most 2 active versions (current + previous)
3. Deprecation timeline:
   - Announce deprecation (6 months notice for public APIs)
   - Add Sunset header: Sunset: Thu, 01 Jan 2026 00:00:00 GMT
   - Return 410 Gone after sunset date
4. Non-breaking changes don't need a new version:
   - Adding new fields to responses
   - Adding new optional query parameters
   - Adding new endpoints
5. Breaking changes require a new version:
   - Removing or renaming fields
   - Changing field types
   - Changing URL structure
   - Changing authentication method
```

## Implementation Patterns

### TypeScript (Next.js API Route)

```typescript
import { z } from "zod";
import { NextRequest, NextResponse } from "next/server";

const createUserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(1).max(100),
});

export async function POST(req: NextRequest) {
  const body = await req.json();
  const parsed = createUserSchema.safeParse(body);

  if (!parsed.success) {
    return NextResponse.json({
      error: {
        code: "validation_error",
        message: "Request validation failed",
        details: parsed.error.issues.map(i => ({
          field: i.path.join("."),
          message: i.message,
          code: i.code,
        })),
      },
    }, { status: 422 });
  }

  const user = await createUser(parsed.data);

  return NextResponse.json(
    { data: user },
    {
      status: 201,
      headers: { Location: `/api/v1/users/${user.id}` },
    },
  );
}
```

### Python (Django REST Framework)

```python
from rest_framework import serializers, viewsets, status
from rest_framework.response import Response

class CreateUserSerializer(serializers.Serializer):
    email = serializers.EmailField()
    name = serializers.CharField(max_length=100)

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ["id", "email", "name", "created_at"]

class UserViewSet(viewsets.ModelViewSet):
    serializer_class = UserSerializer
    permission_classes = [IsAuthenticated]

    def get_serializer_class(self):
        if self.action == "create":
            return CreateUserSerializer
        return UserSerializer

    def create(self, request):
        serializer = CreateUserSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        user = UserService.create(**serializer.validated_data)
        return Response(
            {"data": UserSerializer(user).data},
            status=status.HTTP_201_CREATED,
            headers={"Location": f"/api/v1/users/{user.id}"},
        )
```

### Go (net/http)

```go
func (h *UserHandler) CreateUser(w http.ResponseWriter, r *http.Request) {
	var req CreateUserRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		writeError(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}

	if err := req.Validate(); err != nil {
		writeError(w, http.StatusUnprocessableEntity, "validation_error", err.Error())
		return
	}

	user, err := h.service.Create(r.Context(), req)
	if err != nil {
		switch {
		case errors.Is(err, domain.ErrEmailTaken):
			writeError(w, http.StatusConflict, "email_taken", "Email already registered")
		default:
			writeError(w, http.StatusInternalServerError, "internal_error", "Internal error")
		}
		return
	}

	w.Header().Set("Location", fmt.Sprintf("/api/v1/users/%s", user.ID))
	writeJSON(w, http.StatusCreated, map[string]any{"data": user})
}
```

## API Design Checklist

Before shipping a new endpoint:

- [ ] Resource URL follows naming conventions (plural, kebab-case, no verbs)
- [ ] Correct HTTP method used (GET for reads, POST for creates, etc.)
- [ ] Appropriate status codes returned (not 200 for everything)
- [ ] Input validated with schema (Zod, Pydantic, Bean Validation)
- [ ] Error responses follow standard format with codes and messages
- [ ] Pagination implemented for list endpoints (cursor or offset)
- [ ] Authentication required (or explicitly marked as public)
- [ ] Authorization checked (user can only access their own resources)
- [ ] Rate limiting configured
- [ ] Response does not leak internal details (stack traces, SQL errors)
- [ ] Consistent naming with existing endpoints (camelCase vs snake_case)
- [ ] Documented (OpenAPI/Swagger spec updated)
600
.kiro/skills/backend-patterns/SKILL.md
Normal file
---
name: backend-patterns
description: >
  Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes.
metadata:
  origin: ECC
---

# Backend Development Patterns

Backend architecture patterns and best practices for scalable server-side applications.

## When to Activate

- Designing REST or GraphQL API endpoints
- Implementing repository, service, or controller layers
- Optimizing database queries (N+1, indexing, connection pooling)
- Adding caching (Redis, in-memory, HTTP cache headers)
- Setting up background jobs or async processing
- Structuring error handling and validation for APIs
- Building middleware (auth, logging, rate limiting)

## API Design Patterns

### RESTful API Structure

```typescript
// ✅ Resource-based URLs
GET    /api/markets          // List resources
GET    /api/markets/:id      // Get single resource
POST   /api/markets          // Create resource
PUT    /api/markets/:id      // Replace resource
PATCH  /api/markets/:id      // Update resource
DELETE /api/markets/:id      // Delete resource

// ✅ Query parameters for filtering, sorting, pagination
GET /api/markets?status=active&sort=volume&limit=20&offset=0
```

### Repository Pattern

```typescript
// Abstract data access logic
interface MarketRepository {
  findAll(filters?: MarketFilters): Promise<Market[]>
  findById(id: string): Promise<Market | null>
  create(data: CreateMarketDto): Promise<Market>
  update(id: string, data: UpdateMarketDto): Promise<Market>
  delete(id: string): Promise<void>
}

class SupabaseMarketRepository implements MarketRepository {
  async findAll(filters?: MarketFilters): Promise<Market[]> {
    let query = supabase.from('markets').select('*')

    if (filters?.status) {
      query = query.eq('status', filters.status)
    }

    if (filters?.limit) {
      query = query.limit(filters.limit)
    }

    const { data, error } = await query

    if (error) throw new Error(error.message)
    return data
  }

  // Other methods...
}
```

### Service Layer Pattern

```typescript
// Business logic separated from data access
class MarketService {
  constructor(private marketRepo: MarketRepository) {}

  async searchMarkets(query: string, limit: number = 10): Promise<Market[]> {
    // Business logic
    const embedding = await generateEmbedding(query)
    const results = await this.vectorSearch(embedding, limit)

    // Fetch full data
    const markets = await this.marketRepo.findByIds(results.map(r => r.id))

    // Sort by similarity (highest score first)
    return markets.sort((a, b) => {
      const scoreA = results.find(r => r.id === a.id)?.score || 0
      const scoreB = results.find(r => r.id === b.id)?.score || 0
      return scoreB - scoreA
    })
  }

  private async vectorSearch(embedding: number[], limit: number) {
    // Vector search implementation
  }
}
```

### Middleware Pattern

```typescript
// Request/response processing pipeline
export function withAuth(handler: NextApiHandler): NextApiHandler {
  return async (req, res) => {
    const token = req.headers.authorization?.replace('Bearer ', '')

    if (!token) {
      return res.status(401).json({ error: 'Unauthorized' })
    }

    try {
      const user = await verifyToken(token)
      req.user = user
      return handler(req, res)
    } catch (error) {
      return res.status(401).json({ error: 'Invalid token' })
    }
  }
}

// Usage
export default withAuth(async (req, res) => {
  // Handler has access to req.user
})
```

## Database Patterns

### Query Optimization

```typescript
// ✅ GOOD: Select only needed columns
const { data } = await supabase
  .from('markets')
  .select('id, name, status, volume')
  .eq('status', 'active')
  .order('volume', { ascending: false })
  .limit(10)

// ❌ BAD: Select everything
const { data } = await supabase
  .from('markets')
  .select('*')
```

### N+1 Query Prevention

```typescript
// ❌ BAD: N+1 query problem
const markets = await getMarkets()
for (const market of markets) {
  market.creator = await getUser(market.creator_id) // N queries
}

// ✅ GOOD: Batch fetch
const markets = await getMarkets()
const creatorIds = markets.map(m => m.creator_id)
const creators = await getUsers(creatorIds) // 1 query
const creatorMap = new Map(creators.map(c => [c.id, c]))

markets.forEach(market => {
  market.creator = creatorMap.get(market.creator_id)
})
```
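
The join step of the batch-fetch approach can be factored into a pure helper so it is unit-testable; the types and names below are illustrative:

```typescript
// Attach pre-fetched creators to markets in one pass (the join half of the
// batch-fetch fix: one user query, then an O(n) map lookup instead of N queries).
interface User { id: string; name: string }
interface Market { id: string; creator_id: string; creator?: User }

function attachCreators(markets: Market[], creators: User[]): Market[] {
  const byId = new Map(creators.map(c => [c.id, c]))
  return markets.map(m => ({ ...m, creator: byId.get(m.creator_id) }))
}
```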

### Transaction Pattern

```typescript
async function createMarketWithPosition(
  marketData: CreateMarketDto,
  positionData: CreatePositionDto
) {
  // Use Supabase transaction
  const { data, error } = await supabase.rpc('create_market_with_position', {
    market_data: marketData,
    position_data: positionData
  })

  if (error) throw new Error('Transaction failed')
  return data
}
```

```sql
-- SQL function in Supabase
CREATE OR REPLACE FUNCTION create_market_with_position(
  market_data jsonb,
  position_data jsonb
)
RETURNS jsonb
LANGUAGE plpgsql
AS $$
BEGIN
  -- Runs inside the function's transaction automatically
  INSERT INTO markets VALUES (market_data);
  INSERT INTO positions VALUES (position_data);
  RETURN jsonb_build_object('success', true);
EXCEPTION
  WHEN OTHERS THEN
    -- Rollback happens automatically
    RETURN jsonb_build_object('success', false, 'error', SQLERRM);
END;
$$;
```

## Caching Strategies

### Redis Caching Layer

```typescript
class CachedMarketRepository implements MarketRepository {
  constructor(
    private baseRepo: MarketRepository,
    private redis: RedisClient
  ) {}

  async findById(id: string): Promise<Market | null> {
    // Check cache first
    const cached = await this.redis.get(`market:${id}`)

    if (cached) {
      return JSON.parse(cached)
    }

    // Cache miss - fetch from database
    const market = await this.baseRepo.findById(id)

    if (market) {
      // Cache for 5 minutes
      await this.redis.setex(`market:${id}`, 300, JSON.stringify(market))
    }

    return market
  }

  async invalidateCache(id: string): Promise<void> {
    await this.redis.del(`market:${id}`)
  }
}
```
|
||||||
|
|
||||||
|
### Cache-Aside Pattern

```typescript
async function getMarketWithCache(id: string): Promise<Market> {
  const cacheKey = `market:${id}`

  // Try cache
  const cached = await redis.get(cacheKey)
  if (cached) return JSON.parse(cached)

  // Cache miss - fetch from DB
  const market = await db.markets.findUnique({ where: { id } })

  if (!market) throw new Error('Market not found')

  // Update cache
  await redis.setex(cacheKey, 300, JSON.stringify(market))

  return market
}
```

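On the write path, cache-aside pairs every update with an explicit invalidation so the next read repopulates the cache. A minimal sketch of that write path, using in-memory `Map`s to stand in for Redis and the database, and a hypothetical `saveMarket` persistence call:

```typescript
type Market = { id: string; name: string }

const cache = new Map<string, string>()
const db = new Map<string, Market>()

// Hypothetical persistence call standing in for a real database write
async function saveMarket(market: Market): Promise<void> {
  db.set(market.id, market)
}

async function updateMarket(market: Market): Promise<void> {
  await saveMarket(market)
  // Invalidate AFTER the write succeeds; the next read repopulates the cache
  cache.delete(`market:${market.id}`)
}

async function demo() {
  cache.set('market:1', JSON.stringify({ id: '1', name: 'stale' }))
  await updateMarket({ id: '1', name: 'fresh' })
  console.log(cache.has('market:1')) // false: stale entry evicted
}
demo()
```

Deleting rather than re-setting the cache entry avoids writing a value that a concurrent request may immediately overwrite with stale data.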
## Error Handling Patterns

### Centralized Error Handler

```typescript
class ApiError extends Error {
  constructor(
    public statusCode: number,
    public message: string,
    public isOperational = true
  ) {
    super(message)
    Object.setPrototypeOf(this, ApiError.prototype)
  }
}

export function errorHandler(error: unknown, req: Request): Response {
  if (error instanceof ApiError) {
    return NextResponse.json({
      success: false,
      error: error.message
    }, { status: error.statusCode })
  }

  if (error instanceof z.ZodError) {
    return NextResponse.json({
      success: false,
      error: 'Validation failed',
      details: error.errors
    }, { status: 400 })
  }

  // Log unexpected errors
  console.error('Unexpected error:', error)

  return NextResponse.json({
    success: false,
    error: 'Internal server error'
  }, { status: 500 })
}

// Usage
export async function GET(request: Request) {
  try {
    const data = await fetchData()
    return NextResponse.json({ success: true, data })
  } catch (error) {
    return errorHandler(error, request)
  }
}
```

### Retry with Exponential Backoff

```typescript
async function fetchWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError: Error | undefined

  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error as Error

      if (i < maxRetries - 1) {
        // Exponential backoff: 1s, 2s, 4s
        const delay = Math.pow(2, i) * 1000
        await new Promise(resolve => setTimeout(resolve, delay))
      }
    }
  }

  throw lastError!
}

// Usage
const data = await fetchWithRetry(() => fetchFromAPI())
```

## Authentication & Authorization

### JWT Token Validation

```typescript
import jwt from 'jsonwebtoken'

interface JWTPayload {
  userId: string
  email: string
  role: 'admin' | 'user'
}

export function verifyToken(token: string): JWTPayload {
  try {
    const payload = jwt.verify(token, process.env.JWT_SECRET!) as JWTPayload
    return payload
  } catch (error) {
    throw new ApiError(401, 'Invalid token')
  }
}

export async function requireAuth(request: Request) {
  const token = request.headers.get('authorization')?.replace('Bearer ', '')

  if (!token) {
    throw new ApiError(401, 'Missing authorization token')
  }

  return verifyToken(token)
}

// Usage in API route
export async function GET(request: Request) {
  const user = await requireAuth(request)

  const data = await getDataForUser(user.userId)

  return NextResponse.json({ success: true, data })
}
```

### Role-Based Access Control

```typescript
type Permission = 'read' | 'write' | 'delete' | 'admin'

interface User {
  id: string
  role: 'admin' | 'moderator' | 'user'
}

const rolePermissions: Record<User['role'], Permission[]> = {
  admin: ['read', 'write', 'delete', 'admin'],
  moderator: ['read', 'write', 'delete'],
  user: ['read', 'write']
}

export function hasPermission(user: User, permission: Permission): boolean {
  return rolePermissions[user.role].includes(permission)
}

export function requirePermission(permission: Permission) {
  return (handler: (request: Request, user: User) => Promise<Response>) => {
    return async (request: Request) => {
      const user = await requireAuth(request)

      if (!hasPermission(user, permission)) {
        throw new ApiError(403, 'Insufficient permissions')
      }

      return handler(request, user)
    }
  }
}

// Usage - HOF wraps the handler
export const DELETE = requirePermission('delete')(
  async (request: Request, user: User) => {
    // Handler receives authenticated user with verified permission
    return new Response('Deleted', { status: 200 })
  }
)
```

## Rate Limiting

### Simple In-Memory Rate Limiter

```typescript
class RateLimiter {
  private requests = new Map<string, number[]>()

  async checkLimit(
    identifier: string,
    maxRequests: number,
    windowMs: number
  ): Promise<boolean> {
    const now = Date.now()
    const requests = this.requests.get(identifier) || []

    // Remove old requests outside window
    const recentRequests = requests.filter(time => now - time < windowMs)

    if (recentRequests.length >= maxRequests) {
      return false // Rate limit exceeded
    }

    // Add current request
    recentRequests.push(now)
    this.requests.set(identifier, recentRequests)

    return true
  }
}

const limiter = new RateLimiter()

export async function GET(request: Request) {
  const ip = request.headers.get('x-forwarded-for') || 'unknown'

  const allowed = await limiter.checkLimit(ip, 100, 60000) // 100 req/min

  if (!allowed) {
    return NextResponse.json({
      error: 'Rate limit exceeded'
    }, { status: 429 })
  }

  // Continue with request
}
```

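The filter-and-push sliding-window check above can be exercised in isolation. A synchronous sketch of the same rule, with timestamps passed in explicitly so the behavior is deterministic:

```typescript
// Same sliding-window rule as checkLimit, with an injectable "now" value
function slidingWindowAllow(
  history: number[],
  now: number,
  maxRequests: number,
  windowMs: number
): { allowed: boolean; history: number[] } {
  // Keep only timestamps still inside the window
  const recent = history.filter(t => now - t < windowMs)
  if (recent.length >= maxRequests) {
    return { allowed: false, history: recent }
  }
  recent.push(now)
  return { allowed: true, history: recent }
}

// Three requests against a limit of 2 per 1000ms: the third is rejected
let r = slidingWindowAllow([], 0, 2, 1000)
r = slidingWindowAllow(r.history, 100, 2, 1000)
r = slidingWindowAllow(r.history, 200, 2, 1000)
console.log(r.allowed) // false
```

Because old timestamps are dropped on every call, an identifier that stops sending requests eventually frees its slots without any background cleanup.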
## Background Jobs & Queues

### Simple Queue Pattern

```typescript
class JobQueue<T> {
  private queue: T[] = []
  private processing = false

  async add(job: T): Promise<void> {
    this.queue.push(job)

    if (!this.processing) {
      this.process()
    }
  }

  private async process(): Promise<void> {
    this.processing = true

    while (this.queue.length > 0) {
      const job = this.queue.shift()!

      try {
        await this.execute(job)
      } catch (error) {
        console.error('Job failed:', error)
      }
    }

    this.processing = false
  }

  private async execute(job: T): Promise<void> {
    // Job execution logic
  }
}

// Usage for indexing markets
interface IndexJob {
  marketId: string
}

const indexQueue = new JobQueue<IndexJob>()

export async function POST(request: Request) {
  const { marketId } = await request.json()

  // Add to queue instead of blocking
  await indexQueue.add({ marketId })

  return NextResponse.json({ success: true, message: 'Job queued' })
}
```

## Logging & Monitoring

### Structured Logging

```typescript
interface LogContext {
  userId?: string
  requestId?: string
  method?: string
  path?: string
  [key: string]: unknown
}

class Logger {
  log(level: 'info' | 'warn' | 'error', message: string, context?: LogContext) {
    const entry = {
      timestamp: new Date().toISOString(),
      level,
      message,
      ...context
    }

    console.log(JSON.stringify(entry))
  }

  info(message: string, context?: LogContext) {
    this.log('info', message, context)
  }

  warn(message: string, context?: LogContext) {
    this.log('warn', message, context)
  }

  error(message: string, error: Error, context?: LogContext) {
    this.log('error', message, {
      ...context,
      error: error.message,
      stack: error.stack
    })
  }
}

const logger = new Logger()

// Usage
export async function GET(request: Request) {
  const requestId = crypto.randomUUID()

  logger.info('Fetching markets', {
    requestId,
    method: 'GET',
    path: '/api/markets'
  })

  try {
    const markets = await fetchMarkets()
    return NextResponse.json({ success: true, data: markets })
  } catch (error) {
    logger.error('Failed to fetch markets', error as Error, { requestId })
    return NextResponse.json({ error: 'Internal error' }, { status: 500 })
  }
}
```

**Remember**: Backend patterns enable scalable, maintainable server-side applications. Choose patterns that fit your complexity level.

532 .kiro/skills/coding-standards/SKILL.md Normal file
@@ -0,0 +1,532 @@
---
name: coding-standards
description: >
  Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development.
metadata:
  origin: ECC
---

# Coding Standards & Best Practices

Universal coding standards applicable across all projects.

## When to Activate

- Starting a new project or module
- Reviewing code for quality and maintainability
- Refactoring existing code to follow conventions
- Enforcing naming, formatting, or structural consistency
- Setting up linting, formatting, or type-checking rules
- Onboarding new contributors to coding conventions

## Code Quality Principles

### 1. Readability First
- Code is read more than written
- Clear variable and function names
- Self-documenting code preferred over comments
- Consistent formatting

### 2. KISS (Keep It Simple, Stupid)
- Simplest solution that works
- Avoid over-engineering
- No premature optimization
- Easy to understand > clever code

### 3. DRY (Don't Repeat Yourself)
- Extract common logic into functions
- Create reusable components
- Share utilities across modules
- Avoid copy-paste programming

### 4. YAGNI (You Aren't Gonna Need It)
- Don't build features before they're needed
- Avoid speculative generality
- Add complexity only when required
- Start simple, refactor when needed

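The DRY principle above in practice: extract the repeated logic once and call it everywhere. A small sketch with a hypothetical `formatUSD` helper:

```typescript
// ❌ BAD: The same formatting logic copy-pasted at each call site
// const price = '$' + market.price.toFixed(2)
// const fee = '$' + order.fee.toFixed(2)

// ✅ GOOD: Extracted once, reused everywhere
function formatUSD(amount: number): string {
  return '$' + amount.toFixed(2)
}

console.log(formatUSD(19.5)) // "$19.50"
console.log(formatUSD(1000)) // "$1000.00"
```

When the formatting rule changes (say, thousands separators), there is exactly one place to update.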
## TypeScript/JavaScript Standards

### Variable Naming

```typescript
// ✅ GOOD: Descriptive names
const marketSearchQuery = 'election'
const isUserAuthenticated = true
const totalRevenue = 1000

// ❌ BAD: Unclear names
const q = 'election'
const flag = true
const x = 1000
```

### Function Naming

```typescript
// ✅ GOOD: Verb-noun pattern
async function fetchMarketData(marketId: string) { }
function calculateSimilarity(a: number[], b: number[]) { }
function isValidEmail(email: string): boolean { }

// ❌ BAD: Unclear or noun-only
async function market(id: string) { }
function similarity(a, b) { }
function email(e) { }
```

### Immutability Pattern (CRITICAL)

```typescript
// ✅ ALWAYS use spread operator
const updatedUser = {
  ...user,
  name: 'New Name'
}

const updatedArray = [...items, newItem]

// ❌ NEVER mutate directly
user.name = 'New Name' // BAD
items.push(newItem) // BAD
```

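One caveat to the immutability pattern above: spread copies only one level deep. Nested objects need their own spread, as in this sketch:

```typescript
const user = {
  name: 'Ada',
  settings: { theme: 'dark', locale: 'en' }
}

// ❌ A shallow copy still shares the nested object
const shallow = { ...user }
// shallow.settings === user.settings (same reference)

// ✅ Spread each level you intend to change
const updated = {
  ...user,
  settings: { ...user.settings, theme: 'light' }
}

console.log(updated.settings.theme) // "light"
console.log(user.settings.theme)    // "dark" (original untouched)
```

For deeply nested state, helpers like structuredClone or an immutable-update library may be worth it, but nested spread covers most cases.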
### Error Handling

```typescript
// ✅ GOOD: Comprehensive error handling
async function fetchData(url: string) {
  try {
    const response = await fetch(url)

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`)
    }

    return await response.json()
  } catch (error) {
    console.error('Fetch failed:', error)
    throw new Error('Failed to fetch data')
  }
}

// ❌ BAD: No error handling
async function fetchData(url) {
  const response = await fetch(url)
  return response.json()
}
```

### Async/Await Best Practices

```typescript
// ✅ GOOD: Parallel execution when possible
const [users, markets, stats] = await Promise.all([
  fetchUsers(),
  fetchMarkets(),
  fetchStats()
])

// ❌ BAD: Sequential when unnecessary
const users = await fetchUsers()
const markets = await fetchMarkets()
const stats = await fetchStats()
```

### Type Safety

```typescript
// ✅ GOOD: Proper types
interface Market {
  id: string
  name: string
  status: 'active' | 'resolved' | 'closed'
  created_at: Date
}

function getMarket(id: string): Promise<Market> {
  // Implementation
}

// ❌ BAD: Using 'any'
function getMarket(id: any): Promise<any> {
  // Implementation
}
```

## React Best Practices

### Component Structure

```typescript
// ✅ GOOD: Functional component with types
interface ButtonProps {
  children: React.ReactNode
  onClick: () => void
  disabled?: boolean
  variant?: 'primary' | 'secondary'
}

export function Button({
  children,
  onClick,
  disabled = false,
  variant = 'primary'
}: ButtonProps) {
  return (
    <button
      onClick={onClick}
      disabled={disabled}
      className={`btn btn-${variant}`}
    >
      {children}
    </button>
  )
}

// ❌ BAD: No types, unclear structure
export function Button(props) {
  return <button onClick={props.onClick}>{props.children}</button>
}
```

### Custom Hooks

```typescript
import { useEffect, useState } from 'react'

// ✅ GOOD: Reusable custom hook
export function useDebounce<T>(value: T, delay: number): T {
  const [debouncedValue, setDebouncedValue] = useState<T>(value)

  useEffect(() => {
    const handler = setTimeout(() => {
      setDebouncedValue(value)
    }, delay)

    return () => clearTimeout(handler)
  }, [value, delay])

  return debouncedValue
}

// Usage
const debouncedQuery = useDebounce(searchQuery, 500)
```

### State Management

```typescript
// ✅ GOOD: Proper state updates
const [count, setCount] = useState(0)

// Functional update for state based on previous state
setCount(prev => prev + 1)

// ❌ BAD: Direct state reference
setCount(count + 1) // Can be stale in async scenarios
```

### Conditional Rendering

```typescript
// ✅ GOOD: Clear conditional rendering
{isLoading && <Spinner />}
{error && <ErrorMessage error={error} />}
{data && <DataDisplay data={data} />}

// ❌ BAD: Ternary hell
{isLoading ? <Spinner /> : error ? <ErrorMessage error={error} /> : data ? <DataDisplay data={data} /> : null}
```

## API Design Standards

### REST API Conventions

```
GET    /api/markets           # List all markets
GET    /api/markets/:id       # Get specific market
POST   /api/markets           # Create new market
PUT    /api/markets/:id       # Update market (full)
PATCH  /api/markets/:id       # Update market (partial)
DELETE /api/markets/:id       # Delete market

# Query parameters for filtering
GET /api/markets?status=active&limit=10&offset=0
```

### Response Format

```typescript
// ✅ GOOD: Consistent response structure
interface ApiResponse<T> {
  success: boolean
  data?: T
  error?: string
  meta?: {
    total: number
    page: number
    limit: number
  }
}

// Success response
return NextResponse.json({
  success: true,
  data: markets,
  meta: { total: 100, page: 1, limit: 10 }
})

// Error response
return NextResponse.json({
  success: false,
  error: 'Invalid request'
}, { status: 400 })
```

### Input Validation

```typescript
import { z } from 'zod'

// ✅ GOOD: Schema validation
const CreateMarketSchema = z.object({
  name: z.string().min(1).max(200),
  description: z.string().min(1).max(2000),
  endDate: z.string().datetime(),
  categories: z.array(z.string()).min(1)
})

export async function POST(request: Request) {
  const body = await request.json()

  try {
    const validated = CreateMarketSchema.parse(body)
    // Proceed with validated data
  } catch (error) {
    if (error instanceof z.ZodError) {
      return NextResponse.json({
        success: false,
        error: 'Validation failed',
        details: error.errors
      }, { status: 400 })
    }
  }
}
```

## File Organization

### Project Structure

```
src/
├── app/              # Next.js App Router
│   ├── api/          # API routes
│   ├── markets/      # Market pages
│   └── (auth)/       # Auth pages (route groups)
├── components/       # React components
│   ├── ui/           # Generic UI components
│   ├── forms/        # Form components
│   └── layouts/      # Layout components
├── hooks/            # Custom React hooks
├── lib/              # Utilities and configs
│   ├── api/          # API clients
│   ├── utils/        # Helper functions
│   └── constants/    # Constants
├── types/            # TypeScript types
└── styles/           # Global styles
```

### File Naming

```
components/Button.tsx     # PascalCase for components
hooks/useAuth.ts          # camelCase with 'use' prefix
lib/formatDate.ts         # camelCase for utilities
types/market.types.ts     # camelCase with .types suffix
```

## Comments & Documentation

### When to Comment

```typescript
// ✅ GOOD: Explain WHY, not WHAT
// Use exponential backoff to avoid overwhelming the API during outages
const delay = Math.min(1000 * Math.pow(2, retryCount), 30000)

// Deliberately using mutation here for performance with large arrays
items.push(newItem)

// ❌ BAD: Stating the obvious
// Increment counter by 1
count++

// Set name to user's name
name = user.name
```

### JSDoc for Public APIs

```typescript
/**
 * Searches markets using semantic similarity.
 *
 * @param query - Natural language search query
 * @param limit - Maximum number of results (default: 10)
 * @returns Array of markets sorted by similarity score
 * @throws {Error} If OpenAI API fails or Redis unavailable
 *
 * @example
 * ```typescript
 * const results = await searchMarkets('election', 5)
 * console.log(results[0].name) // "Trump vs Biden"
 * ```
 */
export async function searchMarkets(
  query: string,
  limit: number = 10
): Promise<Market[]> {
  // Implementation
}
```

## Performance Best Practices

### Memoization

```typescript
import { useMemo, useCallback } from 'react'

// ✅ GOOD: Memoize expensive computations
// (copy before sorting so the source array is not mutated)
const sortedMarkets = useMemo(() => {
  return [...markets].sort((a, b) => b.volume - a.volume)
}, [markets])

// ✅ GOOD: Memoize callbacks
const handleSearch = useCallback((query: string) => {
  setSearchQuery(query)
}, [])
```

### Lazy Loading

```typescript
import { lazy, Suspense } from 'react'

// ✅ GOOD: Lazy load heavy components
const HeavyChart = lazy(() => import('./HeavyChart'))

export function Dashboard() {
  return (
    <Suspense fallback={<Spinner />}>
      <HeavyChart />
    </Suspense>
  )
}
```

### Database Queries

```typescript
// ✅ GOOD: Select only needed columns
const { data } = await supabase
  .from('markets')
  .select('id, name, status')
  .limit(10)

// ❌ BAD: Select everything
const { data } = await supabase
  .from('markets')
  .select('*')
```

## Testing Standards

### Test Structure (AAA Pattern)

```typescript
test('calculates similarity correctly', () => {
  // Arrange
  const vector1 = [1, 0, 0]
  const vector2 = [0, 1, 0]

  // Act
  const similarity = calculateCosineSimilarity(vector1, vector2)

  // Assert
  expect(similarity).toBe(0)
})
```

### Test Naming

```typescript
// ✅ GOOD: Descriptive test names
test('returns empty array when no markets match query', () => { })
test('throws error when OpenAI API key is missing', () => { })
test('falls back to substring search when Redis unavailable', () => { })

// ❌ BAD: Vague test names
test('works', () => { })
test('test search', () => { })
```

## Code Smell Detection

Watch for these anti-patterns:

### 1. Long Functions
```typescript
// ❌ BAD: Function > 50 lines
function processMarketData() {
  // 100 lines of code
}

// ✅ GOOD: Split into smaller functions
function processMarketData() {
  const validated = validateData()
  const transformed = transformData(validated)
  return saveData(transformed)
}
```

### 2. Deep Nesting
```typescript
// ❌ BAD: 5+ levels of nesting
if (user) {
  if (user.isAdmin) {
    if (market) {
      if (market.isActive) {
        if (hasPermission) {
          // Do something
        }
      }
    }
  }
}

// ✅ GOOD: Early returns
if (!user) return
if (!user.isAdmin) return
if (!market) return
if (!market.isActive) return
if (!hasPermission) return

// Do something
```

### 3. Magic Numbers
```typescript
// ❌ BAD: Unexplained numbers
if (retryCount > 3) { }
setTimeout(callback, 500)

// ✅ GOOD: Named constants
const MAX_RETRIES = 3
const DEBOUNCE_DELAY_MS = 500

if (retryCount > MAX_RETRIES) { }
setTimeout(callback, DEBOUNCE_DELAY_MS)
```

**Remember**: Code quality is not negotiable. Clear, maintainable code enables rapid development and confident refactoring.

348 .kiro/skills/database-migrations/SKILL.md Normal file
@@ -0,0 +1,348 @@
---
name: database-migrations
description: >
  Database migration best practices for schema changes, data migrations, rollbacks,
  and zero-downtime deployments across PostgreSQL, MySQL, and common ORMs (Prisma,
  Drizzle, Django, TypeORM, golang-migrate). Use when planning or implementing
  database schema changes.
metadata:
  origin: ECC
---

# Database Migration Patterns

Safe, reversible database schema changes for production systems.

## When to Activate

- Creating or altering database tables
- Adding/removing columns or indexes
- Running data migrations (backfill, transform)
- Planning zero-downtime schema changes
- Setting up migration tooling for a new project

## Core Principles

1. **Every change is a migration** — never alter production databases manually
2. **Migrations are forward-only in production** — rollbacks use new forward migrations
3. **Schema and data migrations are separate** — never mix DDL and DML in one migration
4. **Test migrations against production-sized data** — a migration that works on 100 rows may lock on 10M
5. **Migrations are immutable once deployed** — never edit a migration that has run in production

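As an illustration of principles 1-3, a migration history might look like this (hypothetical file names; each file contains either DDL or DML, never both, and a revert ships as a new forward file):

```
migrations/
  0001_add_display_name_column.sql   # schema change (DDL only)
  0002_backfill_display_name.sql     # data migration (DML only)
  0003_drop_username_column.sql      # schema change (DDL only)
```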
## Migration Safety Checklist

Before applying any migration:

- [ ] Migration has both UP and DOWN (or is explicitly marked irreversible)
- [ ] No full table locks on large tables (use concurrent operations)
- [ ] New columns have defaults or are nullable (never add NOT NULL without default)
- [ ] Indexes created concurrently (not inline with CREATE TABLE for existing tables)
- [ ] Data backfill is a separate migration from schema change
- [ ] Tested against a copy of production data
- [ ] Rollback plan documented

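The "backfill is a separate migration" item often ends up as a small driver script. A hedged sketch of such a driver, with the batch executor injected so the loop logic stays independent of any particular database client (hypothetical `execBatch` callback):

```typescript
// execBatch runs one bounded UPDATE and returns the number of rows touched.
// Injecting it keeps the loop testable without a real database connection.
async function runBatchedBackfill(
  execBatch: (batchSize: number) => Promise<number>,
  batchSize = 10000
): Promise<number> {
  let total = 0
  while (true) {
    const updated = await execBatch(batchSize)
    total += updated
    if (updated === 0) break // nothing left to backfill
  }
  return total
}

// Usage with a fake executor that "backfills" 25k rows in three batches
const batches = [10000, 10000, 5000, 0]
runBatchedBackfill(async () => batches.shift() ?? 0).then(total => {
  console.log(total) // 25000
})
```

Stopping only when a batch touches zero rows makes the script safe to re-run after an interruption: it simply picks up where it left off.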
## PostgreSQL Patterns
|
||||||
|
|
||||||
|
### Adding a Column Safely
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- GOOD: Nullable column, no lock
|
||||||
|
ALTER TABLE users ADD COLUMN avatar_url TEXT;
|
||||||
|
|
||||||
|
-- GOOD: Column with default (Postgres 11+ is instant, no rewrite)
|
||||||
|
ALTER TABLE users ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT true;
|
||||||
|
|
||||||
|
-- BAD: NOT NULL without default on existing table (requires full rewrite)
|
||||||
|
ALTER TABLE users ADD COLUMN role TEXT NOT NULL;
|
||||||
|
-- This locks the table and rewrites every row
|
||||||
|
```
|
||||||
|
|
||||||
|
### Adding an Index Without Downtime
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- BAD: Blocks writes on large tables
|
||||||
|
CREATE INDEX idx_users_email ON users (email);
|
||||||
|
|
||||||
|
-- GOOD: Non-blocking, allows concurrent writes
|
||||||
|
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
|
||||||
|
|
||||||
|
-- Note: CONCURRENTLY cannot run inside a transaction block
|
||||||
|
-- Most migration tools need special handling for this
|
||||||
|
```
|
||||||
|
|
||||||
|
### Renaming a Column (Zero-Downtime)
|
||||||
|
|
||||||
|
Never rename directly in production. Use the expand-contract pattern:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Step 1: Add new column (migration 001)
|
||||||
|
ALTER TABLE users ADD COLUMN display_name TEXT;
|
||||||
|
|
||||||
|
-- Step 2: Backfill data (migration 002, data migration)
|
||||||
|
UPDATE users SET display_name = username WHERE display_name IS NULL;
|
||||||
|
|
||||||
|
-- Step 3: Update application code to read/write both columns
|
||||||
|
-- Deploy application changes
|
||||||
|
|
||||||
|
-- Step 4: Stop writing to old column, drop it (migration 003)
|
||||||
|
ALTER TABLE users DROP COLUMN username;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Removing a Column Safely
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Step 1: Remove all application references to the column
|
||||||
|
-- Step 2: Deploy application without the column reference
|
||||||
|
-- Step 3: Drop column in next migration
|
||||||
|
ALTER TABLE orders DROP COLUMN legacy_status;
|
||||||
|
|
||||||
|
-- For Django: use SeparateDatabaseAndState to remove from model
|
||||||
|
-- without generating DROP COLUMN (then drop in next migration)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Large Data Migrations
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- BAD: Updates all rows in one transaction (locks table)
|
||||||
|
UPDATE users SET normalized_email = LOWER(email);
|
||||||
|
|
||||||
|
-- GOOD: Batch update with progress
|
||||||
|
DO $$
|
||||||
|
DECLARE
|
||||||
|
batch_size INT := 10000;
|
||||||
|
rows_updated INT;
|
||||||
|
BEGIN
|
||||||
|
LOOP
|
||||||
|
UPDATE users
|
||||||
|
SET normalized_email = LOWER(email)
|
||||||
|
WHERE id IN (
|
||||||
|
SELECT id FROM users
|
||||||
|
WHERE normalized_email IS NULL
|
||||||
|
LIMIT batch_size
|
||||||
|
FOR UPDATE SKIP LOCKED
|
||||||
|
);
|
||||||
|
GET DIAGNOSTICS rows_updated = ROW_COUNT;
|
||||||
|
RAISE NOTICE 'Updated % rows', rows_updated;
|
||||||
|
EXIT WHEN rows_updated = 0;
|
||||||
|
COMMIT;
|
||||||
|
END LOOP;
|
||||||
|
END $$;
|
||||||
|
```
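
The same batching idea can also be driven from application code. A minimal Python sketch (pure stdlib; `batch_ranges` is a hypothetical helper, not part of any migration tool) that splits a sorted integer id range into half-open batches, each of which would back one small `UPDATE` committed on its own:

```python
def batch_ranges(min_id: int, max_id: int, batch_size: int):
    """Yield half-open (start, end) id ranges covering [min_id, max_id]."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size, max_id + 1)
        # Each range maps to one statement:
        # UPDATE users SET ... WHERE id >= start AND id < end
        yield (start, end)
        start = end

print(list(batch_ranges(1, 25, 10)))  # [(1, 11), (11, 21), (21, 26)]
```

Because no single transaction touches the whole table, locks stay short and a failed batch can be retried in isolation.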

## Prisma (TypeScript/Node.js)

### Workflow

```bash
# Create migration from schema changes
npx prisma migrate dev --name add_user_avatar

# Apply pending migrations in production
npx prisma migrate deploy

# Reset database (dev only)
npx prisma migrate reset

# Generate client after schema changes
npx prisma generate
```

### Schema Example

```prisma
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  name      String?
  avatarUrl String?  @map("avatar_url")
  createdAt DateTime @default(now()) @map("created_at")
  updatedAt DateTime @updatedAt @map("updated_at")
  orders    Order[]

  @@map("users")
  @@index([email])
}
```

### Custom SQL Migration

For operations Prisma cannot express (concurrent indexes, data backfills):

```bash
# Create empty migration, then edit the SQL manually
npx prisma migrate dev --create-only --name add_email_index
```

```sql
-- migrations/20240115_add_email_index/migration.sql
-- Prisma cannot generate CONCURRENTLY, so we write it manually
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);
```

## Drizzle (TypeScript/Node.js)

### Workflow

```bash
# Generate migration from schema changes
npx drizzle-kit generate

# Apply migrations
npx drizzle-kit migrate

# Push schema directly (dev only, no migration file)
npx drizzle-kit push
```

### Schema Example

```typescript
import { pgTable, text, timestamp, uuid, boolean } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: text("email").notNull().unique(),
  name: text("name"),
  isActive: boolean("is_active").notNull().default(true),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});
```

## Django (Python)

### Workflow

```bash
# Generate migration from model changes
python manage.py makemigrations

# Apply migrations
python manage.py migrate

# Show migration status
python manage.py showmigrations

# Generate empty migration for custom SQL
python manage.py makemigrations --empty app_name -n description
```

### Data Migration

```python
from django.db import migrations

def backfill_display_names(apps, schema_editor):
    User = apps.get_model("accounts", "User")
    batch_size = 5000
    users = User.objects.filter(display_name="")
    while users.exists():
        batch = list(users[:batch_size])
        for user in batch:
            user.display_name = user.username
        User.objects.bulk_update(batch, ["display_name"], batch_size=batch_size)

def reverse_backfill(apps, schema_editor):
    pass  # Data migration, no reverse needed

class Migration(migrations.Migration):
    dependencies = [("accounts", "0015_add_display_name")]

    operations = [
        migrations.RunPython(backfill_display_names, reverse_backfill),
    ]
```

### SeparateDatabaseAndState

Remove a column from the Django model without dropping it from the database immediately:

```python
class Migration(migrations.Migration):
    operations = [
        migrations.SeparateDatabaseAndState(
            state_operations=[
                migrations.RemoveField(model_name="user", name="legacy_field"),
            ],
            database_operations=[],  # Don't touch the DB yet
        ),
    ]
```

## golang-migrate (Go)

### Workflow

```bash
# Create migration pair
migrate create -ext sql -dir migrations -seq add_user_avatar

# Apply all pending migrations
migrate -path migrations -database "$DATABASE_URL" up

# Rollback last migration
migrate -path migrations -database "$DATABASE_URL" down 1

# Force version (fix dirty state)
migrate -path migrations -database "$DATABASE_URL" force VERSION
```

### Migration Files

```sql
-- migrations/000003_add_user_avatar.up.sql
ALTER TABLE users ADD COLUMN avatar_url TEXT;
CREATE INDEX CONCURRENTLY idx_users_avatar ON users (avatar_url) WHERE avatar_url IS NOT NULL;

-- migrations/000003_add_user_avatar.down.sql
DROP INDEX IF EXISTS idx_users_avatar;
ALTER TABLE users DROP COLUMN IF EXISTS avatar_url;
```

## Zero-Downtime Migration Strategy

For critical production changes, follow the expand-contract pattern:

```
Phase 1: EXPAND
- Add new column/table (nullable or with default)
- Deploy: app writes to BOTH old and new
- Backfill existing data

Phase 2: MIGRATE
- Deploy: app reads from NEW, writes to BOTH
- Verify data consistency

Phase 3: CONTRACT
- Deploy: app only uses NEW
- Drop old column/table in separate migration
```

### Timeline Example

```
Day 1: Migration adds new_status column (nullable)
Day 1: Deploy app v2 — writes to both status and new_status
Day 2: Run backfill migration for existing rows
Day 3: Deploy app v3 — reads from new_status only
Day 7: Migration drops old status column
```

## Anti-Patterns

| Anti-Pattern | Why It Fails | Better Approach |
|-------------|-------------|-----------------|
| Manual SQL in production | No audit trail, unrepeatable | Always use migration files |
| Editing deployed migrations | Causes drift between environments | Create new migration instead |
| NOT NULL without default | Locks table, rewrites all rows | Add nullable, backfill, then add constraint |
| Inline index on large table | Blocks writes during build | CREATE INDEX CONCURRENTLY |
| Schema + data in one migration | Hard to rollback, long transactions | Separate migrations |
| Dropping column before removing code | Application errors on missing column | Remove code first, drop column next deploy |

## When to Use This Skill

- Planning database schema changes
- Implementing zero-downtime migrations
- Setting up migration tooling
- Troubleshooting migration issues
- Reviewing migration pull requests
440
.kiro/skills/deployment-patterns/SKILL.md
Normal file
@@ -0,0 +1,440 @@

---
name: deployment-patterns
description: >
  Deployment workflows, CI/CD pipeline patterns, Docker containerization, health
  checks, rollback strategies, and production readiness checklists for web
  applications. Use when setting up deployment infrastructure or planning releases.
metadata:
  origin: ECC
---

# Deployment Patterns

Production deployment workflows and CI/CD best practices.

## When to Activate

- Setting up CI/CD pipelines
- Dockerizing an application
- Planning deployment strategy (blue-green, canary, rolling)
- Implementing health checks and readiness probes
- Preparing for a production release
- Configuring environment-specific settings

## Deployment Strategies

### Rolling Deployment (Default)

Replace instances gradually — old and new versions run simultaneously during rollout.

```
Instance 1: v1 → v2 (update first)
Instance 2: v1      (still running v1)
Instance 3: v1      (still running v1)

Instance 1: v2
Instance 2: v1 → v2 (update second)
Instance 3: v1

Instance 1: v2
Instance 2: v2
Instance 3: v1 → v2 (update last)
```

**Pros:** Zero downtime, gradual rollout
**Cons:** Two versions run simultaneously — requires backward-compatible changes
**Use when:** Standard deployments, backward-compatible changes

### Blue-Green Deployment

Run two identical environments. Switch traffic atomically.

```
Blue  (v1) ← traffic
Green (v2) idle, running new version

# After verification:
Blue  (v1) idle (becomes standby)
Green (v2) ← traffic
```

**Pros:** Instant rollback (switch back to blue), clean cutover
**Cons:** Requires 2x infrastructure during deployment
**Use when:** Critical services, zero-tolerance for issues

### Canary Deployment

Route a small percentage of traffic to the new version first.

```
v1: 95% of traffic
v2: 5% of traffic (canary)

# If metrics look good:
v1: 50% of traffic
v2: 50% of traffic

# Final:
v2: 100% of traffic
```
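
Traffic splitting is normally handled by the load balancer or service mesh, but the core mechanism is deterministic bucketing. A minimal Python sketch (hypothetical helper, not tied to any particular proxy) that routes a stable percentage of users to the canary:

```python
import hashlib

def routes_to_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket; route buckets below the threshold to v2."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < canary_percent
```

Because the bucket is derived from the user id rather than a random draw, the same user is routed consistently across requests, so session behavior does not flip between v1 and v2 mid-rollout.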

**Pros:** Catches issues with real traffic before full rollout
**Cons:** Requires traffic splitting infrastructure, monitoring
**Use when:** High-traffic services, risky changes, feature flags

## Docker

### Multi-Stage Dockerfile (Node.js)

```dockerfile
# Stage 1: Install dependencies
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production=false

# Stage 2: Build
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
RUN npm prune --production

# Stage 3: Production image
FROM node:22-alpine AS runner
WORKDIR /app

RUN addgroup -g 1001 -S appgroup && adduser -S appuser -u 1001
USER appuser

COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/package.json ./

ENV NODE_ENV=production
EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "dist/server.js"]
```

### Multi-Stage Dockerfile (Go)

```dockerfile
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /server ./cmd/server

FROM alpine:3.19 AS runner
RUN apk --no-cache add ca-certificates
RUN adduser -D -u 1001 appuser
USER appuser

COPY --from=builder /server /server

EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:8080/health || exit 1
CMD ["/server"]
```

### Multi-Stage Dockerfile (Python/Django)

```dockerfile
FROM python:3.12-slim AS builder
WORKDIR /app
RUN pip install --no-cache-dir uv
COPY requirements.txt .
RUN uv pip install --system --no-cache -r requirements.txt

FROM python:3.12-slim AS runner
WORKDIR /app

RUN useradd -r -u 1001 appuser
USER appuser

COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
COPY . .

ENV PYTHONUNBUFFERED=1
EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=3s CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health/')" || exit 1
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "4"]
```

### Docker Best Practices

```
# GOOD practices
- Use specific version tags (node:22-alpine, not node:latest)
- Multi-stage builds to minimize image size
- Run as non-root user
- Copy dependency files first (layer caching)
- Use .dockerignore to exclude node_modules, .git, tests
- Add HEALTHCHECK instruction
- Set resource limits in docker-compose or k8s

# BAD practices
- Running as root
- Using :latest tags
- Copying entire repo in one COPY layer
- Installing dev dependencies in production image
- Storing secrets in image (use env vars or secrets manager)
```

## CI/CD Pipeline

### GitHub Actions (Standard Pipeline)

```yaml
name: CI/CD

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm test -- --coverage
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: coverage
          path: coverage/

  build:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - name: Deploy to production
        run: |
          # Platform-specific deployment command
          # Railway: railway up
          # Vercel: vercel --prod
          # K8s: kubectl set image deployment/app app=ghcr.io/${{ github.repository }}:${{ github.sha }}
          echo "Deploying ${{ github.sha }}"
```

### Pipeline Stages

```
PR opened:
  lint → typecheck → unit tests → integration tests → preview deploy

Merged to main:
  lint → typecheck → unit tests → integration tests → build image → deploy staging → smoke tests → deploy production
```

## Health Checks

### Health Check Endpoint

```typescript
// Simple health check
app.get("/health", (req, res) => {
  res.status(200).json({ status: "ok" });
});

type HealthCheck =
  | { status: "ok"; latency_ms: number }
  | { status: "error"; message: string };

// Detailed health check (for internal monitoring)
app.get("/health/detailed", async (req, res) => {
  const checks = {
    database: await checkDatabase(),
    redis: await checkRedis(),
    externalApi: await checkExternalApi(),
  };

  const allHealthy = Object.values(checks).every(c => c.status === "ok");

  res.status(allHealthy ? 200 : 503).json({
    status: allHealthy ? "ok" : "degraded",
    timestamp: new Date().toISOString(),
    version: process.env.APP_VERSION || "unknown",
    uptime: process.uptime(),
    checks,
  });
});

async function checkDatabase(): Promise<HealthCheck> {
  const start = Date.now();
  try {
    await db.query("SELECT 1");
    return { status: "ok", latency_ms: Date.now() - start };
  } catch (err) {
    return { status: "error", message: "Database unreachable" };
  }
}
```

### Kubernetes Probes

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 2

startupProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 0
  periodSeconds: 5
  failureThreshold: 30  # 30 * 5s = 150s max startup time
```

## Environment Configuration

### Twelve-Factor App Pattern

```bash
# All config via environment variables — never in code
DATABASE_URL=postgres://user:pass@host:5432/db
REDIS_URL=redis://host:6379/0
API_KEY=${API_KEY}  # injected by secrets manager
LOG_LEVEL=info
PORT=3000

# Environment-specific behavior
NODE_ENV=production  # or staging, development
APP_ENV=production   # explicit app environment
```

### Configuration Validation

```typescript
import { z } from "zod";

const envSchema = z.object({
  NODE_ENV: z.enum(["development", "staging", "production"]),
  PORT: z.coerce.number().default(3000),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  JWT_SECRET: z.string().min(32),
  LOG_LEVEL: z.enum(["debug", "info", "warn", "error"]).default("info"),
});

// Validate at startup — fail fast if config is wrong
export const env = envSchema.parse(process.env);
```
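
The same fail-fast idea works without a schema library. A minimal stdlib-only Python sketch for the Django/Python stack covered above (`load_env` and the `REQUIRED` tuple are hypothetical names, shown as an assumption rather than a standard API):

```python
import os

REQUIRED = ("DATABASE_URL", "REDIS_URL", "JWT_SECRET")

def load_env(environ=os.environ) -> dict:
    """Validate required settings at startup; raise immediately if any are missing or invalid."""
    missing = [name for name in REQUIRED if not environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    if len(environ["JWT_SECRET"]) < 32:
        raise RuntimeError("JWT_SECRET must be at least 32 characters")
    return {name: environ[name] for name in REQUIRED}
```

Calling this once at import time surfaces a bad deploy within seconds instead of at the first request that touches the misconfigured dependency.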

## Rollback Strategy

### Instant Rollback

```bash
# Docker/Kubernetes: point to previous image
kubectl rollout undo deployment/app

# Vercel: promote previous deployment
vercel rollback

# Railway: redeploy previous commit
railway up --commit <previous-sha>

# Database: rollback migration (if reversible)
npx prisma migrate resolve --rolled-back <migration-name>
```

### Rollback Checklist

- [ ] Previous image/artifact is available and tagged
- [ ] Database migrations are backward-compatible (no destructive changes)
- [ ] Feature flags can disable new features without deploy
- [ ] Monitoring alerts configured for error rate spikes
- [ ] Rollback tested in staging before production release

## Production Readiness Checklist

Before any production deployment:

### Application
- [ ] All tests pass (unit, integration, E2E)
- [ ] No hardcoded secrets in code or config files
- [ ] Error handling covers all edge cases
- [ ] Logging is structured (JSON) and does not contain PII
- [ ] Health check endpoint returns meaningful status

### Infrastructure
- [ ] Docker image builds reproducibly (pinned versions)
- [ ] Environment variables documented and validated at startup
- [ ] Resource limits set (CPU, memory)
- [ ] Horizontal scaling configured (min/max instances)
- [ ] SSL/TLS enabled on all endpoints

### Monitoring
- [ ] Application metrics exported (request rate, latency, errors)
- [ ] Alerts configured for error rate > threshold
- [ ] Log aggregation set up (structured logs, searchable)
- [ ] Uptime monitoring on health endpoint

### Security
- [ ] Dependencies scanned for CVEs
- [ ] CORS configured for allowed origins only
- [ ] Rate limiting enabled on public endpoints
- [ ] Authentication and authorization verified
- [ ] Security headers set (CSP, HSTS, X-Frame-Options)

### Operations
- [ ] Rollback plan documented and tested
- [ ] Database migration tested against production-sized data
- [ ] Runbook for common failure scenarios
- [ ] On-call rotation and escalation path defined

## When to Use This Skill

- Setting up CI/CD pipelines
- Dockerizing applications
- Planning deployment strategies
- Implementing health checks
- Preparing for production releases
- Troubleshooting deployment issues
376
.kiro/skills/docker-patterns/SKILL.md
Normal file
@@ -0,0 +1,376 @@
|
|||||||
|
---
|
||||||
|
name: docker-patterns
|
||||||
|
description: >
|
||||||
|
Docker and Docker Compose patterns for local development, container security,
|
||||||
|
networking, volume strategies, and multi-service orchestration. Use when
|
||||||
|
setting up containerized development environments or reviewing Docker configurations.
|
||||||
|
metadata:
|
||||||
|
origin: ECC
|
||||||
|
---
|
||||||
|
|
||||||
|
# Docker Patterns
|
||||||
|
|
||||||
|
Docker and Docker Compose best practices for containerized development.
|
||||||
|
|
||||||
|
## When to Activate
|
||||||
|
|
||||||
|
- Setting up Docker Compose for local development
|
||||||
|
- Designing multi-container architectures
|
||||||
|
- Troubleshooting container networking or volume issues
|
||||||
|
- Reviewing Dockerfiles for security and size
|
||||||
|
- Migrating from local dev to containerized workflow
|
||||||
|
|
||||||
|
## Docker Compose for Local Development
|
||||||
|
|
||||||
|
### Standard Web App Stack
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml
|
||||||
|
services:
|
||||||
|
app:
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
target: dev # Use dev stage of multi-stage Dockerfile
|
||||||
|
ports:
|
||||||
|
- "3000:3000"
|
||||||
|
volumes:
|
||||||
|
- .:/app # Bind mount for hot reload
|
||||||
|
- /app/node_modules # Anonymous volume -- preserves container deps
|
||||||
|
environment:
|
||||||
|
- DATABASE_URL=postgres://postgres:postgres@db:5432/app_dev
|
||||||
|
- REDIS_URL=redis://redis:6379/0
|
||||||
|
- NODE_ENV=development
|
||||||
|
depends_on:
|
||||||
|
db:
|
||||||
|
condition: service_healthy
|
||||||
|
redis:
|
||||||
|
condition: service_started
|
||||||
|
command: npm run dev
|
||||||
|
|
||||||
|
db:
|
||||||
|
image: postgres:16-alpine
|
||||||
|
ports:
|
||||||
|
- "5432:5432"
|
||||||
|
environment:
|
||||||
|
POSTGRES_USER: postgres
|
||||||
|
POSTGRES_PASSWORD: postgres
|
||||||
|
POSTGRES_DB: app_dev
|
||||||
|
volumes:
|
||||||
|
- pgdata:/var/lib/postgresql/data
|
||||||
|
- ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init.sql
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U postgres"]
|
||||||
|
interval: 5s
|
||||||
|
timeout: 3s
|
||||||
|
retries: 5
|
||||||
|
|
||||||
|
redis:
|
||||||
|
image: redis:7-alpine
|
||||||
|
ports:
|
||||||
|
- "6379:6379"
|
||||||
|
volumes:
|
||||||
|
- redisdata:/data
|
||||||
|
|
||||||
|
mailpit: # Local email testing
|
||||||
|
image: axllent/mailpit
|
||||||
|
ports:
|
||||||
|
- "8025:8025" # Web UI
|
||||||
|
- "1025:1025" # SMTP
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
pgdata:
|
||||||
|
redisdata:
|
||||||
|
```
|
||||||
|
|
||||||
|
### Development vs Production Dockerfile
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
# Stage: dependencies
|
||||||
|
FROM node:22-alpine AS deps
|
||||||
|
WORKDIR /app
|
||||||
|
COPY package.json package-lock.json ./
|
||||||
|
RUN npm ci
|
||||||
|
|
||||||
|
# Stage: dev (hot reload, debug tools)
|
||||||
|
FROM node:22-alpine AS dev
|
||||||
|
WORKDIR /app
|
||||||
|
COPY --from=deps /app/node_modules ./node_modules
|
||||||
|
COPY . .
|
||||||
|
EXPOSE 3000
|
||||||
|
CMD ["npm", "run", "dev"]
|
||||||
|
|
||||||
|
# Stage: build
|
||||||
|
FROM node:22-alpine AS build
|
||||||
|
WORKDIR /app
|
||||||
|
COPY --from=deps /app/node_modules ./node_modules
|
||||||
|
COPY . .
|
||||||
|
RUN npm run build && npm prune --production
|
||||||
|
|
||||||
|
# Stage: production (minimal image)
|
||||||
|
FROM node:22-alpine AS production
|
||||||
|
WORKDIR /app
|
||||||
|
RUN addgroup -g 1001 -S appgroup && adduser -S appuser -u 1001
|
||||||
|
USER appuser
|
||||||
|
COPY --from=build --chown=appuser:appgroup /app/dist ./dist
|
||||||
|
COPY --from=build --chown=appuser:appgroup /app/node_modules ./node_modules
|
||||||
|
COPY --from=build --chown=appuser:appgroup /app/package.json ./
|
||||||
|
ENV NODE_ENV=production
|
||||||
|
EXPOSE 3000
|
||||||
|
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:3000/health || exit 1
|
||||||
|
CMD ["node", "dist/server.js"]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Override Files
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# docker-compose.override.yml (auto-loaded, dev-only settings)
|
||||||
|
services:
|
||||||
|
app:
|
||||||
|
environment:
|
||||||
|
- DEBUG=app:*
|
||||||
|
- LOG_LEVEL=debug
|
||||||
|
ports:
|
||||||
|
- "9229:9229" # Node.js debugger
|
||||||
|
|
||||||
|
# docker-compose.prod.yml (explicit for production)
|
||||||
|
services:
|
||||||
|
app:
|
||||||
|
build:
|
||||||
|
target: production
|
||||||
|
restart: always
|
||||||
|
deploy:
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpus: "1.0"
|
||||||
|
memory: 512M
|
||||||
|
```
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Development (auto-loads override)
|
||||||
|
docker compose up
|
||||||
|
|
||||||
|
# Production
|
||||||
|
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
## Networking
|
||||||
|
|
||||||
|
### Service Discovery
|
||||||
|
|
||||||
|
Services in the same Compose network resolve by service name:
|
||||||
|
```
|
||||||
|
# From "app" container:
|
||||||
|
postgres://postgres:postgres@db:5432/app_dev # "db" resolves to the db container
|
||||||
|
redis://redis:6379/0 # "redis" resolves to the redis container
|
||||||
|
```
|
||||||
|
|
||||||
|
### Custom Networks
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
services:
|
||||||
|
frontend:
|
||||||
|
networks:
|
||||||
|
- frontend-net
|
||||||
|
|
||||||
|
api:
|
||||||
|
networks:
|
||||||
|
- frontend-net
|
||||||
|
- backend-net
|
||||||
|
|
||||||
|
db:
|
||||||
|
networks:
|
||||||
|
- backend-net # Only reachable from api, not frontend
|
||||||
|
|
||||||
|
networks:
|
||||||
|
frontend-net:
|
||||||
|
backend-net:
|
||||||
|
```
|
||||||
|
|
||||||
|
### Exposing Only What's Needed
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
services:
|
||||||
|
db:
|
||||||
|
ports:
|
||||||
|
- "127.0.0.1:5432:5432" # Only accessible from host, not network
|
||||||
|
# Omit ports entirely in production -- accessible only within Docker network
|
||||||
|
```
|
||||||
|
|
||||||
|
## Volume Strategies
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
volumes:
|
||||||
|
# Named volume: persists across container restarts, managed by Docker
|
||||||
|
pgdata:
|
||||||
|
|
||||||
|
# Bind mount: maps host directory into container (for development)
|
||||||
|
# - ./src:/app/src
|
||||||
|
|
||||||
|
# Anonymous volume: preserves container-generated content from bind mount override
|
||||||
|
# - /app/node_modules
|
||||||
|
```
|
||||||
|
|
||||||
|
### Common Patterns

```yaml
services:
  app:
    volumes:
      - .:/app              # Source code (bind mount for hot reload)
      - /app/node_modules   # Protect container's node_modules from host
      - /app/.next          # Protect build cache

  db:
    volumes:
      - pgdata:/var/lib/postgresql/data                          # Persistent data
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql  # Init scripts
```

## Container Security

### Dockerfile Hardening

```dockerfile
# 1. Use specific tags (never :latest)
FROM node:22.12-alpine3.20

# 2. Run as non-root
RUN addgroup -g 1001 -S app && adduser -S app -u 1001
USER app

# 3. Drop capabilities (in compose)
# 4. Read-only root filesystem where possible
# 5. No secrets in image layers
```

### Compose Security

```yaml
services:
  app:
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp
      - /app/.cache
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE  # Only if binding to ports < 1024
```

### Secret Management

```yaml
# GOOD: Use environment variables (injected at runtime)
services:
  app:
    env_file:
      - .env  # Never commit .env to git
    environment:
      - API_KEY  # Inherits from host environment

# GOOD: Docker secrets (Swarm mode)
secrets:
  db_password:
    file: ./secrets/db_password.txt

services:
  db:
    secrets:
      - db_password

# BAD: Hardcoded in image
# ENV API_KEY=sk-proj-xxxxx  # NEVER DO THIS
```

## .dockerignore

```
node_modules
.git
.env
.env.*
dist
coverage
*.log
.next
.cache
docker-compose*.yml
Dockerfile*
README.md
tests/
```

## Debugging

### Common Commands

```bash
# View logs
docker compose logs -f app               # Follow app logs
docker compose logs --tail=50 db         # Last 50 lines from db

# Execute commands in running container
docker compose exec app sh               # Shell into app
docker compose exec db psql -U postgres  # Connect to postgres

# Inspect
docker compose ps                        # Running services
docker compose top                       # Processes in each container
docker stats                             # Resource usage

# Rebuild
docker compose up --build                # Rebuild images
docker compose build --no-cache app      # Force full rebuild

# Clean up
docker compose down                      # Stop and remove containers
docker compose down -v                   # Also remove volumes (DESTRUCTIVE)
docker system prune                      # Remove unused images/containers
```

### Debugging Network Issues

```bash
# Check DNS resolution inside container
docker compose exec app nslookup db

# Check connectivity
docker compose exec app wget -qO- http://api:3000/health

# Inspect network
docker network ls
docker network inspect <project>_default
```

## Anti-Patterns

```
# BAD: Using docker compose in production without orchestration
# Use Kubernetes, ECS, or Docker Swarm for production multi-container workloads

# BAD: Storing data in containers without volumes
# Containers are ephemeral -- all data lost on restart without volumes

# BAD: Running as root
# Always create and use a non-root user

# BAD: Using :latest tag
# Pin to specific versions for reproducible builds

# BAD: One giant container with all services
# Separate concerns: one process per container

# BAD: Putting secrets in docker-compose.yml
# Use .env files (gitignored) or Docker secrets
```

## When to Use This Skill

- Setting up Docker Compose for local development
- Designing multi-container architectures
- Troubleshooting container issues
- Reviewing Dockerfiles for security
- Implementing container best practices
328
.kiro/skills/e2e-testing/SKILL.md
Normal file
---
name: e2e-testing
description: >
  Playwright E2E testing patterns, Page Object Model, configuration, CI/CD integration, artifact management, and flaky test strategies.
metadata:
  origin: ECC
---

# E2E Testing Patterns

Comprehensive Playwright patterns for building stable, fast, and maintainable E2E test suites.
## Test File Organization

```
tests/
├── e2e/
│   ├── auth/
│   │   ├── login.spec.ts
│   │   ├── logout.spec.ts
│   │   └── register.spec.ts
│   ├── features/
│   │   ├── browse.spec.ts
│   │   ├── search.spec.ts
│   │   └── create.spec.ts
│   └── api/
│       └── endpoints.spec.ts
├── fixtures/
│   ├── auth.ts
│   └── data.ts
└── playwright.config.ts
```

## Page Object Model (POM)

```typescript
import { Page, Locator } from '@playwright/test'

export class ItemsPage {
  readonly page: Page
  readonly searchInput: Locator
  readonly itemCards: Locator
  readonly createButton: Locator

  constructor(page: Page) {
    this.page = page
    this.searchInput = page.locator('[data-testid="search-input"]')
    this.itemCards = page.locator('[data-testid="item-card"]')
    this.createButton = page.locator('[data-testid="create-btn"]')
  }

  async goto() {
    await this.page.goto('/items')
    await this.page.waitForLoadState('networkidle')
  }

  async search(query: string) {
    await this.searchInput.fill(query)
    await this.page.waitForResponse(resp => resp.url().includes('/api/search'))
    await this.page.waitForLoadState('networkidle')
  }

  async getItemCount() {
    return await this.itemCards.count()
  }
}
```

## Test Structure

```typescript
import { test, expect } from '@playwright/test'
import { ItemsPage } from '../../pages/ItemsPage'

test.describe('Item Search', () => {
  let itemsPage: ItemsPage

  test.beforeEach(async ({ page }) => {
    itemsPage = new ItemsPage(page)
    await itemsPage.goto()
  })

  test('should search by keyword', async ({ page }) => {
    await itemsPage.search('test')

    const count = await itemsPage.getItemCount()
    expect(count).toBeGreaterThan(0)

    await expect(itemsPage.itemCards.first()).toContainText(/test/i)
    await page.screenshot({ path: 'artifacts/search-results.png' })
  })

  test('should handle no results', async ({ page }) => {
    await itemsPage.search('xyznonexistent123')

    await expect(page.locator('[data-testid="no-results"]')).toBeVisible()
    expect(await itemsPage.getItemCount()).toBe(0)
  })
})
```

## Playwright Configuration

```typescript
import { defineConfig, devices } from '@playwright/test'

export default defineConfig({
  testDir: './tests/e2e',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  reporter: [
    ['html', { outputFolder: 'playwright-report' }],
    ['junit', { outputFile: 'playwright-results.xml' }],
    ['json', { outputFile: 'playwright-results.json' }]
  ],
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    actionTimeout: 10000,
    navigationTimeout: 30000,
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
  ],
  webServer: {
    command: 'npm run dev',
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI,
    timeout: 120000,
  },
})
```

## Flaky Test Patterns

### Quarantine

```typescript
test('flaky: complex search', async ({ page }) => {
  test.fixme(true, 'Flaky - Issue #123')
  // test code...
})

test('conditional skip', async ({ page }) => {
  test.skip(!!process.env.CI, 'Flaky in CI - Issue #123')  // coerce to boolean
  // test code...
})
```

### Identify Flakiness

```bash
npx playwright test tests/search.spec.ts --repeat-each=10
npx playwright test tests/search.spec.ts --retries=3
```

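Repeat runs make flakiness quantifiable: under `--repeat-each`, a test that sometimes passes and sometimes fails is flaky, all-pass is stable, and all-fail is a genuine failure. A small standalone helper (hypothetical, not part of Playwright) that classifies repeated-run outcomes:

```typescript
// Classify a test from the outcomes of repeated runs (true = pass).
// A mix of passes and failures indicates flakiness rather than a real bug.
type Verdict = 'stable' | 'flaky' | 'failing'

export function classifyRuns(outcomes: boolean[]): Verdict {
  const passes = outcomes.filter(Boolean).length
  if (passes === outcomes.length) return 'stable'
  if (passes === 0) return 'failing'
  return 'flaky'
}
```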
### Common Causes & Fixes

**Race conditions:**
```typescript
// Bad: assumes element is ready
await page.click('[data-testid="button"]')

// Good: auto-wait locator
await page.locator('[data-testid="button"]').click()
```

**Network timing:**
```typescript
// Bad: arbitrary timeout
await page.waitForTimeout(5000)

// Good: wait for specific condition
await page.waitForResponse(resp => resp.url().includes('/api/data'))
```

**Animation timing:**
```typescript
// Bad: click during animation
await page.click('[data-testid="menu-item"]')

// Good: wait for stability
await page.locator('[data-testid="menu-item"]').waitFor({ state: 'visible' })
await page.waitForLoadState('networkidle')
await page.locator('[data-testid="menu-item"]').click()
```

## Artifact Management

### Screenshots

```typescript
await page.screenshot({ path: 'artifacts/after-login.png' })
await page.screenshot({ path: 'artifacts/full-page.png', fullPage: true })
await page.locator('[data-testid="chart"]').screenshot({ path: 'artifacts/chart.png' })
```

### Traces

```typescript
await context.tracing.start({
  screenshots: true,
  snapshots: true,
})
// ... test actions ...
await context.tracing.stop({ path: 'artifacts/trace.zip' })
```

### Video

```typescript
// In playwright.config.ts -- videos are written to the test output directory
use: {
  video: 'retain-on-failure',
},
outputDir: 'artifacts/videos/',
```

## CI/CD Integration

```yaml
# .github/workflows/e2e.yml
name: E2E Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
        env:
          BASE_URL: ${{ vars.STAGING_URL }}
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30
```

## Test Report Template

```markdown
# E2E Test Report

**Date:** YYYY-MM-DD HH:MM
**Duration:** Xm Ys
**Status:** PASSING / FAILING

## Summary
- Total: X | Passed: Y (Z%) | Failed: A | Flaky: B | Skipped: C

## Failed Tests

### test-name
**File:** `tests/e2e/feature.spec.ts:45`
**Error:** Expected element to be visible
**Screenshot:** artifacts/failed.png
**Recommended Fix:** [description]

## Artifacts
- HTML Report: playwright-report/index.html
- Screenshots: artifacts/*.png
- Videos: artifacts/videos/*.webm
- Traces: artifacts/*.zip
```

## Wallet / Web3 Testing

```typescript
test('wallet connection', async ({ page, context }) => {
  // Mock wallet provider before any page script runs
  await context.addInitScript(() => {
    ;(window as any).ethereum = {
      isMetaMask: true,
      request: async ({ method }: { method: string }) => {
        if (method === 'eth_requestAccounts')
          return ['0x1234567890123456789012345678901234567890']
        if (method === 'eth_chainId') return '0x1'
      }
    }
  })

  await page.goto('/')
  await page.locator('[data-testid="connect-wallet"]').click()
  await expect(page.locator('[data-testid="wallet-address"]')).toContainText('0x1234')
})
```

## Financial / Critical Flow Testing

```typescript
test('trade execution', async ({ page }) => {
  // Skip on production -- real money
  test.skip(process.env.NODE_ENV === 'production', 'Skip on production')

  await page.goto('/markets/test-market')
  await page.locator('[data-testid="position-yes"]').click()
  await page.locator('[data-testid="trade-amount"]').fill('1.0')

  // Verify preview
  const preview = page.locator('[data-testid="trade-preview"]')
  await expect(preview).toContainText('1.0')

  // Confirm and wait for blockchain
  await page.locator('[data-testid="confirm-trade"]').click()
  await page.waitForResponse(
    resp => resp.url().includes('/api/trade') && resp.status() === 200,
    { timeout: 30000 }
  )

  await expect(page.locator('[data-testid="trade-success"]')).toBeVisible()
})
```

644
.kiro/skills/frontend-patterns/SKILL.md
Normal file
---
name: frontend-patterns
description: >
  Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices.
metadata:
  origin: ECC
---

# Frontend Development Patterns

Modern frontend patterns for React, Next.js, and performant user interfaces.
## When to Activate

- Building React components (composition, props, rendering)
- Managing state (useState, useReducer, Zustand, Context)
- Implementing data fetching (SWR, React Query, server components)
- Optimizing performance (memoization, virtualization, code splitting)
- Working with forms (validation, controlled inputs, Zod schemas)
- Handling client-side routing and navigation
- Building accessible, responsive UI patterns
## Component Patterns

### Composition Over Inheritance

```typescript
// ✅ GOOD: Component composition
interface CardProps {
  children: React.ReactNode
  variant?: 'default' | 'outlined'
}

export function Card({ children, variant = 'default' }: CardProps) {
  return <div className={`card card-${variant}`}>{children}</div>
}

export function CardHeader({ children }: { children: React.ReactNode }) {
  return <div className="card-header">{children}</div>
}

export function CardBody({ children }: { children: React.ReactNode }) {
  return <div className="card-body">{children}</div>
}

// Usage
<Card>
  <CardHeader>Title</CardHeader>
  <CardBody>Content</CardBody>
</Card>
```

### Compound Components

```typescript
interface TabsContextValue {
  activeTab: string
  setActiveTab: (tab: string) => void
}

const TabsContext = createContext<TabsContextValue | undefined>(undefined)

export function Tabs({ children, defaultTab }: {
  children: React.ReactNode
  defaultTab: string
}) {
  const [activeTab, setActiveTab] = useState(defaultTab)

  return (
    <TabsContext.Provider value={{ activeTab, setActiveTab }}>
      {children}
    </TabsContext.Provider>
  )
}

export function TabList({ children }: { children: React.ReactNode }) {
  return <div className="tab-list">{children}</div>
}

export function Tab({ id, children }: { id: string, children: React.ReactNode }) {
  const context = useContext(TabsContext)
  if (!context) throw new Error('Tab must be used within Tabs')

  return (
    <button
      className={context.activeTab === id ? 'active' : ''}
      onClick={() => context.setActiveTab(id)}
    >
      {children}
    </button>
  )
}

// Usage
<Tabs defaultTab="overview">
  <TabList>
    <Tab id="overview">Overview</Tab>
    <Tab id="details">Details</Tab>
  </TabList>
</Tabs>
```

### Render Props Pattern

```typescript
interface DataLoaderProps<T> {
  url: string
  children: (data: T | null, loading: boolean, error: Error | null) => React.ReactNode
}

export function DataLoader<T>({ url, children }: DataLoaderProps<T>) {
  const [data, setData] = useState<T | null>(null)
  const [loading, setLoading] = useState(true)
  const [error, setError] = useState<Error | null>(null)

  useEffect(() => {
    fetch(url)
      .then(res => res.json())
      .then(setData)
      .catch(setError)
      .finally(() => setLoading(false))
  }, [url])

  return <>{children(data, loading, error)}</>
}

// Usage
<DataLoader<Market[]> url="/api/markets">
  {(markets, loading, error) => {
    if (loading) return <Spinner />
    if (error) return <Error error={error} />
    return <MarketList markets={markets!} />
  }}
</DataLoader>
```

## Custom Hooks Patterns

### State Management Hook

```typescript
export function useToggle(initialValue = false): [boolean, () => void] {
  const [value, setValue] = useState(initialValue)

  const toggle = useCallback(() => {
    setValue(v => !v)
  }, [])

  return [value, toggle]
}

// Usage
const [isOpen, toggleOpen] = useToggle()
```

### Async Data Fetching Hook

```typescript
interface UseQueryOptions<T> {
  onSuccess?: (data: T) => void
  onError?: (error: Error) => void
  enabled?: boolean
}

export function useQuery<T>(
  key: string,
  fetcher: () => Promise<T>,
  options?: UseQueryOptions<T>
) {
  const [data, setData] = useState<T | null>(null)
  const [error, setError] = useState<Error | null>(null)
  const [loading, setLoading] = useState(false)

  const refetch = useCallback(async () => {
    setLoading(true)
    setError(null)

    try {
      const result = await fetcher()
      setData(result)
      options?.onSuccess?.(result)
    } catch (err) {
      const error = err as Error
      setError(error)
      options?.onError?.(error)
    } finally {
      setLoading(false)
    }
  }, [fetcher, options])

  useEffect(() => {
    if (options?.enabled !== false) {
      refetch()
    }
  }, [key, refetch, options?.enabled])

  return { data, error, loading, refetch }
}

// Usage
const { data: markets, loading, error, refetch } = useQuery(
  'markets',
  () => fetch('/api/markets').then(r => r.json()),
  {
    onSuccess: data => console.log('Fetched', data.length, 'markets'),
    onError: err => console.error('Failed:', err)
  }
)
```

### Debounce Hook

```typescript
export function useDebounce<T>(value: T, delay: number): T {
  const [debouncedValue, setDebouncedValue] = useState<T>(value)

  useEffect(() => {
    const handler = setTimeout(() => {
      setDebouncedValue(value)
    }, delay)

    return () => clearTimeout(handler)
  }, [value, delay])

  return debouncedValue
}

// Usage
const [searchQuery, setSearchQuery] = useState('')
const debouncedQuery = useDebounce(searchQuery, 500)

useEffect(() => {
  if (debouncedQuery) {
    performSearch(debouncedQuery)
  }
}, [debouncedQuery])
```

## State Management Patterns

### Context + Reducer Pattern

```typescript
interface State {
  markets: Market[]
  selectedMarket: Market | null
  loading: boolean
}

type Action =
  | { type: 'SET_MARKETS'; payload: Market[] }
  | { type: 'SELECT_MARKET'; payload: Market }
  | { type: 'SET_LOADING'; payload: boolean }

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'SET_MARKETS':
      return { ...state, markets: action.payload }
    case 'SELECT_MARKET':
      return { ...state, selectedMarket: action.payload }
    case 'SET_LOADING':
      return { ...state, loading: action.payload }
    default:
      return state
  }
}

const MarketContext = createContext<{
  state: State
  dispatch: Dispatch<Action>
} | undefined>(undefined)

export function MarketProvider({ children }: { children: React.ReactNode }) {
  const [state, dispatch] = useReducer(reducer, {
    markets: [],
    selectedMarket: null,
    loading: false
  })

  return (
    <MarketContext.Provider value={{ state, dispatch }}>
      {children}
    </MarketContext.Provider>
  )
}

export function useMarkets() {
  const context = useContext(MarketContext)
  if (!context) throw new Error('useMarkets must be used within MarketProvider')
  return context
}
```

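Zustand, listed under "When to Activate", takes a different approach from Context: one external store outside the React tree. A dependency-free sketch of that core idea (`createStore` here is illustrative, not the Zustand API):

```typescript
// Minimal external-store sketch: one state object, a setter that merges
// partial updates, and subscribers notified on every change.
type Listener = () => void

export function createStore<T extends object>(initial: T) {
  let state = initial
  const listeners = new Set<Listener>()

  return {
    getState: () => state,
    setState: (partial: Partial<T>) => {
      state = { ...state, ...partial }
      listeners.forEach(l => l())
    },
    subscribe: (listener: Listener) => {
      listeners.add(listener)
      return () => listeners.delete(listener)  // unsubscribe
    },
  }
}
```

In React 18, such a store plugs into components via `useSyncExternalStore(store.subscribe, store.getState)`.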
## Performance Optimization

### Memoization

```typescript
// ✅ useMemo for expensive computations (copy before sorting -- .sort mutates)
const sortedMarkets = useMemo(() => {
  return [...markets].sort((a, b) => b.volume - a.volume)
}, [markets])

// ✅ useCallback for functions passed to children
const handleSearch = useCallback((query: string) => {
  setSearchQuery(query)
}, [])

// ✅ React.memo for pure components
export const MarketCard = React.memo<MarketCardProps>(({ market }) => {
  return (
    <div className="market-card">
      <h3>{market.name}</h3>
      <p>{market.description}</p>
    </div>
  )
})
```

### Code Splitting & Lazy Loading

```typescript
import { lazy, Suspense } from 'react'

// ✅ Lazy load heavy components
const HeavyChart = lazy(() => import('./HeavyChart'))
const ThreeJsBackground = lazy(() => import('./ThreeJsBackground'))

export function Dashboard() {
  return (
    <div>
      <Suspense fallback={<ChartSkeleton />}>
        <HeavyChart data={data} />
      </Suspense>

      <Suspense fallback={null}>
        <ThreeJsBackground />
      </Suspense>
    </div>
  )
}
```

### Virtualization for Long Lists

```typescript
import { useVirtualizer } from '@tanstack/react-virtual'

export function VirtualMarketList({ markets }: { markets: Market[] }) {
  const parentRef = useRef<HTMLDivElement>(null)

  const virtualizer = useVirtualizer({
    count: markets.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 100, // Estimated row height
    overscan: 5 // Extra items to render
  })

  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      <div
        style={{
          height: `${virtualizer.getTotalSize()}px`,
          position: 'relative'
        }}
      >
        {virtualizer.getVirtualItems().map(virtualRow => (
          <div
            key={virtualRow.index}
            style={{
              position: 'absolute',
              top: 0,
              left: 0,
              width: '100%',
              height: `${virtualRow.size}px`,
              transform: `translateY(${virtualRow.start}px)`
            }}
          >
            <MarketCard market={markets[virtualRow.index]} />
          </div>
        ))}
      </div>
    </div>
  )
}
```

## Form Handling Patterns

### Controlled Form with Validation

```typescript
interface FormData {
  name: string
  description: string
  endDate: string
}

interface FormErrors {
  name?: string
  description?: string
  endDate?: string
}

export function CreateMarketForm() {
  const [formData, setFormData] = useState<FormData>({
    name: '',
    description: '',
    endDate: ''
  })

  const [errors, setErrors] = useState<FormErrors>({})

  const validate = (): boolean => {
    const newErrors: FormErrors = {}

    if (!formData.name.trim()) {
      newErrors.name = 'Name is required'
    } else if (formData.name.length > 200) {
      newErrors.name = 'Name must be under 200 characters'
    }

    if (!formData.description.trim()) {
      newErrors.description = 'Description is required'
    }

    if (!formData.endDate) {
      newErrors.endDate = 'End date is required'
    }

    setErrors(newErrors)
    return Object.keys(newErrors).length === 0
  }

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault()

    if (!validate()) return

    try {
      await createMarket(formData)
      // Success handling
    } catch (error) {
      // Error handling
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <input
        value={formData.name}
        onChange={e => setFormData(prev => ({ ...prev, name: e.target.value }))}
        placeholder="Market name"
      />
      {errors.name && <span className="error">{errors.name}</span>}

      {/* Other fields */}

      <button type="submit">Create Market</button>
    </form>
  )
}
```

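Validation logic like the `validate` above is easier to unit test when extracted from the component as a pure function that returns the error map instead of writing it to state. A sketch of that refactor (names mirror the form above but are otherwise hypothetical):

```typescript
// Pure, component-free version of the form's validate(): takes the form
// values, returns the error map, touches no React state.
interface MarketFormData { name: string; description: string; endDate: string }
interface MarketFormErrors { name?: string; description?: string; endDate?: string }

export function validateMarketForm(data: MarketFormData): MarketFormErrors {
  const errors: MarketFormErrors = {}
  if (!data.name.trim()) errors.name = 'Name is required'
  else if (data.name.length > 200) errors.name = 'Name must be under 200 characters'
  if (!data.description.trim()) errors.description = 'Description is required'
  if (!data.endDate) errors.endDate = 'End date is required'
  return errors
}
```

The component can then call `setErrors(validateMarketForm(formData))` while tests exercise the function directly, with no rendering.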
## Error Boundary Pattern

```typescript
interface ErrorBoundaryState {
  hasError: boolean
  error: Error | null
}

export class ErrorBoundary extends React.Component<
  { children: React.ReactNode },
  ErrorBoundaryState
> {
  state: ErrorBoundaryState = {
    hasError: false,
    error: null
  }

  static getDerivedStateFromError(error: Error): ErrorBoundaryState {
    return { hasError: true, error }
  }

  componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
    console.error('Error boundary caught:', error, errorInfo)
  }

  render() {
    if (this.state.hasError) {
      return (
        <div className="error-fallback">
          <h2>Something went wrong</h2>
          <p>{this.state.error?.message}</p>
          <button onClick={() => this.setState({ hasError: false, error: null })}>
            Try again
          </button>
        </div>
      )
    }

    return this.props.children
  }
}

// Usage
<ErrorBoundary>
  <App />
</ErrorBoundary>
```

## Animation Patterns
|
||||||
|
|
||||||
|
### Framer Motion Animations
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { motion, AnimatePresence } from 'framer-motion'
|
||||||
|
|
||||||
|
// ✅ List animations
|
||||||
|
export function AnimatedMarketList({ markets }: { markets: Market[] }) {
|
||||||
|
return (
|
||||||
|
<AnimatePresence>
|
||||||
|
{markets.map(market => (
|
||||||
|
<motion.div
|
||||||
|
key={market.id}
|
||||||
|
initial={{ opacity: 0, y: 20 }}
|
||||||
|
animate={{ opacity: 1, y: 0 }}
|
||||||
|
exit={{ opacity: 0, y: -20 }}
|
||||||
|
transition={{ duration: 0.3 }}
|
||||||
|
>
|
||||||
|
<MarketCard market={market} />
|
||||||
|
</motion.div>
|
||||||
|
))}
|
||||||
|
</AnimatePresence>
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ✅ Modal animations
|
||||||
|
export function Modal({ isOpen, onClose, children }: ModalProps) {
|
||||||
|
return (
|
||||||
|
<AnimatePresence>
|
||||||
|
{isOpen && (
|
||||||
|
<>
|
||||||
|
<motion.div
|
||||||
|
className="modal-overlay"
|
||||||
|
initial={{ opacity: 0 }}
|
||||||
|
animate={{ opacity: 1 }}
|
||||||
|
exit={{ opacity: 0 }}
|
||||||
|
onClick={onClose}
|
||||||
|
/>
|
||||||
|
<motion.div
|
||||||
|
className="modal-content"
|
||||||
|
initial={{ opacity: 0, scale: 0.9, y: 20 }}
|
||||||
|
animate={{ opacity: 1, scale: 1, y: 0 }}
|
||||||
|
exit={{ opacity: 0, scale: 0.9, y: 20 }}
|
||||||
|
>
|
||||||
|
{children}
|
||||||
|
</motion.div>
|
||||||
|
</>
|
||||||
|
)}
|
||||||
|
</AnimatePresence>
|
||||||
|
)
|
||||||
|
}
|
||||||
|
```

## Accessibility Patterns

### Keyboard Navigation

```typescript
export function Dropdown({ options, onSelect }: DropdownProps) {
  const [isOpen, setIsOpen] = useState(false)
  const [activeIndex, setActiveIndex] = useState(0)

  const handleKeyDown = (e: React.KeyboardEvent) => {
    switch (e.key) {
      case 'ArrowDown':
        e.preventDefault()
        setActiveIndex(i => Math.min(i + 1, options.length - 1))
        break
      case 'ArrowUp':
        e.preventDefault()
        setActiveIndex(i => Math.max(i - 1, 0))
        break
      case 'Enter':
        e.preventDefault()
        onSelect(options[activeIndex])
        setIsOpen(false)
        break
      case 'Escape':
        setIsOpen(false)
        break
    }
  }

  return (
    <div
      role="combobox"
      aria-expanded={isOpen}
      aria-haspopup="listbox"
      onKeyDown={handleKeyDown}
    >
      {/* Dropdown implementation */}
    </div>
  )
}
```

### Focus Management

```typescript
export function Modal({ isOpen, onClose, children }: ModalProps) {
  const modalRef = useRef<HTMLDivElement>(null)
  const previousFocusRef = useRef<HTMLElement | null>(null)

  useEffect(() => {
    if (isOpen) {
      // Save currently focused element
      previousFocusRef.current = document.activeElement as HTMLElement

      // Focus modal
      modalRef.current?.focus()
    } else {
      // Restore focus when closing
      previousFocusRef.current?.focus()
    }
  }, [isOpen])

  return isOpen ? (
    <div
      ref={modalRef}
      role="dialog"
      aria-modal="true"
      tabIndex={-1}
      onKeyDown={e => e.key === 'Escape' && onClose()}
    >
      {children}
    </div>
  ) : null
}
```

**Remember**: Modern frontend patterns enable maintainable, performant user interfaces. Choose patterns that fit your project complexity.

227 .kiro/skills/golang-patterns/SKILL.md Normal file

@@ -0,0 +1,227 @@
---
name: golang-patterns
description: >
  Go-specific design patterns and best practices including functional options,
  small interfaces, dependency injection, concurrency patterns, error handling,
  and package organization. Use when working with Go code to apply idiomatic
  Go patterns.
metadata:
  origin: ECC
globs: ["**/*.go", "**/go.mod", "**/go.sum"]
---

# Go Patterns

> This skill provides comprehensive Go patterns extending common design principles with Go-specific idioms.

## Functional Options

Use the functional options pattern for flexible constructor configuration:

```go
type Option func(*Server)

func WithPort(port int) Option {
    return func(s *Server) { s.port = port }
}

func NewServer(opts ...Option) *Server {
    s := &Server{port: 8080}
    for _, opt := range opts {
        opt(s)
    }
    return s
}
```

**Benefits:**
- Backward compatible API evolution
- Optional parameters with defaults
- Self-documenting configuration
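
The pattern in use — a self-contained sketch where the `Server` fields, the `WithHost` second option, and the default values are illustrative, not from any particular codebase:

```go
package main

import "fmt"

type Server struct {
    port int
    host string
}

type Option func(*Server)

func WithPort(port int) Option {
    return func(s *Server) { s.port = port }
}

// WithHost is a hypothetical second option, showing how the API grows
// without breaking existing callers.
func WithHost(host string) Option {
    return func(s *Server) { s.host = host }
}

func NewServer(opts ...Option) *Server {
    s := &Server{port: 8080, host: "localhost"} // defaults
    for _, opt := range opts {
        opt(s)
    }
    return s
}

func main() {
    fmt.Println(NewServer().port)               // default applies
    fmt.Println(NewServer(WithPort(9090)).port) // caller overrides
}
```

Callers that pass no options keep today's behavior; new options slot in without a signature change.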

## Small Interfaces

Define interfaces where they are used, not where they are implemented.

**Principle:** Accept interfaces, return structs

```go
// Good: Small, focused interface defined at point of use
type UserStore interface {
    GetUser(id string) (*User, error)
}

func ProcessUser(store UserStore, id string) error {
    user, err := store.GetUser(id)
    if err != nil {
        return err
    }
    _ = user // ... process user
    return nil
}
```

**Benefits:**
- Easier testing and mocking
- Loose coupling
- Clear dependencies

## Dependency Injection

Use constructor functions to inject dependencies:

```go
func NewUserService(repo UserRepository, logger Logger) *UserService {
    return &UserService{
        repo:   repo,
        logger: logger,
    }
}
```

**Pattern:**
- Constructor functions (New* prefix)
- Explicit dependencies as parameters
- Return concrete types
- Validate dependencies in constructor

## Concurrency Patterns

### Worker Pool

```go
func workerPool(jobs <-chan Job, results chan<- Result, workers int) {
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs {
                results <- processJob(job)
            }
        }()
    }
    wg.Wait()
    close(results)
}
```
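
To see the pool end to end, here is a self-contained variant that swaps the abstract `Job`/`Result`/`processJob` for ints and a squaring function — a sketch of the same pattern, not the original API:

```go
package main

import (
    "fmt"
    "sync"
)

// squareWorkerPool is a concrete instance of the worker-pool pattern:
// each worker drains the jobs channel and writes squared values to results.
func squareWorkerPool(jobs <-chan int, results chan<- int, workers int) {
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs {
                results <- job * job
            }
        }()
    }
    wg.Wait()
    close(results)
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)
    for i := 1; i <= 5; i++ {
        jobs <- i
    }
    close(jobs) // workers exit once the jobs channel is drained

    squareWorkerPool(jobs, results, 3)

    sum := 0
    for r := range results {
        sum += r
    }
    fmt.Println(sum) // 1+4+9+16+25 = 55
}
```

Closing `jobs` is what lets the `range` loops (and therefore the workers) terminate; `close(results)` after `wg.Wait()` lets the consumer's `range` terminate.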

### Context Propagation

Always pass context as first parameter:

```go
func FetchUser(ctx context.Context, id string) (*User, error) {
    // Check context cancellation
    select {
    case <-ctx.Done():
        return nil, ctx.Err()
    default:
    }
    // ... fetch logic
}
```

## Error Handling

### Error Wrapping

```go
if err != nil {
    return fmt.Errorf("failed to fetch user %s: %w", id, err)
}
```

### Custom Errors

```go
type ValidationError struct {
    Field string
    Msg   string
}

func (e *ValidationError) Error() string {
    return fmt.Sprintf("%s: %s", e.Field, e.Msg)
}
```

### Sentinel Errors

```go
var (
    ErrNotFound = errors.New("not found")
    ErrInvalid  = errors.New("invalid input")
)

// Check with errors.Is
if errors.Is(err, ErrNotFound) {
    // handle not found
}
```

## Package Organization

### Structure

```
project/
├── cmd/                  # Main applications
│   └── server/
│       └── main.go
├── internal/             # Private application code
│   ├── domain/           # Business logic
│   ├── handler/          # HTTP handlers
│   └── repository/       # Data access
└── pkg/                  # Public libraries
```

### Naming Conventions

- Package names: lowercase, single word
- Avoid stutter: `user.User` not `user.UserModel`
- Use `internal/` for private code
- Keep `main` package minimal

## Testing Patterns

### Table-Driven Tests

```go
func TestValidate(t *testing.T) {
    tests := []struct {
        name    string
        input   string
        wantErr bool
    }{
        {"valid", "test@example.com", false},
        {"invalid", "not-an-email", true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := Validate(tt.input)
            if (err != nil) != tt.wantErr {
                t.Errorf("got error %v, wantErr %v", err, tt.wantErr)
            }
        })
    }
}
```

### Test Helpers

```go
func testDB(t *testing.T) *sql.DB {
    t.Helper()
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        t.Fatalf("failed to open test db: %v", err)
    }
    t.Cleanup(func() { db.Close() })
    return db
}
```

## When to Use This Skill

- Designing Go APIs and packages
- Implementing concurrent systems
- Structuring Go projects
- Writing idiomatic Go code
- Refactoring Go codebases
332 .kiro/skills/golang-testing/SKILL.md Normal file

@@ -0,0 +1,332 @@
---
name: golang-testing
description: >
  Go testing best practices including table-driven tests, test helpers,
  benchmarking, race detection, coverage analysis, and integration testing
  patterns. Use when writing or improving Go tests.
metadata:
  origin: ECC
globs: ["**/*.go", "**/go.mod", "**/go.sum"]
---

# Go Testing

> This skill provides comprehensive Go testing patterns extending common testing principles with Go-specific idioms.

## Testing Framework

Use the standard `go test` with **table-driven tests** as the primary pattern.

### Table-Driven Tests

The idiomatic Go testing pattern:

```go
func TestValidateEmail(t *testing.T) {
    tests := []struct {
        name    string
        email   string
        wantErr bool
    }{
        {
            name:    "valid email",
            email:   "user@example.com",
            wantErr: false,
        },
        {
            name:    "missing @",
            email:   "userexample.com",
            wantErr: true,
        },
        {
            name:    "empty string",
            email:   "",
            wantErr: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := ValidateEmail(tt.email)
            if (err != nil) != tt.wantErr {
                t.Errorf("ValidateEmail(%q) error = %v, wantErr %v",
                    tt.email, err, tt.wantErr)
            }
        })
    }
}
```

**Benefits:**
- Easy to add new test cases
- Clear test case documentation
- Parallel test execution with `t.Parallel()`
- Isolated subtests with `t.Run()`

## Test Helpers

Use `t.Helper()` to mark helper functions:

```go
func assertNoError(t *testing.T, err error) {
    t.Helper()
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
}

func assertEqual(t *testing.T, got, want interface{}) {
    t.Helper()
    if !reflect.DeepEqual(got, want) {
        t.Errorf("got %v, want %v", got, want)
    }
}
```

**Benefits:**
- Correct line numbers in test failures
- Reusable test utilities
- Cleaner test code

## Test Fixtures

Use `t.Cleanup()` for resource cleanup:

```go
func testDB(t *testing.T) *sql.DB {
    t.Helper()

    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        t.Fatalf("failed to open test db: %v", err)
    }

    // Cleanup runs after test completes
    t.Cleanup(func() {
        if err := db.Close(); err != nil {
            t.Errorf("failed to close db: %v", err)
        }
    })

    return db
}

func TestUserRepository(t *testing.T) {
    db := testDB(t)
    repo := NewUserRepository(db)
    // ... test logic
}
```

## Race Detection

Always run tests with the `-race` flag to detect data races:

```bash
go test -race ./...
```

**In CI/CD:**
```yaml
- name: Test with race detector
  run: go test -race -timeout 5m ./...
```

**Why:**
- Detects concurrent access bugs
- Prevents production race conditions
- Minimal performance overhead in tests

## Coverage Analysis

### Basic Coverage

```bash
go test -cover ./...
```

### Detailed Coverage Report

```bash
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
```

### Coverage Thresholds

```bash
# Fail if coverage below 80% (rough grep heuristic; matches 0.0%-79.9%)
go test -cover ./... | grep -E 'coverage: [0-7]?[0-9]\.[0-9]%' && exit 1
```

## Benchmarking

```go
func BenchmarkValidateEmail(b *testing.B) {
    email := "user@example.com"

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        ValidateEmail(email)
    }
}
```

**Run benchmarks:**
```bash
go test -bench=. -benchmem
```

**Compare benchmarks:**
```bash
go test -bench=. -benchmem > old.txt
# make changes
go test -bench=. -benchmem > new.txt
benchstat old.txt new.txt
```

## Mocking

### Interface-Based Mocking

```go
type UserRepository interface {
    GetUser(id string) (*User, error)
}

type mockUserRepository struct {
    users map[string]*User
    err   error
}

func (m *mockUserRepository) GetUser(id string) (*User, error) {
    if m.err != nil {
        return nil, m.err
    }
    return m.users[id], nil
}

func TestUserService(t *testing.T) {
    mock := &mockUserRepository{
        users: map[string]*User{
            "1": {ID: "1", Name: "Alice"},
        },
    }

    service := NewUserService(mock)
    // ... test logic
}
```

## Integration Tests

### Build Tags

```go
//go:build integration
// +build integration

package user_test

func TestUserRepository_Integration(t *testing.T) {
    // ... integration test
}
```

**Run integration tests:**
```bash
go test -tags=integration ./...
```

### Test Containers

```go
func TestWithPostgres(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }

    // Setup test container
    ctx := context.Background()
    container, err := testcontainers.GenericContainer(ctx, ...)
    assertNoError(t, err)

    t.Cleanup(func() {
        container.Terminate(ctx)
    })

    // ... test logic
}
```

## Test Organization

### File Structure

```
package/
├── user.go
├── user_test.go              # Unit tests
├── user_integration_test.go  # Integration tests
└── testdata/                 # Test fixtures
    └── users.json
```

### Package Naming

```go
// Black-box testing (external perspective)
package user_test

// White-box testing (internal access)
package user
```

## Common Patterns

### Testing HTTP Handlers

```go
func TestUserHandler(t *testing.T) {
    req := httptest.NewRequest("GET", "/users/1", nil)
    rec := httptest.NewRecorder()

    handler := NewUserHandler(mockRepo)
    handler.ServeHTTP(rec, req)

    assertEqual(t, rec.Code, http.StatusOK)
}
```

### Testing with Context

```go
func TestWithTimeout(t *testing.T) {
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()

    err := SlowOperation(ctx)
    if !errors.Is(err, context.DeadlineExceeded) {
        t.Errorf("expected timeout error, got %v", err)
    }
}
```

## Best Practices

1. **Use `t.Parallel()`** for independent tests
2. **Use `testing.Short()`** to skip slow tests
3. **Use `t.TempDir()`** for temporary directories
4. **Use `t.Setenv()`** for environment variables
5. **Avoid `init()`** in test files
6. **Keep tests focused** - one behavior per test
7. **Use meaningful test names** - describe what's being tested

## When to Use This Skill

- Writing new Go tests
- Improving test coverage
- Setting up test infrastructure
- Debugging flaky tests
- Optimizing test performance
- Implementing integration tests
161 .kiro/skills/postgres-patterns/SKILL.md Normal file

@@ -0,0 +1,161 @@
---
name: postgres-patterns
description: >
  PostgreSQL database patterns for query optimization, schema design, indexing,
  and security. Quick reference for common patterns, index types, data types,
  and anti-pattern detection. Based on Supabase best practices.
metadata:
  origin: ECC
  credit: Supabase team (MIT License)
---

# PostgreSQL Patterns

Quick reference for PostgreSQL best practices. For detailed guidance, use the `database-reviewer` agent.

## When to Activate

- Writing SQL queries or migrations
- Designing database schemas
- Troubleshooting slow queries
- Implementing Row Level Security
- Setting up connection pooling

## Quick Reference

### Index Cheat Sheet

| Query Pattern | Index Type | Example |
|--------------|------------|---------|
| `WHERE col = value` | B-tree (default) | `CREATE INDEX idx ON t (col)` |
| `WHERE col > value` | B-tree | `CREATE INDEX idx ON t (col)` |
| `WHERE a = x AND b > y` | Composite | `CREATE INDEX idx ON t (a, b)` |
| `WHERE jsonb @> '{}'` | GIN | `CREATE INDEX idx ON t USING gin (col)` |
| `WHERE tsv @@ query` | GIN | `CREATE INDEX idx ON t USING gin (col)` |
| Time-series ranges | BRIN | `CREATE INDEX idx ON t USING brin (col)` |

### Data Type Quick Reference

| Use Case | Correct Type | Avoid |
|----------|-------------|-------|
| IDs | `bigint` | `int`, random UUID |
| Strings | `text` | `varchar(255)` |
| Timestamps | `timestamptz` | `timestamp` |
| Money | `numeric(10,2)` | `float` |
| Flags | `boolean` | `varchar`, `int` |

### Common Patterns

**Composite Index Order:**
```sql
-- Equality columns first, then range columns
CREATE INDEX idx ON orders (status, created_at);
-- Works for: WHERE status = 'pending' AND created_at > '2024-01-01'
```

**Covering Index:**
```sql
CREATE INDEX idx ON users (email) INCLUDE (name, created_at);
-- Avoids table lookup for SELECT email, name, created_at
```

**Partial Index:**
```sql
CREATE INDEX idx ON users (email) WHERE deleted_at IS NULL;
-- Smaller index, only includes active users
```

**RLS Policy (Optimized):**
```sql
CREATE POLICY policy ON orders
  USING ((SELECT auth.uid()) = user_id);  -- Wrap in SELECT!
```

**UPSERT:**
```sql
INSERT INTO settings (user_id, key, value)
VALUES (123, 'theme', 'dark')
ON CONFLICT (user_id, key)
DO UPDATE SET value = EXCLUDED.value;
```

**Cursor Pagination:**
```sql
SELECT * FROM products WHERE id > $last_id ORDER BY id LIMIT 20;
-- O(1) vs OFFSET which is O(n)
```

**Queue Processing:**
```sql
UPDATE jobs SET status = 'processing'
WHERE id = (
  SELECT id FROM jobs WHERE status = 'pending'
  ORDER BY created_at LIMIT 1
  FOR UPDATE SKIP LOCKED
) RETURNING *;
```

### Anti-Pattern Detection

```sql
-- Find unindexed foreign keys
SELECT conrelid::regclass, a.attname
FROM pg_constraint c
JOIN pg_attribute a ON a.attrelid = c.conrelid AND a.attnum = ANY(c.conkey)
WHERE c.contype = 'f'
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.conrelid AND a.attnum = ANY(i.indkey)
  );

-- Find slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
WHERE mean_exec_time > 100
ORDER BY mean_exec_time DESC;

-- Check table bloat
SELECT relname, n_dead_tup, last_vacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
```

### Configuration Template

```sql
-- Connection limits (adjust for RAM)
ALTER SYSTEM SET max_connections = 100;
ALTER SYSTEM SET work_mem = '8MB';

-- Timeouts
ALTER SYSTEM SET idle_in_transaction_session_timeout = '30s';
ALTER SYSTEM SET statement_timeout = '30s';

-- Monitoring
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Security defaults
REVOKE ALL ON SCHEMA public FROM public;

SELECT pg_reload_conf();
```

## Related

- Agent: `database-reviewer` - Full database review workflow
- Skill: `backend-patterns` - API and backend patterns
- Skill: `database-migrations` - Safe schema changes

## When to Use This Skill

- Writing SQL queries
- Designing database schemas
- Optimizing query performance
- Implementing Row Level Security
- Troubleshooting database issues
- Setting up PostgreSQL configuration

---

*Based on Supabase Agent Skills (credit: Supabase team) (MIT License)*

428 .kiro/skills/python-patterns/SKILL.md Normal file

@@ -0,0 +1,428 @@
---
name: python-patterns
description: >
  Python-specific design patterns and best practices including protocols,
  dataclasses, context managers, decorators, async/await, type hints, and
  package organization. Use when working with Python code to apply Pythonic
  patterns.
metadata:
  origin: ECC
globs: ["**/*.py", "**/*.pyi"]
---

# Python Patterns

> This skill provides comprehensive Python patterns extending common design principles with Python-specific idioms.

## Protocol (Duck Typing)

Use `Protocol` for structural subtyping (duck typing with type hints):

```python
from typing import Protocol

class Repository(Protocol):
    def find_by_id(self, id: str) -> dict | None: ...
    def save(self, entity: dict) -> dict: ...

# Any class with these methods satisfies the protocol
class UserRepository:
    def find_by_id(self, id: str) -> dict | None:
        # implementation
        pass

    def save(self, entity: dict) -> dict:
        # implementation
        pass

def process_entity(repo: Repository, id: str) -> None:
    entity = repo.find_by_id(id)
    # ... process
```

**Benefits:**
- Type safety without inheritance
- Flexible, loosely coupled code
- Easy testing and mocking

## Dataclasses as DTOs

Use `dataclass` for data transfer objects and value objects:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CreateUserRequest:
    name: str
    email: str
    age: Optional[int] = None
    tags: list[str] = field(default_factory=list)

@dataclass(frozen=True)
class User:
    """Immutable user entity"""
    id: str
    name: str
    email: str
```

**Features:**
- Auto-generated `__init__`, `__repr__`, `__eq__`
- `frozen=True` for immutability
- `field()` for complex defaults
- Type hints for validation

## Context Managers

Use context managers (`with` statement) for resource management:

```python
from contextlib import contextmanager
from typing import Generator

@contextmanager
def database_transaction(db) -> Generator[None, None, None]:
    """Context manager for database transactions"""
    try:
        yield
        db.commit()
    except Exception:
        db.rollback()
        raise

# Usage
with database_transaction(db):
    db.execute("INSERT INTO users ...")
```

**Class-based context manager:**

```python
class FileProcessor:
    def __init__(self, filename: str):
        self.filename = filename
        self.file = None

    def __enter__(self):
        self.file = open(self.filename, 'r')
        return self.file

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.file:
            self.file.close()
        return False  # Don't suppress exceptions
```

## Generators

Use generators for lazy evaluation and memory-efficient iteration:

```python
def read_large_file(filename: str):
    """Generator for reading large files line by line"""
    with open(filename, 'r') as f:
        for line in f:
            yield line.strip()

# Memory-efficient processing
for line in read_large_file('huge.txt'):
    process(line)
```

**Generator expressions:**

```python
# Instead of list comprehension
squares = (x**2 for x in range(1000000))  # Lazy evaluation

# Pipeline pattern
numbers = (x for x in range(100))
evens = (x for x in numbers if x % 2 == 0)
squares = (x**2 for x in evens)
```

## Decorators

### Function Decorators

```python
from functools import wraps
import time

def timing(func):
    """Decorator to measure execution time"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(f"{func.__name__} took {end - start:.2f}s")
        return result
    return wrapper

@timing
def slow_function():
    time.sleep(1)
```

### Class Decorators

```python
def singleton(cls):
    """Decorator to make a class a singleton"""
    instances = {}

    @wraps(cls)
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance

@singleton
class Config:
    pass
```
|
||||||
|
|
||||||
|
## Async/Await
|
||||||
|
|
||||||
|
### Async Functions
|
||||||
|
|
||||||
|
```python
|
||||||
|
import asyncio
|
||||||
|
from typing import List
|
||||||
|
|
||||||
|
async def fetch_user(user_id: str) -> dict:
|
||||||
|
"""Async function for I/O-bound operations"""
|
||||||
|
await asyncio.sleep(0.1) # Simulate network call
|
||||||
|
return {"id": user_id, "name": "Alice"}
|
||||||
|
|
||||||
|
async def fetch_all_users(user_ids: List[str]) -> List[dict]:
|
||||||
|
"""Concurrent execution with asyncio.gather"""
|
||||||
|
tasks = [fetch_user(uid) for uid in user_ids]
|
||||||
|
return await asyncio.gather(*tasks)
|
||||||
|
|
||||||
|
# Run async code
|
||||||
|
asyncio.run(fetch_all_users(["1", "2", "3"]))
|
||||||
|
```
|
||||||
|
|
||||||
|
### Async Context Managers
|
||||||
|
|
||||||
|
```python
|
||||||
|
class AsyncDatabase:
|
||||||
|
async def __aenter__(self):
|
||||||
|
await self.connect()
|
||||||
|
return self
|
||||||
|
|
||||||
|
async def __aexit__(self, exc_type, exc_val, exc_tb):
|
||||||
|
await self.disconnect()
|
||||||
|
|
||||||
|
async with AsyncDatabase() as db:
|
||||||
|
await db.query("SELECT * FROM users")
|
||||||
|
```
|
||||||
|
|
||||||
|
## Type Hints
|
||||||
|
|
||||||
|
### Advanced Type Hints
|
||||||
|
|
||||||
|
```python
|
||||||
|
from typing import TypeVar, Generic, Callable, ParamSpec, Concatenate
|
||||||
|
|
||||||
|
T = TypeVar('T')
|
||||||
|
P = ParamSpec('P')
|
||||||
|
|
||||||
|
class Repository(Generic[T]):
|
||||||
|
"""Generic repository pattern"""
|
||||||
|
def __init__(self, entity_type: type[T]):
|
||||||
|
self.entity_type = entity_type
|
||||||
|
|
||||||
|
def find_by_id(self, id: str) -> T | None:
|
||||||
|
# implementation
|
||||||
|
pass
|
||||||
|
|
||||||
|
# Type-safe decorator
|
||||||
|
def log_call(func: Callable[P, T]) -> Callable[P, T]:
|
||||||
|
@wraps(func)
|
||||||
|
def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
|
||||||
|
print(f"Calling {func.__name__}")
|
||||||
|
return func(*args, **kwargs)
|
||||||
|
return wrapper
|
||||||
|
```
### Union Types (Python 3.10+)

```python
def process(value: str | int | None) -> str:
    match value:
        case str():
            return value.upper()
        case int():
            return str(value)
        case None:
            return "empty"
```

## Dependency Injection

### Constructor Injection

```python
class UserService:
    def __init__(
        self,
        repository: Repository,
        logger: Logger,
        cache: Cache | None = None
    ):
        self.repository = repository
        self.logger = logger
        self.cache = cache

    def get_user(self, user_id: str) -> User | None:
        if self.cache:
            cached = self.cache.get(user_id)
            if cached:
                return cached

        user = self.repository.find_by_id(user_id)
        if user and self.cache:
            self.cache.set(user_id, user)

        return user
```
## Package Organization

### Project Structure

```
project/
├── src/
│   └── mypackage/
│       ├── __init__.py
│       ├── domain/            # Business logic
│       │   ├── __init__.py
│       │   └── models.py
│       ├── services/          # Application services
│       │   ├── __init__.py
│       │   └── user_service.py
│       └── infrastructure/    # External dependencies
│           ├── __init__.py
│           └── database.py
├── tests/
│   ├── unit/
│   └── integration/
├── pyproject.toml
└── README.md
```

### Module Exports

```python
# __init__.py
from .models import User, Product
from .services import UserService

__all__ = ['User', 'Product', 'UserService']
```
## Error Handling

### Custom Exceptions

```python
class DomainError(Exception):
    """Base exception for domain errors"""
    pass

class UserNotFoundError(DomainError):
    """Raised when user is not found"""
    def __init__(self, user_id: str):
        self.user_id = user_id
        super().__init__(f"User {user_id} not found")

class ValidationError(DomainError):
    """Raised when validation fails"""
    def __init__(self, field: str, message: str):
        self.field = field
        self.message = message
        super().__init__(f"{field}: {message}")
```

### Exception Groups (Python 3.11+)

```python
try:
    # Multiple operations
    pass
except* ValueError as eg:
    # Handle all ValueError instances
    for exc in eg.exceptions:
        print(f"ValueError: {exc}")
except* TypeError as eg:
    # Handle all TypeError instances
    for exc in eg.exceptions:
        print(f"TypeError: {exc}")
```
## Property Decorators

```python
class User:
    def __init__(self, name: str):
        self._name = name
        self._email = None

    @property
    def name(self) -> str:
        """Read-only property"""
        return self._name

    @property
    def email(self) -> str | None:
        return self._email

    @email.setter
    def email(self, value: str) -> None:
        if '@' not in value:
            raise ValueError("Invalid email")
        self._email = value
```

## Functional Programming

### Higher-Order Functions

```python
from functools import reduce
from typing import Callable

def pipe(*functions: Callable) -> Callable:
    """Compose functions left to right"""
    def inner(arg):
        return reduce(lambda x, f: f(x), functions, arg)
    return inner

# Usage
process = pipe(
    str.strip,
    str.lower,
    lambda s: s.replace(' ', '_')
)
result = process("  Hello World  ")  # "hello_world"
```
## When to Use This Skill

- Designing Python APIs and packages
- Implementing async/concurrent systems
- Structuring Python projects
- Writing Pythonic code
- Refactoring Python codebases
- Type-safe Python development
497
.kiro/skills/python-testing/SKILL.md
Normal file
@@ -0,0 +1,497 @@
---
name: python-testing
description: >
  Python testing best practices using pytest including fixtures, parametrization,
  mocking, coverage analysis, async testing, and test organization. Use when
  writing or improving Python tests.
metadata:
  origin: ECC
  globs: ["**/*.py", "**/*.pyi"]
---

# Python Testing

> This skill provides comprehensive Python testing patterns using pytest as the primary testing framework.

## Testing Framework

Use **pytest** as the testing framework for its powerful features and clean syntax.

### Basic Test Structure

```python
def test_user_creation():
    """Test that a user can be created with valid data"""
    user = User(name="Alice", email="alice@example.com")

    assert user.name == "Alice"
    assert user.email == "alice@example.com"
    assert user.is_active is True
```
### Test Discovery

pytest automatically discovers tests following these conventions:
- Files: `test_*.py` or `*_test.py`
- Functions: `test_*`
- Classes: `Test*` (without `__init__`)
- Methods: `test_*`
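For example, in a hypothetical `test_math.py`, only the names matching these conventions are collected:

```python
# test_math.py (hypothetical): only convention-matching names are collected.

def test_addition():          # collected: function name starts with test_
    assert 1 + 1 == 2

def build_sample_data():      # NOT collected: helper without the test_ prefix
    return [1, 2, 3]

class TestCalculator:         # collected: class starts with Test, no __init__
    def test_multiply(self):  # collected: method starts with test_
        assert 2 * 3 == 6
```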

## Fixtures

Fixtures provide reusable test setup and teardown:

```python
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

@pytest.fixture
def db_session():
    """Provide a database session for tests"""
    engine = create_engine("sqlite:///:memory:")
    Session = sessionmaker(bind=engine)
    session = Session()

    # Setup
    Base.metadata.create_all(engine)

    yield session

    # Teardown
    session.close()

def test_user_repository(db_session):
    """Test using the db_session fixture"""
    repo = UserRepository(db_session)
    user = repo.create(name="Alice", email="alice@example.com")

    assert user.id is not None
```

### Fixture Scopes

```python
@pytest.fixture(scope="function")  # Default: per test
def user():
    return User(name="Alice")

@pytest.fixture(scope="class")  # Per test class
def database():
    db = Database()
    db.connect()
    yield db
    db.disconnect()

@pytest.fixture(scope="module")  # Per module
def app():
    return create_app()

@pytest.fixture(scope="session")  # Once per test session
def config():
    return load_config()
```

### Fixture Dependencies

```python
@pytest.fixture
def database():
    db = Database()
    db.connect()
    yield db
    db.disconnect()

@pytest.fixture
def user_repository(database):
    """Fixture that depends on database fixture"""
    return UserRepository(database)

def test_create_user(user_repository):
    user = user_repository.create(name="Alice")
    assert user.id is not None
```
## Parametrization

Test multiple inputs with `@pytest.mark.parametrize`:

```python
import pytest

@pytest.mark.parametrize("email,expected", [
    ("user@example.com", True),
    ("invalid-email", False),
    ("", False),
    ("user@", False),
    ("@example.com", False),
])
def test_email_validation(email, expected):
    result = validate_email(email)
    assert result == expected
```

### Multiple Parameters

```python
@pytest.mark.parametrize("name,age,valid", [
    ("Alice", 25, True),
    ("Bob", 17, False),
    ("", 25, False),
    ("Charlie", -1, False),
])
def test_user_validation(name, age, valid):
    result = validate_user(name, age)
    assert result == valid
```

### Parametrize with IDs

```python
@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
], ids=["lowercase", "another_lowercase"])
def test_uppercase(input, expected):
    assert input.upper() == expected
```
## Test Markers

Use markers for test categorization and selective execution:

```python
import sys

import pytest

@pytest.mark.unit
def test_calculate_total():
    """Fast unit test"""
    assert calculate_total([1, 2, 3]) == 6

@pytest.mark.integration
def test_database_connection():
    """Slower integration test"""
    db = Database()
    assert db.connect() is True

@pytest.mark.slow
def test_large_dataset():
    """Very slow test"""
    process_million_records()

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    pass

@pytest.mark.skipif(sys.version_info < (3, 10), reason="Requires Python 3.10+")
def test_new_syntax():
    pass
```

**Run specific markers:**
```bash
pytest -m unit                   # Run only unit tests
pytest -m "not slow"             # Skip slow tests
pytest -m "unit or integration"  # Run unit OR integration
```
## Mocking

### Using unittest.mock

```python
from unittest.mock import Mock, patch, MagicMock

def test_user_service_with_mock():
    """Test with mock repository"""
    mock_repo = Mock()
    mock_repo.find_by_id.return_value = User(id="1", name="Alice")

    service = UserService(mock_repo)
    user = service.get_user("1")

    assert user.name == "Alice"
    mock_repo.find_by_id.assert_called_once_with("1")

@patch('myapp.services.EmailService')
def test_send_notification(mock_email_service):
    """Test with patched dependency"""
    service = NotificationService()
    service.send("user@example.com", "Hello")

    mock_email_service.send.assert_called_once()
```

### pytest-mock Plugin

```python
def test_with_mocker(mocker):
    """Using pytest-mock plugin"""
    mock_repo = mocker.Mock()
    mock_repo.find_by_id.return_value = User(id="1", name="Alice")

    service = UserService(mock_repo)
    user = service.get_user("1")

    assert user.name == "Alice"
```
## Coverage Analysis

### Basic Coverage

```bash
pytest --cov=src --cov-report=term-missing
```

### HTML Coverage Report

```bash
pytest --cov=src --cov-report=html
open htmlcov/index.html
```

### Coverage Configuration

```toml
# pyproject.toml
[tool.pytest.ini_options]
addopts = """
  --cov=src
  --cov-report=term-missing
  --cov-report=html
  --cov-fail-under=80
"""
```

### Branch Coverage

```bash
pytest --cov=src --cov-branch
```
## Async Testing

### Testing Async Functions

```python
import pytest

@pytest.mark.asyncio
async def test_async_fetch_user():
    """Test async function"""
    user = await fetch_user("1")
    assert user.name == "Alice"

@pytest.fixture
async def async_client():
    """Async fixture"""
    client = AsyncClient()
    await client.connect()
    yield client
    await client.disconnect()

@pytest.mark.asyncio
async def test_with_async_fixture(async_client):
    result = await async_client.get("/users/1")
    assert result.status == 200
```
## Test Organization

### Directory Structure

```
tests/
├── unit/
│   ├── test_models.py
│   ├── test_services.py
│   └── test_utils.py
├── integration/
│   ├── test_database.py
│   └── test_api.py
├── conftest.py        # Shared fixtures
└── pytest.ini         # Configuration
```

### conftest.py

```python
# tests/conftest.py
import pytest

@pytest.fixture(scope="session")
def app():
    """Application fixture available to all tests"""
    return create_app()

@pytest.fixture
def client(app):
    """Test client fixture"""
    return app.test_client()

def pytest_configure(config):
    """Register custom markers"""
    config.addinivalue_line("markers", "unit: Unit tests")
    config.addinivalue_line("markers", "integration: Integration tests")
    config.addinivalue_line("markers", "slow: Slow tests")
```
## Assertions

### Basic Assertions

```python
def test_assertions():
    assert value == expected
    assert value != other
    assert value > 0
    assert value in collection
    assert isinstance(value, str)
```

### pytest Assertions with Better Error Messages

```python
def test_with_context():
    """pytest provides detailed assertion introspection"""
    result = calculate_total([1, 2, 3])
    expected = 6

    # On failure, pytest shows the values, e.g.: assert 5 == 6
    assert result == expected
```

### Custom Assertion Messages

```python
def test_with_message():
    result = process_data(input_data)
    assert result.is_valid, f"Expected valid result, got errors: {result.errors}"
```

### Approximate Comparisons

```python
import pytest

def test_float_comparison():
    result = 0.1 + 0.2
    assert result == pytest.approx(0.3)

    # With tolerance
    assert result == pytest.approx(0.3, abs=1e-9)
```
## Exception Testing

```python
import pytest

def test_raises_exception():
    """Test that function raises expected exception"""
    with pytest.raises(ValueError):
        validate_age(-1)

def test_exception_message():
    """Test exception message"""
    with pytest.raises(ValueError, match="Age must be positive"):
        validate_age(-1)

def test_exception_details():
    """Capture and inspect exception"""
    with pytest.raises(ValidationError) as exc_info:
        validate_user(name="", age=-1)

    assert "name" in exc_info.value.errors
    assert "age" in exc_info.value.errors
```

## Test Helpers

```python
# tests/helpers.py
def assert_user_equal(actual, expected):
    """Custom assertion helper"""
    assert actual.id == expected.id
    assert actual.name == expected.name
    assert actual.email == expected.email

def create_test_user(**kwargs):
    """Test data factory"""
    defaults = {
        "name": "Test User",
        "email": "test@example.com",
        "age": 25,
    }
    defaults.update(kwargs)
    return User(**defaults)
```
## Property-Based Testing

Using `hypothesis` for property-based testing:

```python
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    """Test that addition is commutative"""
    assert a + b == b + a

@given(st.lists(st.integers()))
def test_sort_idempotent(lst):
    """Test that sorting twice gives same result"""
    sorted_once = sorted(lst)
    sorted_twice = sorted(sorted_once)
    assert sorted_once == sorted_twice
```
## Best Practices

1. **One assertion per test** (when possible)
2. **Use descriptive test names** - describe what's being tested
3. **Arrange-Act-Assert pattern** - clear test structure
4. **Use fixtures for setup** - avoid duplication
5. **Mock external dependencies** - keep tests fast and isolated
6. **Test edge cases** - empty inputs, None, boundaries
7. **Use parametrize** - test multiple scenarios efficiently
8. **Keep tests independent** - no shared state between tests
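The Arrange-Act-Assert pattern from point 3 can be sketched as follows (the `Cart` class is hypothetical, purely for illustration):

```python
class Cart:
    """Hypothetical object under test."""
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_cart_total():
    # Arrange: build the object and its inputs
    cart = Cart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)

    # Act: exercise exactly one behavior
    total = cart.total()

    # Assert: verify the observable outcome
    assert total == 12.5
```

Keeping the three phases visually separated makes it obvious what a test covers and which phase fails.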

## Running Tests

```bash
# Run all tests
pytest

# Run specific file
pytest tests/test_user.py

# Run specific test
pytest tests/test_user.py::test_create_user

# Run with verbose output
pytest -v

# Run with output capture disabled
pytest -s

# Run in parallel (requires pytest-xdist)
pytest -n auto

# Run only failed tests from last run
pytest --lf

# Run failed tests first
pytest --ff
```

## When to Use This Skill

- Writing new Python tests
- Improving test coverage
- Setting up pytest infrastructure
- Debugging flaky tests
- Implementing integration tests
- Testing async Python code
173
.kiro/skills/search-first/SKILL.md
Normal file
@@ -0,0 +1,173 @@
---
name: search-first
description: >
  Research-before-coding workflow. Search for existing tools, libraries, and
  patterns before writing custom code. Systematizes the "search for existing
  solutions before implementing" approach. Use when starting new features or
  adding functionality.
metadata:
  origin: ECC
---

# /search-first — Research Before You Code

Systematizes the "search for existing solutions before implementing" workflow.

## Trigger

Use this skill when:
- Starting a new feature that likely has existing solutions
- Adding a dependency or integration
- The user asks "add X functionality" and you're about to write code
- Before creating a new utility, helper, or abstraction

## Workflow

```
┌─────────────────────────────────────────────┐
│ 1. NEED ANALYSIS                            │
│    Define what functionality is needed      │
│    Identify language/framework constraints  │
├─────────────────────────────────────────────┤
│ 2. PARALLEL SEARCH (researcher agent)       │
│   ┌──────────┐ ┌──────────┐ ┌──────────┐    │
│   │ npm /    │ │ MCP /    │ │ GitHub / │    │
│   │ PyPI     │ │ Skills   │ │ Web      │    │
│   └──────────┘ └──────────┘ └──────────┘    │
├─────────────────────────────────────────────┤
│ 3. EVALUATE                                 │
│    Score candidates (functionality, maint,  │
│    community, docs, license, deps)          │
├─────────────────────────────────────────────┤
│ 4. DECIDE                                   │
│   ┌─────────┐ ┌──────────┐ ┌─────────┐      │
│   │ Adopt   │ │ Extend   │ │ Build   │      │
│   │ as-is   │ │ /Wrap    │ │ Custom  │      │
│   └─────────┘ └──────────┘ └─────────┘      │
├─────────────────────────────────────────────┤
│ 5. IMPLEMENT                                │
│    Install package / Configure MCP /        │
│    Write minimal custom code                │
└─────────────────────────────────────────────┘
```

## Decision Matrix

| Signal | Action |
|--------|--------|
| Exact match, well-maintained, MIT/Apache | **Adopt** — install and use directly |
| Partial match, good foundation | **Extend** — install + write thin wrapper |
| Multiple weak matches | **Compose** — combine 2-3 small packages |
| Nothing suitable found | **Build** — write custom, but informed by research |
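A minimal sketch of how step 3's scoring might feed this matrix (the criteria weights and ratings below are illustrative assumptions, not part of the workflow spec):

```python
# Hypothetical weighted scoring for library candidates (weights are assumptions).
CRITERIA = {
    "functionality": 3,  # does it solve the need?
    "maintenance": 2,    # recent releases, issues triaged
    "community": 1,      # adoption, downloads
    "docs": 1,           # documentation quality
    "license": 2,        # MIT/Apache vs. restrictive
    "deps": 1,           # dependency footprint
}

def score(ratings: dict[str, int]) -> float:
    """Weighted average of 0-10 per-criterion ratings."""
    total_weight = sum(CRITERIA.values())
    return sum(CRITERIA[k] * ratings.get(k, 0) for k in CRITERIA) / total_weight

candidate = {"functionality": 9, "maintenance": 8, "community": 7,
             "docs": 8, "license": 10, "deps": 9}
print(round(score(candidate), 1))  # 8.7, a strong "Adopt" signal
```

A score near the top of the scale maps to **Adopt**; middling scores suggest **Extend** or **Compose**; uniformly low scores mean **Build**.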

## How to Use

### Quick Mode (inline)

Before writing a utility or adding functionality, mentally run through:

0. Does this already exist in the repo? → Search through relevant modules/tests first
1. Is this a common problem? → Search npm/PyPI
2. Is there an MCP for this? → Check MCP configuration and search
3. Is there a skill for this? → Check available skills
4. Is there a GitHub implementation/template? → Run GitHub code search for maintained OSS before writing net-new code

### Full Mode (subagent)

For non-trivial functionality, delegate to a research-focused subagent:

```
Invoke subagent with prompt:
"Research existing tools for: [DESCRIPTION]
Language/framework: [LANG]
Constraints: [ANY]

Search: npm/PyPI, MCP servers, skills, GitHub
Return: Structured comparison with recommendation"
```

## Search Shortcuts by Category

### Development Tooling
- Linting → `eslint`, `ruff`, `textlint`, `markdownlint`
- Formatting → `prettier`, `black`, `gofmt`
- Testing → `jest`, `pytest`, `go test`
- Pre-commit → `husky`, `lint-staged`, `pre-commit`

### AI/LLM Integration
- Claude SDK → Check for latest docs
- Prompt management → Check MCP servers
- Document processing → `unstructured`, `pdfplumber`, `mammoth`

### Data & APIs
- HTTP clients → `httpx` (Python), `ky`/`got` (Node)
- Validation → `zod` (TS), `pydantic` (Python)
- Database → Check for MCP servers first

### Content & Publishing
- Markdown processing → `remark`, `unified`, `markdown-it`
- Image optimization → `sharp`, `imagemin`

## Integration Points

### With planner agent
The planner should invoke researcher before Phase 1 (Architecture Review):
- Researcher identifies available tools
- Planner incorporates them into the implementation plan
- Avoids "reinventing the wheel" in the plan

### With architect agent
The architect should consult researcher for:
- Technology stack decisions
- Integration pattern discovery
- Existing reference architectures

### With iterative-retrieval skill
Combine for progressive discovery:
- Cycle 1: Broad search (npm, PyPI, MCP)
- Cycle 2: Evaluate top candidates in detail
- Cycle 3: Test compatibility with project constraints

## Examples

### Example 1: "Add dead link checking"
```
Need: Check markdown files for broken links
Search: npm "markdown dead link checker"
Found: textlint-rule-no-dead-link (score: 9/10)
Action: ADOPT — npm install textlint-rule-no-dead-link
Result: Zero custom code, battle-tested solution
```

### Example 2: "Add HTTP client wrapper"
```
Need: Resilient HTTP client with retries and timeout handling
Search: npm "http client retry", PyPI "httpx retry"
Found: got (Node) with retry plugin, httpx (Python) with built-in retry
Action: ADOPT — use got/httpx directly with retry config
Result: Zero custom code, production-proven libraries
```

### Example 3: "Add config file linter"
```
Need: Validate project config files against a schema
Search: npm "config linter schema", "json schema validator cli"
Found: ajv-cli (score: 8/10)
Action: ADOPT + EXTEND — install ajv-cli, write project-specific schema
Result: 1 package + 1 schema file, no custom validation logic
```

## Anti-Patterns

- **Jumping to code**: Writing a utility without checking if one exists
- **Ignoring MCP**: Not checking if an MCP server already provides the capability
- **Over-customizing**: Wrapping a library so heavily it loses its benefits
- **Dependency bloat**: Installing a massive package for one small feature

## When to Use This Skill

- Starting new features
- Adding dependencies or integrations
- Before writing utilities or helpers
- When evaluating technology choices
- Planning architecture decisions
497
.kiro/skills/security-review/SKILL.md
Normal file
@@ -0,0 +1,497 @@
---
name: security-review
description: >
  Use this skill when adding authentication, handling user input, working with
  secrets, creating API endpoints, or implementing payment/sensitive features.
  Provides a comprehensive security checklist and patterns.
metadata:
  origin: ECC
---

# Security Review Skill

This skill ensures all code follows security best practices and identifies potential vulnerabilities.

## When to Activate

- Implementing authentication or authorization
- Handling user input or file uploads
- Creating new API endpoints
- Working with secrets or credentials
- Implementing payment features
- Storing or transmitting sensitive data
- Integrating third-party APIs
|
||||||
|
## Security Checklist
|
||||||
|
|
||||||
|
### 1. Secrets Management
|
||||||
|
|
||||||
|
#### ❌ NEVER Do This
|
||||||
|
```typescript
|
||||||
|
const apiKey = "sk-proj-xxxxx" // Hardcoded secret
|
||||||
|
const dbPassword = "password123" // In source code
|
||||||
|
```
|
||||||
|
|
||||||
|
#### ✅ ALWAYS Do This
|
||||||
|
```typescript
|
||||||
|
const apiKey = process.env.OPENAI_API_KEY
|
||||||
|
const dbUrl = process.env.DATABASE_URL
|
||||||
|
|
||||||
|
// Verify secrets exist
|
||||||
|
if (!apiKey) {
|
||||||
|
throw new Error('OPENAI_API_KEY not configured')
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Verification Steps
|
||||||
|
- [ ] No hardcoded API keys, tokens, or passwords
|
||||||
|
- [ ] All secrets in environment variables
|
||||||
|
- [ ] `.env.local` in .gitignore
|
||||||
|
- [ ] No secrets in git history
|
||||||
|
- [ ] Production secrets in hosting platform (Vercel, Railway)
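To spot-check the "no secrets in git history" item, a quick grep over tracked files catches the obvious cases (a heuristic sketch; a dedicated scanner such as gitleaks or trufflehog is more thorough):

```shell
# Quick heuristic scan for common secret formats (OpenAI keys, AWS key IDs).
# Run from the repo root; any hit warrants rotating the credential.
git grep -nE 'sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}' -- '*.ts' '*.js' \
  && echo "potential secret found" \
  || echo "no obvious secrets in tracked files"
```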

### 2. Input Validation

#### Always Validate User Input
```typescript
import { z } from 'zod'

// Define validation schema
const CreateUserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(1).max(100),
  age: z.number().int().min(0).max(150)
})

// Validate before processing
export async function createUser(input: unknown) {
  try {
    const validated = CreateUserSchema.parse(input)
    return await db.users.create(validated)
  } catch (error) {
    if (error instanceof z.ZodError) {
      return { success: false, errors: error.errors }
    }
    throw error
  }
}
```

#### File Upload Validation
```typescript
function validateFileUpload(file: File) {
  // Size check (5MB max)
  const maxSize = 5 * 1024 * 1024
  if (file.size > maxSize) {
    throw new Error('File too large (max 5MB)')
  }

  // Type check
  const allowedTypes = ['image/jpeg', 'image/png', 'image/gif']
  if (!allowedTypes.includes(file.type)) {
    throw new Error('Invalid file type')
  }

  // Extension check
  const allowedExtensions = ['.jpg', '.jpeg', '.png', '.gif']
  const extension = file.name.toLowerCase().match(/\.[^.]+$/)?.[0]
  if (!extension || !allowedExtensions.includes(extension)) {
    throw new Error('Invalid file extension')
  }

  return true
}
```

#### Verification Steps
- [ ] All user inputs validated with schemas
- [ ] File uploads restricted (size, type, extension)
- [ ] No direct use of user input in queries
- [ ] Whitelist validation (not blacklist)
- [ ] Error messages don't leak sensitive info

### 3. SQL Injection Prevention

#### ❌ NEVER Concatenate SQL
```typescript
// DANGEROUS - SQL Injection vulnerability
const query = `SELECT * FROM users WHERE email = '${userEmail}'`
await db.query(query)
```

#### ✅ ALWAYS Use Parameterized Queries
```typescript
// Safe - parameterized query
const { data } = await supabase
  .from('users')
  .select('*')
  .eq('email', userEmail)

// Or with raw SQL
await db.query(
  'SELECT * FROM users WHERE email = $1',
  [userEmail]
)
```

#### Verification Steps
- [ ] All database queries use parameterized queries
- [ ] No string concatenation in SQL
- [ ] ORM/query builder used correctly
- [ ] Supabase queries properly sanitized

### 4. Authentication & Authorization

#### JWT Token Handling
```typescript
// ❌ WRONG: localStorage (vulnerable to XSS)
localStorage.setItem('token', token)

// ✅ CORRECT: httpOnly cookies
res.setHeader('Set-Cookie',
  `token=${token}; HttpOnly; Secure; SameSite=Strict; Max-Age=3600`)
```

#### Authorization Checks
```typescript
export async function deleteUser(userId: string, requesterId: string) {
  // ALWAYS verify authorization first
  const requester = await db.users.findUnique({
    where: { id: requesterId }
  })

  if (!requester || requester.role !== 'admin') {
    return NextResponse.json(
      { error: 'Unauthorized' },
      { status: 403 }
    )
  }

  // Proceed with deletion
  await db.users.delete({ where: { id: userId } })
}
```

#### Row Level Security (Supabase)
```sql
-- Enable RLS on all tables
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- Users can only view their own data
CREATE POLICY "Users view own data"
  ON users FOR SELECT
  USING (auth.uid() = id);

-- Users can only update their own data
CREATE POLICY "Users update own data"
  ON users FOR UPDATE
  USING (auth.uid() = id);
```

#### Verification Steps
- [ ] Tokens stored in httpOnly cookies (not localStorage)
- [ ] Authorization checks before sensitive operations
- [ ] Row Level Security enabled in Supabase
- [ ] Role-based access control implemented
- [ ] Session management secure

### 5. XSS Prevention

#### Sanitize HTML
```typescript
import DOMPurify from 'isomorphic-dompurify'

// ALWAYS sanitize user-provided HTML
function renderUserContent(html: string) {
  const clean = DOMPurify.sanitize(html, {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
    ALLOWED_ATTR: []
  })
  return <div dangerouslySetInnerHTML={{ __html: clean }} />
}
```

#### Content Security Policy
```typescript
// next.config.js
const securityHeaders = [
  {
    key: 'Content-Security-Policy',
    value: `
      default-src 'self';
      script-src 'self' 'unsafe-eval' 'unsafe-inline';
      style-src 'self' 'unsafe-inline';
      img-src 'self' data: https:;
      font-src 'self';
      connect-src 'self' https://api.example.com;
    `.replace(/\s{2,}/g, ' ').trim()
  }
]
```

#### Verification Steps
- [ ] User-provided HTML sanitized
- [ ] CSP headers configured
- [ ] No unvalidated dynamic content rendering
- [ ] React's built-in XSS protection used

### 6. CSRF Protection

#### CSRF Tokens
```typescript
import { csrf } from '@/lib/csrf'

export async function POST(request: Request) {
  const token = request.headers.get('X-CSRF-Token')

  if (!csrf.verify(token)) {
    return NextResponse.json(
      { error: 'Invalid CSRF token' },
      { status: 403 }
    )
  }

  // Process request
}
```

#### SameSite Cookies
```typescript
res.setHeader('Set-Cookie',
  `session=${sessionId}; HttpOnly; Secure; SameSite=Strict`)
```

#### Verification Steps
- [ ] CSRF tokens on state-changing operations
- [ ] SameSite=Strict on all cookies
- [ ] Double-submit cookie pattern implemented
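The double-submit item above can be sketched as follows (a minimal illustration assuming Node's `crypto` module; production code should also bind the token to the session):

```typescript
import { randomBytes, timingSafeEqual } from 'crypto'

// Issue a token: set it as a cookie AND return it so the client can echo
// it back in the X-CSRF-Token header on state-changing requests.
function issueCsrfToken(): string {
  return randomBytes(32).toString('hex')
}

// Verify: the cookie value and the header value must match exactly.
function verifyDoubleSubmit(
  cookieToken: string | undefined,
  headerToken: string | undefined
): boolean {
  if (!cookieToken || !headerToken) return false
  const a = Buffer.from(cookieToken)
  const b = Buffer.from(headerToken)
  // Length guard first: timingSafeEqual throws on unequal lengths.
  return a.length === b.length && timingSafeEqual(a, b)
}
```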

### 7. Rate Limiting

#### API Rate Limiting
```typescript
import rateLimit from 'express-rate-limit'

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // 100 requests per window
  message: 'Too many requests'
})

// Apply to routes
app.use('/api/', limiter)
```

#### Expensive Operations
```typescript
// Aggressive rate limiting for searches
const searchLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 10, // 10 requests per minute
  message: 'Too many search requests'
})

app.use('/api/search', searchLimiter)
```

#### Verification Steps
- [ ] Rate limiting on all API endpoints
- [ ] Stricter limits on expensive operations
- [ ] IP-based rate limiting
- [ ] User-based rate limiting (authenticated)
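Where express-rate-limit isn't available, the core idea is a counter per key per time window. A minimal in-memory sketch (hypothetical, single-process only; use a shared store such as Redis so limits hold across instances):

```typescript
// Fixed-window counter keyed by IP (anonymous) or user id (authenticated).
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>()

  constructor(private windowMs: number, private max: number) {}

  // Returns true if the request is allowed under the limit.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key)
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now })
      return true
    }
    entry.count += 1
    return entry.count <= this.max
  }
}
```

In a handler you would key on `req.ip` for anonymous traffic and on the authenticated user id otherwise, returning status 429 when `allow` is false.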

### 8. Sensitive Data Exposure

#### Logging
```typescript
// ❌ WRONG: Logging sensitive data
console.log('User login:', { email, password })
console.log('Payment:', { cardNumber, cvv })

// ✅ CORRECT: Redact sensitive data
console.log('User login:', { email, userId })
console.log('Payment:', { last4: card.last4, userId })
```

#### Error Messages
```typescript
// ❌ WRONG: Exposing internal details
catch (error) {
  return NextResponse.json(
    { error: error.message, stack: error.stack },
    { status: 500 }
  )
}

// ✅ CORRECT: Generic error messages
catch (error) {
  console.error('Internal error:', error)
  return NextResponse.json(
    { error: 'An error occurred. Please try again.' },
    { status: 500 }
  )
}
```

#### Verification Steps
- [ ] No passwords, tokens, or secrets in logs
- [ ] Error messages generic for users
- [ ] Detailed errors only in server logs
- [ ] No stack traces exposed to users
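Redaction can be centralized in a small helper so sensitive keys never reach log output (a sketch; the key list is an assumption to adapt per project):

```typescript
// Keys that must never appear in logs (extend per project).
const SENSITIVE_KEYS = new Set(['password', 'token', 'cardNumber', 'cvv', 'secret'])

function redactForLogging(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) =>
      SENSITIVE_KEYS.has(key) ? [key, '[REDACTED]'] : [key, value]
    )
  )
}

console.log('User login:', redactForLogging({ email: 'a@b.co', password: 'hunter2' }))
```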

### 9. Blockchain Security (Solana)

#### Wallet Verification
```typescript
import nacl from 'tweetnacl'
import { PublicKey } from '@solana/web3.js'

async function verifyWalletOwnership(
  publicKey: string,
  signature: string,
  message: string
) {
  try {
    return nacl.sign.detached.verify(
      new TextEncoder().encode(message),
      Buffer.from(signature, 'base64'),
      new PublicKey(publicKey).toBytes()
    )
  } catch (error) {
    return false
  }
}
```

#### Transaction Verification
```typescript
async function verifyTransaction(transaction: Transaction) {
  // Verify recipient
  if (transaction.to !== expectedRecipient) {
    throw new Error('Invalid recipient')
  }

  // Verify amount
  if (transaction.amount > maxAmount) {
    throw new Error('Amount exceeds limit')
  }

  // Verify user has sufficient balance
  const balance = await getBalance(transaction.from)
  if (balance < transaction.amount) {
    throw new Error('Insufficient balance')
  }

  return true
}
```

#### Verification Steps
- [ ] Wallet signatures verified
- [ ] Transaction details validated
- [ ] Balance checks before transactions
- [ ] No blind transaction signing

### 10. Dependency Security

#### Regular Updates
```bash
# Check for vulnerabilities
npm audit

# Fix automatically fixable issues
npm audit fix

# Update dependencies
npm update

# Check for outdated packages
npm outdated
```

#### Lock Files
```bash
# ALWAYS commit lock files
git add package-lock.json

# Use in CI/CD for reproducible builds
npm ci # Instead of npm install
```

#### Verification Steps
- [ ] Dependencies up to date
- [ ] No known vulnerabilities (npm audit clean)
- [ ] Lock files committed
- [ ] Dependabot enabled on GitHub
- [ ] Regular security updates

## Security Testing

### Automated Security Tests
```typescript
// Test authentication
test('requires authentication', async () => {
  const response = await fetch('/api/protected')
  expect(response.status).toBe(401)
})

// Test authorization
test('requires admin role', async () => {
  const response = await fetch('/api/admin', {
    headers: { Authorization: `Bearer ${userToken}` }
  })
  expect(response.status).toBe(403)
})

// Test input validation
test('rejects invalid input', async () => {
  const response = await fetch('/api/users', {
    method: 'POST',
    body: JSON.stringify({ email: 'not-an-email' })
  })
  expect(response.status).toBe(400)
})

// Test rate limiting
test('enforces rate limits', async () => {
  const requests = Array(101).fill(null).map(() =>
    fetch('/api/endpoint')
  )

  const responses = await Promise.all(requests)
  const tooManyRequests = responses.filter(r => r.status === 429)

  expect(tooManyRequests.length).toBeGreaterThan(0)
})
```

## Pre-Deployment Security Checklist

Before ANY production deployment:

- [ ] **Secrets**: No hardcoded secrets, all in env vars
- [ ] **Input Validation**: All user inputs validated
- [ ] **SQL Injection**: All queries parameterized
- [ ] **XSS**: User content sanitized
- [ ] **CSRF**: Protection enabled
- [ ] **Authentication**: Proper token handling
- [ ] **Authorization**: Role checks in place
- [ ] **Rate Limiting**: Enabled on all endpoints
- [ ] **HTTPS**: Enforced in production
- [ ] **Security Headers**: CSP, X-Frame-Options configured
- [ ] **Error Handling**: No sensitive data in errors
- [ ] **Logging**: No sensitive data logged
- [ ] **Dependencies**: Up to date, no vulnerabilities
- [ ] **Row Level Security**: Enabled in Supabase
- [ ] **CORS**: Properly configured
- [ ] **File Uploads**: Validated (size, type)
- [ ] **Wallet Signatures**: Verified (if blockchain)

## Resources

- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [Next.js Security](https://nextjs.org/docs/security)
- [Supabase Security](https://supabase.com/docs/guides/auth)
- [Web Security Academy](https://portswigger.net/web-security)

---

**Remember**: Security is not optional. One vulnerability can compromise the entire platform. When in doubt, err on the side of caution.

414
.kiro/skills/tdd-workflow/SKILL.md
Normal file
@@ -0,0 +1,414 @@
---
name: tdd-workflow
description: >
  Use this skill when writing new features, fixing bugs, or refactoring code.
  Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.
metadata:
  origin: ECC
  version: "1.0"
---

# Test-Driven Development Workflow

This skill ensures all code development follows TDD principles with comprehensive test coverage.

## When to Activate

- Writing new features or functionality
- Fixing bugs or issues
- Refactoring existing code
- Adding API endpoints
- Creating new components

## Core Principles

### 1. Tests BEFORE Code
ALWAYS write tests first, then implement code to make tests pass.

### 2. Coverage Requirements
- Minimum 80% coverage (unit + integration + E2E)
- All edge cases covered
- Error scenarios tested
- Boundary conditions verified

### 3. Test Types

#### Unit Tests
- Individual functions and utilities
- Component logic
- Pure functions
- Helpers and utilities

#### Integration Tests
- API endpoints
- Database operations
- Service interactions
- External API calls

#### E2E Tests (Playwright)
- Critical user flows
- Complete workflows
- Browser automation
- UI interactions

## TDD Workflow Steps

### Step 1: Write User Journeys
```
As a [role], I want to [action], so that [benefit]

Example:
As a user, I want to search for markets semantically,
so that I can find relevant markets even without exact keywords.
```

### Step 2: Generate Test Cases
For each user journey, create comprehensive test cases:

```typescript
describe('Semantic Search', () => {
  it('returns relevant markets for query', async () => {
    // Test implementation
  })

  it('handles empty query gracefully', async () => {
    // Test edge case
  })

  it('falls back to substring search when Redis unavailable', async () => {
    // Test fallback behavior
  })

  it('sorts results by similarity score', async () => {
    // Test sorting logic
  })
})
```

### Step 3: Run Tests (They Should Fail)
```bash
npm test
# Tests should fail - we haven't implemented yet
```

### Step 4: Implement Code
Write minimal code to make tests pass:

```typescript
// Implementation guided by tests
export async function searchMarkets(query: string) {
  // Implementation here
}
```

### Step 5: Run Tests Again
```bash
npm test
# Tests should now pass
```

### Step 6: Refactor
Improve code quality while keeping tests green:
- Remove duplication
- Improve naming
- Optimize performance
- Enhance readability

### Step 7: Verify Coverage
```bash
npm run test:coverage
# Verify 80%+ coverage achieved
```
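A compressed end-to-end illustration of the red-green loop for the search example (names hypothetical; a real implementation would query the database):

```typescript
// Red: an assertion on searchMarkets fails while it is unimplemented.
// Green: this minimal substring-match implementation makes it pass.
function searchMarkets(query: string, markets: string[]): string[] {
  const q = query.toLowerCase()
  return markets.filter((m) => m.toLowerCase().includes(q))
}

const markets = ['US Election 2024', 'BTC above 100k', 'Election turnout']
console.log(searchMarkets('election', markets))
// → ['US Election 2024', 'Election turnout']
```

Refactoring (Step 6) then swaps the naive filter for the semantic search while the same tests stay green.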

## Testing Patterns

### Unit Test Pattern (Jest/Vitest)
```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import { Button } from './Button'

describe('Button Component', () => {
  it('renders with correct text', () => {
    render(<Button>Click me</Button>)
    expect(screen.getByText('Click me')).toBeInTheDocument()
  })

  it('calls onClick when clicked', () => {
    const handleClick = jest.fn()
    render(<Button onClick={handleClick}>Click</Button>)

    fireEvent.click(screen.getByRole('button'))

    expect(handleClick).toHaveBeenCalledTimes(1)
  })

  it('is disabled when disabled prop is true', () => {
    render(<Button disabled>Click</Button>)
    expect(screen.getByRole('button')).toBeDisabled()
  })
})
```

### API Integration Test Pattern
```typescript
import { NextRequest } from 'next/server'
import { GET } from './route'

describe('GET /api/markets', () => {
  it('returns markets successfully', async () => {
    const request = new NextRequest('http://localhost/api/markets')
    const response = await GET(request)
    const data = await response.json()

    expect(response.status).toBe(200)
    expect(data.success).toBe(true)
    expect(Array.isArray(data.data)).toBe(true)
  })

  it('validates query parameters', async () => {
    const request = new NextRequest('http://localhost/api/markets?limit=invalid')
    const response = await GET(request)

    expect(response.status).toBe(400)
  })

  it('handles database errors gracefully', async () => {
    // Mock database failure
    const request = new NextRequest('http://localhost/api/markets')
    // Test error handling
  })
})
```

### E2E Test Pattern (Playwright)
```typescript
import { test, expect } from '@playwright/test'

test('user can search and filter markets', async ({ page }) => {
  // Navigate to markets page
  await page.goto('/')
  await page.click('a[href="/markets"]')

  // Verify page loaded
  await expect(page.locator('h1')).toContainText('Markets')

  // Search for markets
  await page.fill('input[placeholder="Search markets"]', 'election')

  // Wait for debounce and results
  await page.waitForTimeout(600)

  // Verify search results displayed
  const results = page.locator('[data-testid="market-card"]')
  await expect(results).toHaveCount(5, { timeout: 5000 })

  // Verify results contain search term
  const firstResult = results.first()
  await expect(firstResult).toContainText('election', { ignoreCase: true })

  // Filter by status
  await page.click('button:has-text("Active")')

  // Verify filtered results
  await expect(results).toHaveCount(3)
})

test('user can create a new market', async ({ page }) => {
  // Login first
  await page.goto('/creator-dashboard')

  // Fill market creation form
  await page.fill('input[name="name"]', 'Test Market')
  await page.fill('textarea[name="description"]', 'Test description')
  await page.fill('input[name="endDate"]', '2025-12-31')

  // Submit form
  await page.click('button[type="submit"]')

  // Verify success message
  await expect(page.locator('text=Market created successfully')).toBeVisible()

  // Verify redirect to market page
  await expect(page).toHaveURL(/\/markets\/test-market/)
})
```

## Test File Organization

```
src/
├── components/
│   ├── Button/
│   │   ├── Button.tsx
│   │   ├── Button.test.tsx       # Unit tests
│   │   └── Button.stories.tsx    # Storybook
│   └── MarketCard/
│       ├── MarketCard.tsx
│       └── MarketCard.test.tsx
├── app/
│   └── api/
│       └── markets/
│           ├── route.ts
│           └── route.test.ts     # Integration tests
└── e2e/
    ├── markets.spec.ts           # E2E tests
    ├── trading.spec.ts
    └── auth.spec.ts
```

## Mocking External Services

### Supabase Mock
```typescript
jest.mock('@/lib/supabase', () => ({
  supabase: {
    from: jest.fn(() => ({
      select: jest.fn(() => ({
        eq: jest.fn(() => Promise.resolve({
          data: [{ id: 1, name: 'Test Market' }],
          error: null
        }))
      }))
    }))
  }
}))
```

### Redis Mock
```typescript
jest.mock('@/lib/redis', () => ({
  searchMarketsByVector: jest.fn(() => Promise.resolve([
    { slug: 'test-market', similarity_score: 0.95 }
  ])),
  checkRedisHealth: jest.fn(() => Promise.resolve({ connected: true }))
}))
```

### OpenAI Mock
```typescript
jest.mock('@/lib/openai', () => ({
  generateEmbedding: jest.fn(() => Promise.resolve(
    new Array(1536).fill(0.1) // Mock 1536-dim embedding
  ))
}))
```

## Test Coverage Verification

### Run Coverage Report
```bash
npm run test:coverage
```

### Coverage Thresholds
```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```

## Common Testing Mistakes to Avoid

### ❌ WRONG: Testing Implementation Details
```typescript
// Don't test internal state
expect(component.state.count).toBe(5)
```

### ✅ CORRECT: Test User-Visible Behavior
```typescript
// Test what users see
expect(screen.getByText('Count: 5')).toBeInTheDocument()
```

### ❌ WRONG: Brittle Selectors
```typescript
// Breaks easily
await page.click('.css-class-xyz')
```

### ✅ CORRECT: Semantic Selectors
```typescript
// Resilient to changes
await page.click('button:has-text("Submit")')
await page.click('[data-testid="submit-button"]')
```

### ❌ WRONG: No Test Isolation
```typescript
// Tests depend on each other
test('creates user', () => { /* ... */ })
test('updates same user', () => { /* depends on previous test */ })
```

### ✅ CORRECT: Independent Tests
```typescript
// Each test sets up its own data
test('creates user', () => {
  const user = createTestUser()
  // Test logic
})

test('updates user', () => {
  const user = createTestUser()
  // Update logic
})
```

## Continuous Testing

### Watch Mode During Development
```bash
npm test -- --watch
# Tests run automatically on file changes
```

### Pre-Commit Hook
```bash
# Runs before every commit
npm test && npm run lint
```

### CI/CD Integration
```yaml
# GitHub Actions
- name: Run Tests
  run: npm test -- --coverage
- name: Upload Coverage
  uses: codecov/codecov-action@v3
```

## Best Practices

1. **Write Tests First** - Always TDD
2. **One Assert Per Test** - Focus on single behavior
3. **Descriptive Test Names** - Explain what's tested
4. **Arrange-Act-Assert** - Clear test structure
5. **Mock External Dependencies** - Isolate unit tests
6. **Test Edge Cases** - Null, undefined, empty, large
7. **Test Error Paths** - Not just happy paths
8. **Keep Tests Fast** - Unit tests < 50ms each
9. **Clean Up After Tests** - No side effects
10. **Review Coverage Reports** - Identify gaps

## Success Metrics

- 80%+ code coverage achieved
- All tests passing (green)
- No skipped or disabled tests
- Fast test execution (< 30s for unit tests)
- E2E tests cover critical user flows
- Tests catch bugs before production

---

**Remember**: Tests are not optional. They are the safety net that enables confident refactoring, rapid development, and production reliability.

128
.kiro/skills/verification-loop/SKILL.md
Normal file
@@ -0,0 +1,128 @@
---
name: verification-loop
description: >
  A comprehensive verification system for Kiro sessions.
metadata:
  origin: ECC
---

# Verification Loop Skill

A comprehensive verification system for Kiro sessions.

## When to Use

Invoke this skill:
- After completing a feature or significant code change
- Before creating a PR
- When you want to ensure quality gates pass
- After refactoring

## Verification Phases

### Phase 1: Build Verification
```bash
# Check if project builds
npm run build 2>&1 | tail -20
# OR
pnpm build 2>&1 | tail -20
```

If the build fails, STOP and fix before continuing.

### Phase 2: Type Check
```bash
# TypeScript projects
npx tsc --noEmit 2>&1 | head -30

# Python projects
pyright . 2>&1 | head -30
```

Report all type errors. Fix critical ones before continuing.

### Phase 3: Lint Check
```bash
# JavaScript/TypeScript
npm run lint 2>&1 | head -30

# Python
ruff check . 2>&1 | head -30
```

### Phase 4: Test Suite
```bash
# Run tests with coverage
npm run test -- --coverage 2>&1 | tail -50

# Check coverage threshold
# Target: 80% minimum
```

Report:
- Total tests: X
- Passed: X
- Failed: X
- Coverage: X%

### Phase 5: Security Scan
```bash
# Check for secrets
grep -rn "sk-" --include="*.ts" --include="*.js" . 2>/dev/null | head -10
grep -rn "api_key" --include="*.ts" --include="*.js" . 2>/dev/null | head -10

# Check for console.log
grep -rn "console.log" --include="*.ts" --include="*.tsx" src/ 2>/dev/null | head -10
```

### Phase 6: Diff Review
```bash
# Show what changed
git diff --stat
git diff HEAD~1 --name-only
```

Review each changed file for:
- Unintended changes
- Missing error handling
- Potential edge cases
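The phases above can be chained in one helper so a single command yields a pass/fail line per phase (a sketch; substitute your project's actual build, lint, and test commands):

```shell
# run_checks: run each command in turn, report PASS/FAIL,
# and return non-zero if any phase failed.
run_checks() {
  local fail=0
  for cmd in "$@"; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      echo "PASS: $cmd"
    else
      echo "FAIL: $cmd"
      fail=1
    fi
  done
  return "$fail"
}

# Example:
# run_checks "npm run build" "npx tsc --noEmit" "npm run lint" "npm test"
```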

## Output Format

After running all phases, produce a verification report:

```
VERIFICATION REPORT
==================

Build:    [PASS/FAIL]
Types:    [PASS/FAIL] (X errors)
Lint:     [PASS/FAIL] (X warnings)
Tests:    [PASS/FAIL] (X/Y passed, Z% coverage)
Security: [PASS/FAIL] (X issues)
Diff:     [X files changed]

Overall: [READY/NOT READY] for PR

Issues to Fix:
1. ...
2. ...
```

## Continuous Mode

For long sessions, run verification every 15 minutes or after major changes:

```markdown
Set a mental checkpoint:
- After completing each function
- After finishing a component
- Before moving to next task

Run: /verify
```

## Integration with Hooks

This skill complements postToolUse hooks but provides deeper verification.
Hooks catch issues immediately; this skill provides comprehensive review.