Update example skills and rename 'artifacts-builder' (#112)

* Export updated examples

* Rename 'artifacts-builder' to 'web-artifacts-builder'
Keith Lazuka
2025-11-17 16:34:29 -05:00
committed by GitHub
parent e5c60158df
commit 0f77e501e6
20 changed files with 1051 additions and 2287 deletions


@@ -28,7 +28,6 @@
 "strict": false,
 "skills": [
 "./algorithmic-art",
-"./artifacts-builder",
 "./brand-guidelines",
 "./canvas-design",
 "./frontend-design",
@@ -37,6 +36,7 @@
 "./skill-creator",
 "./slack-gif-creator",
 "./theme-factory",
+"./web-artifacts-builder",
 "./webapp-testing"
 ]
 }


@@ -8,7 +8,7 @@ license: Complete terms in LICENSE.txt
## Overview

Create MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks.

---
@@ -20,222 +20,131 @@ Creating a high-quality MCP server involves four main phases:
### Phase 1: Deep Research and Planning

#### 1.1 Understand Modern MCP Design

**API Coverage vs. Workflow Tools:**
Balance comprehensive API endpoint coverage with specialized workflow tools. Workflow tools can be more convenient for specific tasks, while comprehensive coverage gives agents flexibility to compose operations. Performance varies by client—some clients benefit from code execution that combines basic tools, while others work better with higher-level workflows. When uncertain, prioritize comprehensive API coverage.

**Tool Naming and Discoverability:**
Clear, descriptive tool names help agents find the right tools quickly. Use consistent prefixes (e.g., `github_create_issue`, `github_list_repos`) and action-oriented naming.

**Context Management:**
Agents benefit from concise tool descriptions and the ability to filter/paginate results. Design tools that return focused, relevant data. Some clients support code execution, which can help agents filter and process data efficiently.

**Actionable Error Messages:**
Error messages should guide agents toward solutions with specific suggestions and next steps.
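As a sketch of this idea, an error payload might pair the failure with a concrete next step (the `filter='active_only'` suggestion and helper name below are illustrative, not part of any SDK):

```typescript
// Hypothetical error shape: tell the agent what to try next,
// not just what went wrong.
interface ToolError {
  error: string;
  suggestion: string;
}

function tooManyResultsError(count: number, limit: number): ToolError {
  return {
    error: `Query returned ${count} results, which exceeds the response limit of ${limit}.`,
    // A concrete, actionable next step for the agent.
    suggestion: "Try using filter='active_only' or pass a smaller 'limit' to reduce results.",
  };
}
```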
#### 1.2 Study MCP Protocol Documentation

**Navigate the MCP specification:**

Start with the sitemap to find relevant pages: `https://modelcontextprotocol.io/sitemap.xml`

Then fetch specific pages with the `.md` suffix for markdown format (e.g., `https://modelcontextprotocol.io/specification/draft.md`).

Key pages to review:
- Specification overview and architecture
- Transport mechanisms (streamable HTTP, stdio)
- Tool, resource, and prompt definitions
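The sitemap-then-`.md` pattern can be captured in a tiny helper (a hypothetical convenience function, shown here only to make the URL convention concrete):

```typescript
// Turn a page path found in the MCP sitemap into its markdown-format URL
// by appending the .md suffix described above.
const MCP_DOCS_BASE = "https://modelcontextprotocol.io";

function mcpDocUrl(pagePath: string): string {
  const trimmed = pagePath.replace(/^\/+|\/+$/g, ""); // drop stray slashes
  return `${MCP_DOCS_BASE}/${trimmed}.md`;
}
```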
#### 1.3 Study Framework Documentation

**Recommended stack:**
- **Language**: TypeScript (high-quality SDK support and good compatibility in many execution environments, e.g. MCPB; AI models are also good at generating TypeScript code, benefiting from its broad usage, static typing, and good linting tools)
- **Transport**: Streamable HTTP for remote servers, using stateless JSON (simpler to scale and maintain than stateful sessions and streaming responses); stdio for local servers

**Load framework documentation:**
- **MCP Best Practices**: [📋 View Best Practices](./reference/mcp_best_practices.md) - Core guidelines

**For TypeScript (recommended):**
- **TypeScript SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - TypeScript patterns and examples

**For Python:**
- **Python SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- [🐍 Python Guide](./reference/python_mcp_server.md) - Python patterns and examples
#### 1.4 Plan Your Implementation

**Understand the API:**
Review the service's API documentation to identify key endpoints, authentication requirements, and data models. Use web search and WebFetch as needed.

**Tool Selection:**
Prioritize comprehensive API coverage. List the endpoints to implement, starting with the most common operations.

---
### Phase 2: Implementation

#### 2.1 Set Up Project Structure

See the language-specific guides for project setup:
- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - Project structure, package.json, tsconfig.json
- [🐍 Python Guide](./reference/python_mcp_server.md) - Module organization, dependencies

#### 2.2 Implement Core Infrastructure

First, create shared utilities:
- API client with authentication
- Error handling helpers
- Response formatting (JSON/Markdown)
- Pagination support
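A shared request helper might start out like this sketch, assuming a hypothetical REST service at `API_BASE_URL` with bearer-token auth (names and the env variable are illustrative; a real server would add retries and rate-limit handling):

```typescript
const API_BASE_URL = "https://api.example.com"; // placeholder service

// Build a URL with only the query parameters that are actually set.
function buildUrl(
  path: string,
  params: Record<string, string | number | undefined> = {},
): string {
  const url = new URL(path, API_BASE_URL);
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) url.searchParams.set(key, String(value));
  }
  return url.toString();
}

// Shared fetch wrapper: auth header, actionable error on failure.
async function apiRequest(
  path: string,
  params?: Record<string, string | number | undefined>,
): Promise<unknown> {
  const response = await fetch(buildUrl(path, params ?? {}), {
    headers: { Authorization: `Bearer ${process.env.API_TOKEN ?? ""}` },
  });
  if (!response.ok) {
    throw new Error(
      `Request to ${path} failed (${response.status}). ` +
        `Check API_TOKEN or reduce the requested page size.`,
    );
  }
  return response.json();
}
```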
#### 2.3 Implement Tools

For each tool:

**Input Schema:**
- Use Zod (TypeScript) or Pydantic (Python)
- Include constraints and clear descriptions
- Add examples in field descriptions

**Output Schema:**
- Define `outputSchema` where possible for structured data
- Use `structuredContent` in tool responses (TypeScript SDK feature)
- Helps clients understand and process tool outputs

**Tool Description:**
- Concise summary of functionality
- Parameter descriptions
- Return type schema

**Implementation:**
- Async/await for I/O operations
- Proper error handling with actionable messages
- Support pagination where applicable
- Return both text content and structured data when using modern SDKs

**Annotations:**
- `readOnlyHint`: true/false
- `destructiveHint`: true/false
- `idempotentHint`: true/false
- `openWorldHint`: true/false
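The pieces above can be sketched as one tool definition. This is an illustrative plain-object shape for a hypothetical `github_list_issues` tool, kept dependency-free for clarity; a real TypeScript server would pass the equivalent Zod schema and handler to the SDK's `server.registerTool`:

```typescript
// Illustrative tool definition (plain objects, no SDK imports).
const listIssuesTool = {
  name: "github_list_issues",
  description:
    "List issues in a repository. Returns number, title, and state for each issue.",
  inputSchema: {
    type: "object",
    properties: {
      repo: {
        type: "string",
        description: "Repository in 'owner/name' form, e.g. 'octocat/hello-world'",
      },
      limit: {
        type: "number",
        minimum: 1,
        maximum: 100,
        description: "Max issues to return (default 20)",
      },
    },
    required: ["repo"],
  },
  annotations: {
    readOnlyHint: true,     // only reads data
    destructiveHint: false, // cannot delete or overwrite anything
    idempotentHint: true,   // repeated calls return the same result
    openWorldHint: true,    // talks to an external system
  },
};

// Handlers return readable text plus structuredContent for clients
// that understand structured tool output.
function handleListIssues(issues: { number: number; title: string }[]) {
  return {
    content: [
      { type: "text", text: issues.map((i) => `#${i.number} ${i.title}`).join("\n") },
    ],
    structuredContent: { issues },
  };
}
```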
---
### Phase 3: Review and Test

#### 3.1 Code Quality

Review for:
- No duplicated code (DRY principle)
- Consistent error handling
- Full type coverage
- Clear tool descriptions

#### 3.2 Build and Test

**TypeScript:**
- Run `npm run build` to verify compilation
- Test with MCP Inspector: `npx @modelcontextprotocol/inspector`

**Python:**
- Verify syntax: `python -m py_compile your_server.py`
- Test with MCP Inspector

See the language-specific guides for detailed testing approaches and quality checklists.
---
@@ -247,7 +156,7 @@ After implementing your MCP server, create comprehensive evaluations to test its
#### 4.1 Understand Evaluation Purpose

Use evaluations to test whether LLMs can effectively use your MCP server to answer realistic, complex questions.

#### 4.2 Create 10 Evaluation Questions
@@ -260,7 +169,7 @@ To create effective evaluations, follow the process outlined in the evaluation g
#### 4.3 Evaluation Requirements

Ensure each question is:
- **Independent**: Not dependent on other questions
- **Read-only**: Only non-destructive operations required
- **Complex**: Requiring multiple tool calls and deep exploration
@@ -291,13 +200,12 @@ Create an XML file with this structure:
Load these resources as needed during development:

### Core MCP Documentation (Load First)
- **MCP Protocol**: Start with the sitemap at `https://modelcontextprotocol.io/sitemap.xml`, then fetch specific pages with the `.md` suffix
- [📋 MCP Best Practices](./reference/mcp_best_practices.md) - Universal MCP guidelines including:
  - Server and tool naming conventions
  - Response format guidelines (JSON vs Markdown)
  - Pagination best practices
  - Transport selection (streamable HTTP vs stdio)
  - Security and error handling standards

### SDK Documentation (Load During Phase 1/2)


@@ -1,10 +1,4 @@
# MCP Server Best Practices

## Quick Reference
@@ -27,106 +21,77 @@ This document compiles essential best practices and guidelines for building Mode
- Return `has_more`, `next_offset`, `total_count`
- Default to 20-50 items

### Transport
- **Streamable HTTP**: For remote servers, multi-client scenarios
- **stdio**: For local integrations, command-line tools
- Avoid SSE (deprecated in favor of streamable HTTP)

---
## Server Naming Conventions

Follow these standardized naming patterns:

**Python**: Use format `{service}_mcp` (lowercase with underscores)
- Examples: `slack_mcp`, `github_mcp`, `jira_mcp`

**Node/TypeScript**: Use format `{service}-mcp-server` (lowercase with hyphens)
- Examples: `slack-mcp-server`, `github-mcp-server`, `jira-mcp-server`

The name should be general, descriptive of the service being integrated, easy to infer from the task description, and without version numbers.

---
## Tool Naming and Design

### Tool Naming
1. **Use snake_case**: `search_users`, `create_project`, `get_channel_info`
2. **Include service prefix**: Anticipate that your MCP server may be used alongside other MCP servers
   - Use `slack_send_message` instead of just `send_message`
   - Use `github_create_issue` instead of just `create_issue`
3. **Be action-oriented**: Start with verbs (get, list, search, create, etc.)
4. **Be specific**: Avoid generic names that could conflict with other servers

### Tool Design
- Tool descriptions must narrowly and unambiguously describe functionality
- Descriptions must precisely match actual functionality
- Provide tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
- Keep tool operations focused and atomic

---
## Response Formats

All tools that return data should support multiple formats:

### JSON Format (`response_format="json"`)
- Machine-readable structured data
- Include all available fields and metadata
- Consistent field names and types
- Use for programmatic processing

### Markdown Format (`response_format="markdown"`, typically default)
- Human-readable formatted text
- Use headers, lists, and formatting for clarity
- Convert timestamps to human-readable format
- Show display names with IDs in parentheses
- Omit verbose metadata
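As a sketch of the dual-format idea, here is a hypothetical formatter that renders the same user list as JSON or Markdown (field names and the epoch-seconds timestamp convention are illustrative):

```typescript
interface User {
  id: string;
  name: string;
  createdAt: number; // epoch seconds
}

function formatUsers(users: User[], responseFormat: "json" | "markdown"): string {
  if (responseFormat === "json") {
    // Machine-readable: keep every field as-is.
    return JSON.stringify({ users }, null, 2);
  }
  // Human-readable: display name with ID in parentheses,
  // timestamp converted from epoch to a readable UTC string.
  return users
    .map((u) => {
      const ts = new Date(u.createdAt * 1000)
        .toISOString()
        .replace("T", " ")
        .slice(0, 19);
      return `- ${u.name} (${u.id}), joined ${ts} UTC`;
    })
    .join("\n");
}
```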
---
## Pagination

For tools that list resources:

- **Always respect the `limit` parameter**
- **Implement pagination**: Use `offset` or cursor-based pagination
- **Return pagination metadata**: Include `has_more`, `next_offset`/`next_cursor`, `total_count`
- **Never load all results into memory**: Especially important for large datasets
- **Default to reasonable limits**: 20-50 items is typical

Example pagination response:

```json
{
  "total": 150,
@@ -140,776 +105,145 @@ Example pagination response structure:
---

## Transport Options

### Streamable HTTP

**Best for**: Remote servers, web services, multi-client scenarios

**Characteristics**:
- Bidirectional communication over HTTP
- Supports multiple simultaneous clients
- Can be deployed as a web service
- Enables server-to-client notifications

**Use when**:
- Serving multiple clients simultaneously
- Deploying as a cloud service
- Integration with web applications
### stdio

**Best for**: Local integrations, command-line tools

**Characteristics**:
- Standard input/output stream communication
- Simple setup, no network configuration needed
- Runs as a subprocess of the client

**Use when**:
- Building tools for local development environments
- Integrating with desktop applications
- Single-user, single-session scenarios

**Note**: stdio servers should NOT log to stdout (use stderr for logging)

### Transport Selection

| Criterion | stdio | Streamable HTTP |
|-----------|-------|-----------------|
| **Deployment** | Local | Remote |
| **Clients** | Single | Multiple |
| **Complexity** | Low | Medium |
| **Real-time** | No | Yes |
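Because stdout carries the protocol stream for stdio servers, diagnostics must go to stderr. A minimal sketch of a hypothetical log helper:

```typescript
// For stdio transport, stdout is reserved for protocol messages,
// so all diagnostic output goes to stderr.
function logToStderr(level: "info" | "warn" | "error", message: string): string {
  const line = `[${level}] ${message}`;
  process.stderr.write(line + "\n"); // never process.stdout for stdio servers
  return line;
}
```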
---

## Security Best Practices
### Authentication and Authorization

**OAuth 2.1**:
- Use secure OAuth 2.1 with certificates from recognized authorities
- Validate access tokens before processing requests
- Only accept tokens specifically intended for your server

**API Keys**:
- Store API keys in environment variables, never in code
- Validate keys on server startup
- Provide clear error messages when authentication fails
### Input Validation
- Sanitize file paths to prevent directory traversal
- Validate URLs and external identifiers
- Check parameter sizes and ranges
- Prevent command injection in system calls
- Use schema validation (Pydantic/Zod) for all inputs
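For the file-path case, one dependency-free sketch — the `resolveSafe` helper is illustrative, not a library API:

```typescript
import * as path from "path";

// Resolve a user-supplied path against a base directory and reject
// anything that escapes it (e.g. "../../etc/passwd").
function resolveSafe(baseDir: string, userPath: string): string {
  const resolved = path.resolve(baseDir, userPath);
  if (!resolved.startsWith(path.resolve(baseDir) + path.sep)) {
    throw new Error("Path escapes base directory");
  }
  return resolved;
}
```

Here `resolveSafe("/srv/data", "notes.txt")` resolves normally, while `resolveSafe("/srv/data", "../secret")` throws.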
### Error Handling
- Don't expose internal errors to clients
- Log security-relevant errors server-side
- Provide helpful but not revealing error messages
- Clean up resources after errors
### Privacy and Data Protection

**Data collection:**
- Only collect data strictly necessary for functionality
- Don't collect extraneous conversation data
- Don't collect PII unless explicitly required for the tool's purpose
- Provide clear information about what data is accessed

**Data transmission:**
- Don't send data to servers outside your organization without disclosure
- Use secure transmission (HTTPS) for all network communication
- Validate certificates for external services

### DNS Rebinding Protection
For streamable HTTP servers running locally:
- Enable DNS rebinding protection
- Validate the `Origin` header on all incoming connections
- Bind to `127.0.0.1` rather than `0.0.0.0`
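The `Origin` check can be a plain predicate; the allow-list below is illustrative. Non-browser clients often omit the header, so this sketch treats a missing header as allowed and relies on the localhost bind for isolation:

```typescript
// Allow-list Origin validation for a locally bound streamable HTTP server.
const ALLOWED_ORIGINS = new Set([
  "http://localhost:3000",
  "http://127.0.0.1:3000",
]);

function isAllowedOrigin(origin: string | undefined): boolean {
  if (origin === undefined) return true; // non-browser client
  return ALLOWED_ORIGINS.has(origin);
}
```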
---
## Resource Management Best Practices
1. Only suggest necessary resources
2. Use clear, descriptive names for roots
3. Handle resource boundaries properly
4. Respect client control over resources
5. Use model-controlled primitives (tools) for automatic data exposure

---
## Tool Annotations
Provide annotations to help clients understand tool behavior:

| Annotation | Type | Default | Description |
|-----------|------|---------|-------------|
| `readOnlyHint` | boolean | false | Tool does not modify its environment |
| `destructiveHint` | boolean | true | Tool may perform destructive updates |
| `idempotentHint` | boolean | false | Repeated calls with same args have no additional effect |
| `openWorldHint` | boolean | true | Tool interacts with external entities |

**Important**: Annotations are hints, not security guarantees. Clients should not make security-critical decisions based solely on annotations.
---
## Prompt Management Best Practices
- Clients should show users proposed prompts
- Users should be able to modify or reject prompts
- Clients should show users completions
- Users should be able to modify or reject completions
- Consider costs when using sampling

---
## Error Handling Standards
- Use standard JSON-RPC error codes
- Report tool errors within result objects (not protocol-level errors)
- Provide helpful, specific error messages with suggested next steps
- Don't expose internal implementation details
- Clean up resources properly on errors
Example error handling:
```typescript
try {
const result = performOperation();
return { content: [{ type: "text", text: result }] };
} catch (error) {
return {
isError: true,
content: [{
type: "text",
text: `Error: ${error.message}. Try using filter='active_only' to reduce results.`
}]
};
}
```
---
## Documentation Requirements
- Provide clear documentation of all tools and capabilities
- Include working examples (at least 3 per major feature)
- Document security considerations
- Specify required permissions and access levels
- Document rate limits and performance characteristics
---
## Compliance and Monitoring
- Implement logging for debugging and monitoring
- Track tool usage patterns
- Monitor for potential abuse
- Maintain audit trails for security-relevant operations
- Be prepared for ongoing compliance reviews
---
## Summary
These best practices represent the comprehensive guidelines for building secure, efficient, and compliant MCP servers that work well within the ecosystem. Developers should follow these guidelines to ensure their MCP servers meet the standards for inclusion in the MCP directory and provide a safe, reliable experience for users.
----------
# Tools
> Enable LLMs to perform actions through your server
Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
<Note>
Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
</Note>
## Overview
Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
* **Discovery**: Clients can obtain a list of available tools by sending a `tools/list` request
* **Invocation**: Tools are called using the `tools/call` request, where servers perform the requested operation and return results
* **Flexibility**: Tools can range from simple calculations to complex API interactions
Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
## Tool definition structure
Each tool is defined with the following structure:
```typescript
{
name: string; // Unique identifier for the tool
description?: string; // Human-readable description
inputSchema: { // JSON Schema for the tool's parameters
type: "object",
properties: { ... } // Tool-specific parameters
},
annotations?: { // Optional hints about tool behavior
title?: string; // Human-readable title for the tool
readOnlyHint?: boolean; // If true, the tool does not modify its environment
destructiveHint?: boolean; // If true, the tool may perform destructive updates
idempotentHint?: boolean; // If true, repeated calls with same args have no additional effect
openWorldHint?: boolean; // If true, tool interacts with external entities
}
}
```
## Implementing tools
Here's an example of implementing a basic tool in an MCP server:
<Tabs>
<Tab title="TypeScript">
```typescript
const server = new Server({
name: "example-server",
version: "1.0.0"
}, {
capabilities: {
tools: {}
}
});
// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [{
name: "calculate_sum",
description: "Add two numbers together",
inputSchema: {
type: "object",
properties: {
a: { type: "number" },
b: { type: "number" }
},
required: ["a", "b"]
}
}]
};
});
// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === "calculate_sum") {
const { a, b } = request.params.arguments;
return {
content: [
{
type: "text",
text: String(a + b)
}
]
};
}
throw new Error("Tool not found");
});
```
</Tab>
<Tab title="Python">
```python
app = Server("example-server")
@app.list_tools()
async def list_tools() -> list[types.Tool]:
return [
types.Tool(
name="calculate_sum",
description="Add two numbers together",
inputSchema={
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"}
},
"required": ["a", "b"]
}
)
]
@app.call_tool()
async def call_tool(
name: str,
arguments: dict
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
if name == "calculate_sum":
a = arguments["a"]
b = arguments["b"]
result = a + b
return [types.TextContent(type="text", text=str(result))]
raise ValueError(f"Tool not found: {name}")
```
</Tab>
</Tabs>
## Example tool patterns
Here are some examples of types of tools that a server could provide:
### System operations
Tools that interact with the local system:
```typescript
{
name: "execute_command",
description: "Run a shell command",
inputSchema: {
type: "object",
properties: {
command: { type: "string" },
args: { type: "array", items: { type: "string" } }
}
}
}
```
### API integrations
Tools that wrap external APIs:
```typescript
{
name: "github_create_issue",
description: "Create a GitHub issue",
inputSchema: {
type: "object",
properties: {
title: { type: "string" },
body: { type: "string" },
labels: { type: "array", items: { type: "string" } }
}
}
}
```
### Data processing
Tools that transform or analyze data:
```typescript
{
name: "analyze_csv",
description: "Analyze a CSV file",
inputSchema: {
type: "object",
properties: {
filepath: { type: "string" },
operations: {
type: "array",
items: {
enum: ["sum", "average", "count"]
}
}
}
}
}
```
## Best practices
When implementing tools:
1. Provide clear, descriptive names and descriptions
2. Use detailed JSON Schema definitions for parameters
3. Include examples in tool descriptions to demonstrate how the model should use them
4. Implement proper error handling and validation
5. Use progress reporting for long operations
6. Keep tool operations focused and atomic
7. Document expected return value structures
8. Implement proper timeouts
9. Consider rate limiting for resource-intensive operations
10. Log tool usage for debugging and monitoring
### Tool name conflicts
MCP client applications and MCP server proxies may encounter tool name conflicts when building their own tool lists. For example, two connected MCP servers `web1` and `web2` may both expose a tool named `search_web`.
Applications may disambiguate tools with one of the following strategies (among others; not an exhaustive list):
* Concatenating a unique, user-defined server name with the tool name, e.g. `web1___search_web` and `web2___search_web`. This strategy may be preferable when unique server names are already provided by the user in a configuration file.
* Generating a random prefix for the tool name, e.g. `jrwxs___search_web` and `6cq52___search_web`. This strategy may be preferable in server proxies where user-defined unique names are not available.
* Using the server URI as a prefix for the tool name, e.g. `web1.example.com:search_web` and `web2.example.com:search_web`. This strategy may be suitable when working with remote MCP servers.
Note that the server-provided name from the initialization flow is not guaranteed to be unique and is not generally suitable for disambiguation purposes.
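The first strategy above can be sketched as a pair of helpers; the `___` separator follows the example, and the function names are illustrative:

```typescript
// Qualify a tool name with its server name so two servers exposing
// the same tool (e.g. search_web) stay distinguishable.
function qualifyToolName(serverName: string, toolName: string): string {
  return `${serverName}___${toolName}`;
}

function splitQualifiedName(qualified: string): { server: string; tool: string } {
  const idx = qualified.indexOf("___");
  if (idx === -1) throw new Error("Not a qualified tool name");
  return { server: qualified.slice(0, idx), tool: qualified.slice(idx + 3) };
}
```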
## Security considerations
When exposing tools:
### Input validation
* Validate all parameters against the schema
* Sanitize file paths and system commands
* Validate URLs and external identifiers
* Check parameter sizes and ranges
* Prevent command injection
### Access control
* Implement authentication where needed
* Use appropriate authorization checks
* Audit tool usage
* Rate limit requests
* Monitor for abuse
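Rate limiting can be as small as a fixed-window counter per client. A dependency-free sketch — the window size, limit, and class shape are illustrative:

```typescript
// Minimal fixed-window rate limiter for tool calls.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the call is allowed, false if the client is over quota.
  allow(clientId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count < this.limit) {
      entry.count++;
      return true;
    }
    return false;
  }
}
```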
### Error handling
* Don't expose internal errors to clients
* Log security-relevant errors
* Handle timeouts appropriately
* Clean up resources after errors
* Validate return values
## Tool discovery and updates
MCP supports dynamic tool discovery:
1. Clients can list available tools at any time
2. Servers can notify clients when tools change using `notifications/tools/list_changed`
3. Tools can be added or removed during runtime
4. Tool definitions can be updated (though this should be done carefully)
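The change notification in step 2 is an ordinary JSON-RPC notification with no `id` field; SDKs usually wrap it in a helper (e.g. `sendToolListChanged()` in the TypeScript SDK). Built by hand, it looks like:

```typescript
// JSON-RPC notification emitted after the tool list changes.
// Notifications carry no `id`, so the client sends no response.
function toolListChangedNotification() {
  return {
    jsonrpc: "2.0" as const,
    method: "notifications/tools/list_changed",
  };
}
```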
## Error handling
Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
1. Set `isError` to `true` in the result
2. Include error details in the `content` array
Here's an example of proper error handling for tools:
<Tabs>
<Tab title="TypeScript">
```typescript
try {
// Tool operation
const result = performOperation();
return {
content: [
{
type: "text",
text: `Operation successful: ${result}`
}
]
};
} catch (error) {
return {
isError: true,
content: [
{
type: "text",
text: `Error: ${error.message}`
}
]
};
}
```
</Tab>
<Tab title="Python">
```python
try:
# Tool operation
result = perform_operation()
return types.CallToolResult(
content=[
types.TextContent(
type="text",
text=f"Operation successful: {result}"
)
]
)
except Exception as error:
return types.CallToolResult(
isError=True,
content=[
types.TextContent(
type="text",
text=f"Error: {str(error)}"
)
]
)
```
</Tab>
</Tabs>
This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
## Tool annotations
Tool annotations provide additional metadata about a tool's behavior, helping clients understand how to present and manage tools. These annotations are hints that describe the nature and impact of a tool, but should not be relied upon for security decisions.
### Purpose of tool annotations
Tool annotations serve several key purposes:
1. Provide UX-specific information without affecting model context
2. Help clients categorize and present tools appropriately
3. Convey information about a tool's potential side effects
4. Assist in developing intuitive interfaces for tool approval
### Available tool annotations
The MCP specification defines the following annotations for tools:
| Annotation | Type | Default | Description |
| ----------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `title` | string | - | A human-readable title for the tool, useful for UI display |
| `readOnlyHint` | boolean | false | If true, indicates the tool does not modify its environment |
| `destructiveHint` | boolean | true | If true, the tool may perform destructive updates (only meaningful when `readOnlyHint` is false) |
| `idempotentHint` | boolean | false | If true, calling the tool repeatedly with the same arguments has no additional effect (only meaningful when `readOnlyHint` is false) |
| `openWorldHint` | boolean | true | If true, the tool may interact with an "open world" of external entities |
### Example usage
Here's how to define tools with annotations for different scenarios:
```typescript
// A read-only search tool
{
name: "web_search",
description: "Search the web for information",
inputSchema: {
type: "object",
properties: {
query: { type: "string" }
},
required: ["query"]
},
annotations: {
title: "Web Search",
readOnlyHint: true,
openWorldHint: true
}
}
// A destructive file deletion tool
{
name: "delete_file",
description: "Delete a file from the filesystem",
inputSchema: {
type: "object",
properties: {
path: { type: "string" }
},
required: ["path"]
},
annotations: {
title: "Delete File",
readOnlyHint: false,
destructiveHint: true,
idempotentHint: true,
openWorldHint: false
}
}
// A non-destructive database record creation tool
{
name: "create_record",
description: "Create a new record in the database",
inputSchema: {
type: "object",
properties: {
table: { type: "string" },
data: { type: "object" }
},
required: ["table", "data"]
},
annotations: {
title: "Create Database Record",
readOnlyHint: false,
destructiveHint: false,
idempotentHint: false,
openWorldHint: false
}
}
```
### Integrating annotations in server implementation
<Tabs>
<Tab title="TypeScript">
```typescript
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [{
name: "calculate_sum",
description: "Add two numbers together",
inputSchema: {
type: "object",
properties: {
a: { type: "number" },
b: { type: "number" }
},
required: ["a", "b"]
},
annotations: {
title: "Calculate Sum",
readOnlyHint: true,
openWorldHint: false
}
}]
};
});
```
</Tab>
<Tab title="Python">
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("example-server")
@mcp.tool(
annotations={
"title": "Calculate Sum",
"readOnlyHint": True,
"openWorldHint": False
}
)
async def calculate_sum(a: float, b: float) -> str:
"""Add two numbers together.
Args:
a: First number to add
b: Second number to add
"""
result = a + b
return str(result)
```
</Tab>
</Tabs>
### Best practices for tool annotations
1. **Be accurate about side effects**: Clearly indicate whether a tool modifies its environment and whether those modifications are destructive.
2. **Use descriptive titles**: Provide human-friendly titles that clearly describe the tool's purpose.
3. **Indicate idempotency properly**: Mark tools as idempotent only if repeated calls with the same arguments truly have no additional effect.
4. **Set appropriate open/closed world hints**: Indicate whether a tool interacts with a closed system (like a database) or an open system (like the web).
5. **Remember annotations are hints**: All properties in ToolAnnotations are hints and not guaranteed to provide a faithful description of tool behavior. Clients should never make security-critical decisions based solely on annotations.
## Testing tools
A comprehensive testing strategy for MCP tools should cover:
* **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately
* **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies
* **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting
* **Performance testing**: Check behavior under load, timeout handling, and resource cleanup
* **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources
----------
This document provides Node/TypeScript-specific best practices and examples for MCP servers.
### Key Imports
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import express from "express";
import { z } from "zod";
import axios, { AxiosError } from "axios";
```
### Server Initialization

### Tool Registration Pattern
```typescript
server.registerTool(
  "tool_name",
  {
    title: "Tool Display Name",
    description: "What the tool does",
    inputSchema: { param: z.string() },
    outputSchema: { result: z.string() }
  },
  async ({ param }) => {
    const output = { result: `Processed: ${param}` };
    return {
      content: [{ type: "text", text: JSON.stringify(output) }],
      structuredContent: output // Modern pattern for structured data
    };
  }
);
```
---

The official MCP TypeScript SDK provides:
- Zod schema integration for runtime input validation
- Type-safe tool handler implementations

**IMPORTANT - Use Modern APIs Only:**
- **DO use**: `server.registerTool()`, `server.registerResource()`, `server.registerPrompt()`
- **DO NOT use**: old deprecated APIs such as `server.tool()`, `server.setRequestHandler(ListToolsRequestSchema, ...)`, or manual handler registration
- The `register*` methods provide better type safety, automatic schema handling, and are the recommended approach

See the MCP SDK documentation in the references for complete details.

## Server Naming Convention
Error Handling:
```typescript
      };
    }

    // Prepare structured output
    const output = {
      total,
      count: users.length,
      offset: params.offset,
      users: users.map((user: any) => ({
        id: user.id,
        name: user.name,
        email: user.email,
        ...(user.team ? { team: user.team } : {}),
        active: user.active ?? true
      })),
      has_more: total > params.offset + users.length,
      ...(total > params.offset + users.length ? {
        next_offset: params.offset + users.length
      } : {})
    };

    // Format text representation based on requested format
    let textContent: string;
    if (params.response_format === ResponseFormat.MARKDOWN) {
      const lines = [`# User Search Results: '${params.query}'`, "",
        `Found ${total} users (showing ${users.length})`, ""];
      for (const user of users) {
        lines.push(`## ${user.name} (${user.id})`);
        lines.push(`- **Email**: ${user.email}`);
        if (user.team) lines.push(`- **Team**: ${user.team}`);
        lines.push("");
      }
      textContent = lines.join("\n");
    } else {
      textContent = JSON.stringify(output, null, 2);
    }

    return {
      content: [{ type: "text", text: textContent }],
      structuredContent: output // Modern pattern for structured data
    };
  } catch (error) {
    return {
```
```typescript
);

// For stdio (local):
async function runStdio() {
  if (!process.env.EXAMPLE_API_KEY) {
    console.error("ERROR: EXAMPLE_API_KEY environment variable is required");
    process.exit(1);
  }

  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP server running via stdio");
}

// For streamable HTTP (remote):
async function runHTTP() {
  if (!process.env.EXAMPLE_API_KEY) {
    console.error("ERROR: EXAMPLE_API_KEY environment variable is required");
    process.exit(1);
  }

  const app = express();
  app.use(express.json());

  app.post('/mcp', async (req, res) => {
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: undefined,
      enableJsonResponse: true
    });
    res.on('close', () => transport.close());
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  });

  const port = parseInt(process.env.PORT || '3000');
  app.listen(port, () => {
    console.error(`MCP server running on http://localhost:${port}/mcp`);
  });
}

// Choose transport based on environment
const transport = process.env.TRANSPORT || 'stdio';
if (transport === 'http') {
  runHTTP().catch(error => {
    console.error("Server error:", error);
    process.exit(1);
  });
} else {
  runStdio().catch(error => {
    console.error("Server error:", error);
    process.exit(1);
  });
}
```

---
- **Resources**: When data is relatively static or template-based
- **Tools**: When operations have side effects or complex workflows

### Transport Options
The TypeScript SDK supports two main transport mechanisms:

#### Streamable HTTP (Recommended for Remote Servers)
```typescript
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";

const app = express();
app.use(express.json());

app.post('/mcp', async (req, res) => {
  // Create a new transport for each request (stateless, prevents request ID collisions)
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true
  });
  res.on('close', () => transport.close());
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```

#### stdio (For Local Integrations)
```typescript
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const transport = new StdioServerTransport();
await server.connect(transport);
```

**Transport selection:**
- **Streamable HTTP**: Web services, remote access, multiple clients
- **stdio**: Command-line tools, local development, subprocess integration

### Notification Support

Before finalizing your Node/TypeScript MCP server implementation, ensure:

### Advanced Features (where applicable)
- [ ] Resources registered for appropriate data endpoints
- [ ] Appropriate transport configured (stdio or streamable HTTP)
- [ ] Notifications implemented for dynamic server capabilities
- [ ] Type-safe with SDK interfaces
----------
## Character Limits and Truncation
Add a CHARACTER_LIMIT constant to prevent overwhelming responses:
```python
# At module level
CHARACTER_LIMIT = 25000 # Maximum response size in characters
async def search_tool(params: SearchInput) -> str:
result = generate_response(data)
# Check character limit and truncate if needed
if len(result) > CHARACTER_LIMIT:
# Truncate data and add notice
truncated_data = data[:max(1, len(data) // 2)]
response["data"] = truncated_data
response["truncated"] = True
response["truncation_message"] = (
f"Response truncated from {len(data)} to {len(truncated_data)} items. "
f"Use 'offset' parameter or add filters to see more results."
)
result = json.dumps(response, indent=2)
return result
```
## Error Handling
Provide clear, actionable error messages:

```python
# Constants
API_BASE_URL = "https://api.example.com/v1"
CHARACTER_LIMIT = 25000  # Maximum response size in characters
```
### Transport Options
FastMCP supports two main transport mechanisms:

```python
# stdio transport (for local tools) - default
if __name__ == "__main__":
    mcp.run()

# Streamable HTTP transport (for remote servers)
if __name__ == "__main__":
    mcp.run(transport="streamable_http", port=8000)
```

**Transport selection:**
- **stdio**: Command-line tools, local integrations, subprocess execution
- **Streamable HTTP**: Web services, remote access, multiple clients

---
Before finalizing your Python MCP server implementation, ensure:

- [ ] Resources registered for appropriate data endpoints
- [ ] Lifespan management implemented for persistent connections
- [ ] Structured output types used (TypedDict, Pydantic models)
- [ ] Appropriate transport configured (stdio or streamable HTTP)

### Code Quality
- [ ] File includes proper imports including Pydantic imports
- [ ] Pagination is properly implemented where applicable
- [ ] Large responses check CHARACTER_LIMIT and truncate with clear messages
- [ ] Filtering options are provided for potentially large result sets
- [ ] All async functions are properly defined with `async def`
- [ ] HTTP client usage follows async patterns with proper context managers
----------

3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks
## Core Principles
### Concise is Key
The context window is a public good. Skills share the context window with everything else Claude needs: system prompt, conversation history, other Skills' metadata, and the actual user request.
**Default assumption: Claude is already very smart.** Only add context Claude doesn't already have. Challenge each piece of information: "Does Claude really need this explanation?" and "Does this paragraph justify its token cost?"
Prefer concise examples over verbose explanations.
### Set Appropriate Degrees of Freedom
Match the level of specificity to the task's fragility and variability:
**High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.
**Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.
**Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.
Think of Claude as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).
### Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:
@@ -41,7 +63,10 @@ skill-name/
#### SKILL.md (required)

Every SKILL.md consists of:
- **Frontmatter** (YAML): Contains `name` and `description` fields. These are the only fields that Claude reads to determine when the skill gets used, thus it is very important to be clear and comprehensive in describing what the skill is, and when it should be used.
- **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).
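For example, a minimal frontmatter might look like this (the `pdf-processing` skill shown here is hypothetical):

```
---
name: pdf-processing
description: Extracts text and fills forms in PDF files. Use when working with .pdf files for text extraction, form filling, or page manipulation.
---
```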
#### Bundled Resources (optional)
@@ -74,19 +99,118 @@ Files not intended to be loaded into context, but rather used within the output
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Claude to use files without loading them into context
#### What to Not Include in a Skill
A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:
- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md
- etc.
The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.
### Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Claude (Unlimited because scripts can be executed without reading into context window)

#### Progressive Disclosure Patterns
Keep SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting out content into other files, it is very important to reference them from SKILL.md and describe clearly when to read them, to ensure the reader of the skill knows they exist and when to use them.
**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.
**Pattern 1: High-level guide with references**
```markdown
# PDF Processing
## Quick start
Extract text with pdfplumber:
[code example]
## Advanced features
- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```
Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
**Pattern 2: Domain-specific organization**
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:
```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
```
When a user asks about sales metrics, Claude only reads sales.md.
Similarly, for skills supporting multiple frameworks or variants, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
├── aws.md (AWS deployment patterns)
├── gcp.md (GCP deployment patterns)
└── azure.md (Azure deployment patterns)
```
When the user chooses AWS, Claude only reads aws.md.
**Pattern 3: Conditional details**
Show basic content, link to advanced content:
```markdown
# DOCX Processing
## Creating documents
Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```
Claude reads REDLINING.md or OOXML.md only when the user needs those features.
**Important guidelines:**
- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Claude can see the full scope when previewing.
## Skill Creation Process

Skill creation involves these steps:
1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Package the skill (run package_skill.py)
6. Iterate based on real usage
Follow these steps in order, skipping only if there is a clear reason why they are not applicable.
### Step 1: Understanding the Skill with Concrete Examples
@@ -154,27 +278,48 @@ After initialization, customize or remove the generated SKILL.md and example fil
### Step 4: Edit the Skill

When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Claude to use. Include information that would be beneficial and non-obvious to Claude. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Claude instance execute these tasks more effectively.
#### Learn Proven Design Patterns
Consult these helpful guides based on your skill's needs:
- **Multi-step processes**: See references/workflows.md for sequential workflows and conditional logic
- **Specific output formats or quality standards**: See references/output-patterns.md for template and example patterns
These files contain established best practices for effective skill design.
#### Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.

Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.

Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.
#### Update SKILL.md

**Writing Guidelines:** Always use imperative/infinitive form.

##### Frontmatter

Write the YAML frontmatter with `name` and `description`:

- `name`: The skill name
- `description`: This is the primary triggering mechanism for your skill, and helps Claude understand when to use the skill.
- Include both what the Skill does and specific triggers/contexts for when to use it.
- Include all "when to use" information here - Not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body are not helpful to Claude.
- Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Claude needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"
Do not include any other fields in YAML frontmatter.
##### Body
Write instructions for using the skill and its bundled resources.
### Step 5: Packaging a Skill

Once development of the skill is complete, it must be packaged into a distributable .skill file that gets shared with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:

```bash
scripts/package_skill.py <path/to/skill-folder>
@@ -189,12 +334,13 @@ scripts/package_skill.py <path/to/skill-folder> ./dist
The packaging script will:

1. **Validate** the skill automatically, checking:
   - YAML frontmatter format and required fields
   - Skill naming conventions and directory structure
   - Description completeness and quality
   - File organization and resource references
2. **Package** the skill if validation passes, creating a .skill file named after the skill (e.g., `my-skill.skill`) that includes all files and maintains the proper directory structure for distribution. The .skill file is a zip file with a .skill extension.
If validation fails, the script will report the errors and exit without creating a package. Fix any validation errors and run the packaging command again.
@@ -203,6 +349,7 @@ If validation fails, the script will report the errors and exit without creating
After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.

**Iteration workflow:**

1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated


@@ -0,0 +1,82 @@
# Output Patterns
Use these patterns when skills need to produce consistent, high-quality output.
## Template Pattern
Provide templates for output format. Match the level of strictness to your needs.
**For strict requirements (like API responses or data formats):**
```markdown
## Report structure
ALWAYS use this exact template structure:
# [Analysis Title]
## Executive summary
[One-paragraph overview of key findings]
## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data
## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```
**For flexible guidance (when adaptation is useful):**
```markdown
## Report structure
Here is a sensible default format, but use your best judgment:
# [Analysis Title]
## Executive summary
[Overview]
## Key findings
[Adapt sections based on what you discover]
## Recommendations
[Tailor to the specific context]
Adjust sections as needed for the specific analysis type.
```
## Examples Pattern
For skills where output quality depends on seeing examples, provide input/output pairs:
```markdown
## Commit message format
Generate commit messages following these examples:
**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication
Add login endpoint and token validation middleware
```
**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion
Use UTC timestamps consistently across report generation
```
Follow this style: type(scope): brief description, then detailed explanation.
```
Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.


@@ -0,0 +1,28 @@
# Workflow Patterns
## Sequential Workflows
For complex tasks, break operations into clear, sequential steps. It is often helpful to give Claude an overview of the process towards the beginning of SKILL.md:
```markdown
Filling a PDF form involves these steps:
1. Analyze the form (run analyze_form.py)
2. Create field mapping (edit fields.json)
3. Validate mapping (run validate_fields.py)
4. Fill the form (run fill_form.py)
5. Verify output (run verify_output.py)
```
## Conditional Workflows
For tasks with branching logic, guide Claude through decision points:
```markdown
1. Determine the modification type:
**Creating new content?** → Follow "Creation workflow" below
**Editing existing content?** → Follow "Editing workflow" below
2. Creation workflow: [steps]
3. Editing workflow: [steps]
```


@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder

Usage:
    python utils/package_skill.py <path/to/skill-folder> [output-directory]
@@ -18,14 +18,14 @@ from quick_validate import validate_skill
def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a .skill file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the .skill file (defaults to current directory)

    Returns:
        Path to the created .skill file, or None if error
    """
    skill_path = Path(skill_path).resolve()
@@ -61,11 +61,11 @@ def package_skill(skill_path, output_dir=None):
    else:
        output_path = Path.cwd()

    skill_filename = output_path / f"{skill_name}.skill"

    # Create the .skill file (zip format)
    try:
        with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory
            for file_path in skill_path.rglob('*'):
                if file_path.is_file():
@@ -74,11 +74,11 @@ def package_skill(skill_path, output_dir=None):
                    zipf.write(file_path, arcname)
                    print(f"  Added: {arcname}")

        print(f"\n✅ Successfully packaged skill to: {skill_filename}")
        return skill_filename

    except Exception as e:
        print(f"❌ Error creating .skill file: {e}")
        return None


@@ -6,6 +6,7 @@ Quick validation script for skills - minimal version
import sys
import os
import re
import yaml
from pathlib import Path

def validate_skill(skill_path):
@@ -27,31 +28,60 @@ def validate_skill(skill_path):
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter_text = match.group(1)
    # Parse YAML frontmatter
    try:
        frontmatter = yaml.safe_load(frontmatter_text)
        if not isinstance(frontmatter, dict):
            return False, "Frontmatter must be a YAML dictionary"
    except yaml.YAMLError as e:
        return False, f"Invalid YAML in frontmatter: {e}"

    # Define allowed properties
    ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata'}

    # Check for unexpected properties (excluding nested keys under metadata)
    unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
    if unexpected_keys:
        return False, (
            f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
            f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
        )
    # Check required fields
    if 'name' not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if 'description' not in frontmatter:
        return False, "Missing 'description' in frontmatter"
    # Extract name for validation
    name = frontmatter.get('name', '')
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()

    if name:
        # Check naming convention (hyphen-case: lowercase with hyphens)
        if not re.match(r'^[a-z0-9-]+$', name):
            return False, f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
        if name.startswith('-') or name.endswith('-') or '--' in name:
            return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"

        # Check name length (max 64 characters per spec)
        if len(name) > 64:
            return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
    # Extract and validate description
    description = frontmatter.get('description', '')
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()

    if description:
        # Check for angle brackets
        if '<' in description or '>' in description:
            return False, "Description cannot contain angle brackets (< or >)"

        # Check description length (max 1024 characters per spec)
        if len(description) > 1024:
            return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."

    return True, "Skill is valid!"
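The hyphen-case rules enforced above can be distilled into a standalone predicate. This is a sketch for illustration; the real script returns `(bool, message)` tuples instead:

```python
import re


def is_valid_skill_name(name: str) -> bool:
    # Hyphen-case: lowercase letters, digits, and hyphens only;
    # no leading/trailing or consecutive hyphens; max 64 characters
    if not re.match(r'^[a-z0-9-]+$', name):
        return False
    if name.startswith('-') or name.endswith('-') or '--' in name:
        return False
    return len(name) <= 64
```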


@@ -1,646 +1,254 @@
---
name: slack-gif-creator
description: Knowledge and utilities for creating animated GIFs optimized for Slack. Provides constraints, validation tools, and animation concepts. Use when users request animated GIFs for Slack like "make me a GIF of X doing Y for Slack."
license: Complete terms in LICENSE.txt
---

# Slack GIF Creator

A toolkit providing utilities and knowledge for creating animated GIFs optimized for Slack.

## Slack Requirements
**Dimensions:**
- Emoji GIFs: 128x128 (recommended)
- Message GIFs: 480x480

**Parameters:**
- FPS: 10-30 (lower is smaller file size)
- Colors: 48-128 (fewer = smaller file size)
- Duration: Keep under 3 seconds for emoji GIFs

## Core Workflow
```python
from core.gif_builder import GIFBuilder
from PIL import Image, ImageDraw

# 1. Create builder
builder = GIFBuilder(width=128, height=128, fps=10)

# 2. Generate frames
for i in range(12):
    frame = Image.new('RGB', (128, 128), (240, 248, 255))
    draw = ImageDraw.Draw(frame)
    # Draw your animation using PIL primitives
    # (circles, polygons, lines, etc.)
    builder.add_frame(frame)

# 3. Save with optimization
builder.save('output.gif', num_colors=48, optimize_for_emoji=True)
```
## Drawing Graphics
### Working with User-Uploaded Images
If a user uploads an image, consider whether they want to:
- **Use it directly** (e.g., "animate this", "split this into frames")
- **Use it as inspiration** (e.g., "make something like this")
Load and work with images using PIL:
```python
from PIL import Image

uploaded = Image.open('file.png')
# Use directly, or just as reference for colors/style
```
### Drawing from Scratch

When drawing graphics from scratch, use PIL ImageDraw primitives:

```python
from PIL import ImageDraw

draw = ImageDraw.Draw(frame)

# Circles/ovals
draw.ellipse([x1, y1, x2, y2], fill=(r, g, b), outline=(r, g, b), width=3)

# Stars, triangles, any polygon
points = [(x1, y1), (x2, y2), (x3, y3), ...]
draw.polygon(points, fill=(r, g, b), outline=(r, g, b), width=3)

# Lines
draw.line([(x1, y1), (x2, y2)], fill=(r, g, b), width=5)

# Rectangles
draw.rectangle([x1, y1, x2, y2], fill=(r, g, b), outline=(r, g, b), width=3)
```
**Don't use** emoji fonts (unreliable across platforms), and don't assume pre-packaged graphics exist in this skill.
### Making Graphics Look Good
Graphics should look polished and creative, not basic. Here's how:
**Use thicker lines** - Always set `width=2` or higher for outlines and lines. Thin lines (width=1) look choppy and amateurish.
**Add visual depth**:
- Use gradients for backgrounds (`create_gradient_background`)
- Layer multiple shapes for complexity (e.g., a star with a smaller star inside)
**Make shapes more interesting**:
- Don't just draw a plain circle - add highlights, rings, or patterns
- Stars can have glows (draw larger, semi-transparent versions behind)
- Combine multiple shapes (stars + sparkles, circles + rings)
**Pay attention to colors**:
- Use vibrant, complementary colors
- Add contrast (dark outlines on light shapes, light outlines on dark shapes)
- Consider the overall composition
**For complex shapes** (hearts, snowflakes, etc.):
- Use combinations of polygons and ellipses
- Calculate points carefully for symmetry
- Add details (a heart can have a highlight curve, snowflakes have intricate branches)
Be creative and detailed! A good Slack GIF should look polished, not like placeholder graphics.
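For the symmetric shapes mentioned above (stars, snowflakes), polygon points can be computed rather than hand-placed. A pure-math sketch, where `star_points` is a hypothetical helper and the center and radii are arbitrary:

```python
import math


def star_points(cx, cy, outer_r, inner_r, spikes=5):
    """Alternate outer/inner radius every half-spike to trace a star."""
    pts = []
    for i in range(spikes * 2):
        r = outer_r if i % 2 == 0 else inner_r
        angle = math.pi * i / spikes - math.pi / 2  # start at the top
        pts.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return pts


points = star_points(64, 64, 50, 20)  # pass to draw.polygon(points, ...)
```

Computing points guarantees the symmetry that hand-placed coordinates rarely achieve.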
## Available Utilities
### GIFBuilder (`core.gif_builder`)
Assembles frames and optimizes for Slack:
```python
builder = GIFBuilder(width=128, height=128, fps=10)
builder.add_frame(frame) # Add PIL Image
builder.add_frames(frames) # Add list of frames
builder.save('out.gif', num_colors=48, optimize_for_emoji=True, remove_duplicates=True)
```
### Validators (`core.validators`)
Check if GIF meets Slack requirements:
```python
from core.validators import validate_gif, is_slack_ready

# Detailed validation
passes, info = validate_gif('my.gif', is_emoji=True, verbose=True)

# Quick check
if is_slack_ready('my.gif'):
    print("Ready!")
```

### Easing Functions (`core.easing`)

Smooth motion instead of linear:
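Standard easing formulas illustrate the idea; the function names and signatures in `core.easing` itself may differ:

```python
def ease_in_quad(t):
    return t * t


def ease_out_quad(t):
    return t * (2 - t)


def ease_in_out_quad(t):
    return 2 * t * t if t < 0.5 else 1 - (-2 * t + 2) ** 2 / 2


# Horizontal position of an object sliding 480px over 12 frames:
# eased motion decelerates near the end instead of moving at constant speed.
frames = [round(ease_out_quad(i / 11) * 480) for i in range(12)]
```

Each function maps normalized time t in [0, 1] to normalized progress in [0, 1], so the same curve can drive position, scale, or opacity.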
object2_data={'emoji': '😂', 'size': 120},
flip_axis='horizontal'
)
# Vertical flip
frames = create_flip_animation(flip_axis='vertical')
# Quick flip for emoji GIFs
frames = create_quick_flip('👍', '👎')
```
### Morph / Transform
```python
from templates.morph import create_morph_animation, create_reaction_morph
# Crossfade morph
frames = create_morph_animation(
object1_data={'emoji': '😊', 'size': 100},
object2_data={'emoji': '😂', 'size': 100},
morph_type='crossfade'
)
# Scale morph (shrink while other grows)
frames = create_morph_animation(morph_type='scale')
# Spin morph (3D flip-like)
frames = create_morph_animation(morph_type='spin_morph')
```
### Move Effect
```python
from templates.move import create_move_animation
# Linear movement
frames = create_move_animation(
object_type='emoji',
object_data={'emoji': '🚀', 'size': 60},
start_pos=(50, 240),
end_pos=(430, 240),
motion_type='linear',
easing='ease_out'
)
# Arc movement (parabolic trajectory)
frames = create_move_animation(
object_type='emoji',
object_data={'emoji': '', 'size': 60},
start_pos=(50, 350),
end_pos=(430, 350),
motion_type='arc',
motion_params={'arc_height': 150}
)
# Circular movement
frames = create_move_animation(
object_type='emoji',
object_data={'emoji': '🌍', 'size': 50},
motion_type='circle',
motion_params={
'center': (240, 240),
'radius': 120,
'angle_range': 360 # full circle
}
)
# Wave movement
frames = create_move_animation(
motion_type='wave',
motion_params={
'wave_amplitude': 50,
'wave_frequency': 2
}
)
# Or use low-level easing functions
from core.easing import interpolate, calculate_arc_motion
for i in range(num_frames):
    t = i / (num_frames - 1)
    x = interpolate(start_x, end_x, t, easing='ease_out')
    # Or: x, y = calculate_arc_motion(start, end, height, t)
```
### Kaleidoscope Effect
```python
from templates.kaleidoscope import apply_kaleidoscope, create_kaleidoscope_animation
# Apply to a single frame
kaleido_frame = apply_kaleidoscope(frame, segments=8)
# Or create animated kaleidoscope
frames = create_kaleidoscope_animation(
base_frame=my_frame, # or None for demo pattern
num_frames=30,
segments=8,
rotation_speed=1.0
)
# Simple mirror effects (faster)
from templates.kaleidoscope import apply_simple_mirror
mirrored = apply_simple_mirror(frame, mode='quad') # 4-way mirror
# modes: 'horizontal', 'vertical', 'quad', 'radial'
```
**To compose primitives freely, follow these patterns:**
```python
# Example: Bounce + shake for impact
for i in range(num_frames):
    frame = create_blank_frame(480, 480, bg_color)

    # Bounce motion
    t_bounce = i / (num_frames - 1)
    y = interpolate(start_y, ground_y, t_bounce, 'bounce_out')

    # Add shake on impact (when y reaches ground)
    if y >= ground_y - 5:
        shake_x = math.sin(i * 2) * 10
        x = center_x + shake_x
    else:
        x = center_x

    draw_emoji(frame, '', (x, y), size=60)
    builder.add_frame(frame)
```
## Helper Utilities
These are optional helpers for common needs. **Use, modify, or replace these with custom implementations as needed.**
### GIF Builder (Assembly & Optimization)
```python
from core.gif_builder import GIFBuilder
# Create builder with your chosen settings
builder = GIFBuilder(width=480, height=480, fps=20)
# Add frames (however you created them)
for frame in my_frames:
    builder.add_frame(frame)

# Save with optimization
builder.save('output.gif',
             num_colors=128,
             optimize_for_emoji=False)
```
Key features:
- Automatic color quantization
- Duplicate frame removal
- Size warnings for Slack limits
- Emoji mode (aggressive optimization)
### Text Rendering
For small GIFs like emojis, text readability is challenging. A common solution involves adding outlines:
```python
from core.typography import draw_text_with_outline, TYPOGRAPHY_SCALE
# Text with outline (helps readability)
draw_text_with_outline(
frame, "BONK!",
position=(240, 100),
font_size=TYPOGRAPHY_SCALE['h1'], # 60px
text_color=(255, 68, 68),
outline_color=(0, 0, 0),
outline_width=4,
centered=True
)
```
For custom text rendering, PIL's `ImageDraw.text()` works well for larger GIFs.
### Color Management
Professional-looking GIFs often use cohesive color palettes:
```python
from core.color_palettes import get_palette
# Get a pre-made palette
palette = get_palette('vibrant') # or 'pastel', 'dark', 'neon', 'professional'
bg_color = palette['background']
text_color = palette['primary']
accent_color = palette['accent']
```
To work with colors directly, use RGB tuples - whatever works for the use case.
### Visual Effects
Optional effects for impact moments:
```python
from core.visual_effects import ParticleSystem, create_impact_flash, create_shockwave_rings
# Particle system
particles = ParticleSystem()
particles.emit_sparkles(x=240, y=200, count=15)
particles.emit_confetti(x=240, y=200, count=20)
# Update and render each frame
particles.update()
particles.render(frame)
# Flash effect
frame = create_impact_flash(frame, position=(240, 200), radius=100)
# Shockwave rings
frame = create_shockwave_rings(frame, position=(240, 200), radii=[30, 60, 90])
```
### Easing Functions
Smooth motion uses easing instead of linear interpolation:
```python
from core.easing import interpolate

# progress runs from 0.0 to 1.0 over the animation, e.g. i / (num_frames - 1)

# Object falling (accelerates)
y = interpolate(start=0, end=400, t=progress, easing='ease_in')
# Object landing (decelerates)
y = interpolate(start=0, end=400, t=progress, easing='ease_out')
# Bouncing
y = interpolate(start=0, end=400, t=progress, easing='bounce_out')
# Overshoot (elastic)
scale = interpolate(start=0.5, end=1.0, t=progress, easing='elastic_out')
```
Available easings: `linear`, `ease_in`, `ease_out`, `ease_in_out`, `bounce_out`, `elastic_out`, `back_out` (overshoot), and more in `core/easing.py`.
### Frame Composition
Basic drawing utilities if you need them:
```python
from core.frame_composer import (
    create_gradient_background,   # Gradient backgrounds
    draw_emoji_enhanced,          # Emoji with optional shadow
    draw_circle_with_shadow,      # Shapes with depth
    draw_star                     # 5-pointed stars
)

# Gradient background
frame = create_gradient_background(480, 480, top_color, bottom_color)

# Emoji with shadow
draw_emoji_enhanced(frame, '🎉', position=(200, 200), size=80, shadow=True)
```
## Animation Concepts
### Shake/Vibrate
Offset object position with oscillation:
- Use `math.sin()` or `math.cos()` with frame index
- Add small random variations for natural feel
- Apply to x and/or y position
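The steps above can be sketched as a small offset function. This is a minimal sketch: `shake_offset` and its frequencies are illustrative, not part of the toolkit.

```python
import math
import random

def shake_offset(frame_index: int, intensity: float = 10.0,
                 jitter: float = 0.0) -> tuple[float, float]:
    """Oscillating x/y offset; different frequencies avoid a repetitive look."""
    dx = math.sin(frame_index * 1.7) * intensity
    dy = math.cos(frame_index * 2.3) * intensity * 0.5
    if jitter:
        # Small random variation for a more natural feel
        dx += random.uniform(-jitter, jitter)
        dy += random.uniform(-jitter, jitter)
    return dx, dy

offsets = [shake_offset(i) for i in range(20)]
```

Add the returned offsets to the object's base position each frame.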
### Pulse/Heartbeat
Scale object size rhythmically:
- Use `math.sin(t * frequency * 2 * math.pi)` for smooth pulse
- For heartbeat: two quick pulses then pause (adjust sine wave)
- Scale between 0.8 and 1.2 of base size
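The sine-based scaling reduces to a few lines (sketch; the `pulse_scale` helper is hypothetical, not a toolkit function):

```python
import math

def pulse_scale(t: float, frequency: float = 1.0,
                lo: float = 0.8, hi: float = 1.2) -> float:
    """Scale factor oscillating smoothly between lo and hi; t runs 0..1."""
    wave = (math.sin(t * frequency * 2 * math.pi) + 1) / 2  # remap sine to 0..1
    return lo + (hi - lo) * wave

scales = [pulse_scale(i / 19) for i in range(20)]
```

Multiply the object's base size by the returned factor each frame.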
### Bounce
Object falls and bounces:
- Use `interpolate()` with `easing='bounce_out'` for landing
- Use `easing='ease_in'` for falling (accelerating)
- Apply gravity by increasing y velocity each frame
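The gravity-integration approach can be sketched like this (constants and the `bounce_positions` helper are illustrative):

```python
def bounce_positions(num_frames: int = 30, ground_y: float = 400.0,
                     gravity: float = 2.0, restitution: float = 0.6) -> list[float]:
    """Simulate a ball falling and bouncing by integrating velocity each frame."""
    y, vy = 0.0, 0.0
    ys = []
    for _ in range(num_frames):
        vy += gravity          # gravity accelerates the fall
        y += vy
        if y >= ground_y:      # hit the ground: reflect with energy loss
            y = ground_y
            vy = -vy * restitution
        ys.append(y)
    return ys

ys = bounce_positions()
```

For a purely eased (non-simulated) look, `interpolate(..., easing='bounce_out')` gives a similar result in one call.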
### Spin/Rotate
Rotate object around center:
- PIL: `image.rotate(angle, resample=Image.BICUBIC)`
- For wobble: use sine wave for angle instead of linear
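A minimal sketch of linear vs. wobble angles (helper names are illustrative):

```python
import math

def spin_angle(frame_index: int, num_frames: int, rotations: float = 2.0) -> float:
    """Linear rotation angle in degrees for full spins."""
    t = frame_index / max(num_frames - 1, 1)
    return 360.0 * rotations * t

def wobble_angle(frame_index: int, num_frames: int,
                 max_deg: float = 15.0, cycles: float = 3.0) -> float:
    """Back-and-forth rotation driven by a sine wave instead of a linear ramp."""
    t = frame_index / max(num_frames - 1, 1)
    return max_deg * math.sin(t * cycles * 2 * math.pi)
```

Apply the angle with `image.rotate(angle, resample=Image.BICUBIC)` as noted above.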
### Fade In/Out
Gradually appear or disappear:
- Create RGBA image, adjust alpha channel
- Or use `Image.blend(image1, image2, alpha)`
- Fade in: alpha from 0 to 1
- Fade out: alpha from 1 to 0
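The alpha ramp itself is just a linear ramp over the frame count (sketch; `fade_alpha` is illustrative):

```python
def fade_alpha(frame_index: int, num_frames: int, fade_in: bool = True) -> float:
    """Opacity 0..1 ramping up (fade in) or down (fade out)."""
    t = frame_index / max(num_frames - 1, 1)
    return t if fade_in else 1.0 - t
```

Feed the result into `Image.blend(background, foreground, alpha)` or an RGBA alpha channel per frame.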
### Slide
Move object from off-screen to position:
- Start position: outside frame bounds
- End position: target location
- Use `interpolate()` with `easing='ease_out'` for smooth stop
- For overshoot: use `easing='back_out'`
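A sketch using quadratic ease-out and the standard back-out overshoot curve (reimplemented locally so the snippet is self-contained; in practice use `interpolate()` from `core.easing`):

```python
def ease_out_quad(t: float) -> float:
    return 1 - (1 - t) * (1 - t)

def ease_back_out(t: float) -> float:
    # Standard "back" overshoot constants
    c1, c3 = 1.70158, 2.70158
    return 1 + c3 * (t - 1) ** 3 + c1 * (t - 1) ** 2

def slide_x(t: float, start: float = -100.0, end: float = 240.0,
            overshoot: bool = False) -> float:
    """Horizontal position sliding in from off-screen; t runs 0..1."""
    eased = ease_back_out(t) if overshoot else ease_out_quad(t)
    return start + (end - start) * eased
```

With `overshoot=True` the object briefly passes the target before settling.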
### Zoom
Scale and position for zoom effect:
- Zoom in: scale from 0.1 to 2.0, crop center
- Zoom out: scale from 2.0 to 1.0
- Can add motion blur for drama (PIL filter)
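Computing the centered crop box for a zoom can be sketched as follows (illustrative helper; the crop is then resized back to the full frame size with PIL):

```python
def zoom_crop_box(t: float, width: int, height: int,
                  start_scale: float = 1.0, end_scale: float = 2.0) -> tuple[int, int, int, int]:
    """Centered crop box that shrinks as the zoom factor grows; t runs 0..1."""
    scale = start_scale + (end_scale - start_scale) * t
    crop_w = max(1, int(width / scale))
    crop_h = max(1, int(height / scale))
    left = (width - crop_w) // 2
    top = (height - crop_h) // 2
    return (left, top, left + crop_w, top + crop_h)
```

Per frame: `frame.crop(zoom_crop_box(t, w, h)).resize((w, h))` produces the zoom-in.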
### Explode/Particle Burst
Create particles radiating outward:
- Generate particles with random angles and velocities
- Update each particle: `x += vx`, `y += vy`
- Add gravity: `vy += gravity_constant`
- Fade out particles over time (reduce alpha)
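The particle loop above, sketched with plain dicts (illustrative helpers, not the toolkit's `ParticleSystem` API):

```python
import math
import random

def make_particles(origin: tuple[float, float], count: int = 30,
                   speed: float = 8.0) -> list[dict]:
    """Particles radiating outward with random angles and speeds."""
    particles = []
    for _ in range(count):
        angle = random.uniform(0, 2 * math.pi)
        v = random.uniform(0.3, 1.0) * speed
        particles.append({'x': origin[0], 'y': origin[1],
                          'vx': math.cos(angle) * v, 'vy': math.sin(angle) * v,
                          'alpha': 1.0})
    return particles

def step_particles(particles: list[dict], gravity: float = 0.5,
                   fade: float = 0.05) -> None:
    """Advance one frame: move, apply gravity, fade out."""
    for p in particles:
        p['x'] += p['vx']
        p['y'] += p['vy']
        p['vy'] += gravity
        p['alpha'] = max(0.0, p['alpha'] - fade)

parts = make_particles((240, 240))
for _ in range(10):
    step_particles(parts)
```

Render each particle as a small circle whose color is blended toward the background by `alpha`.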
## Optimization Strategies

When your GIF is too large:

**For Message GIFs (>2MB):**
1. Reduce frames (lower FPS or shorter duration)
2. Reduce colors (128 → 64 colors)
3. Reduce dimensions (480x480 → 320x320)
4. Enable duplicate frame removal

**For Emoji GIFs (>64KB) - be aggressive:**
1. Limit to 10-12 frames total
2. Use 32-40 colors maximum
3. Avoid gradients (solid colors compress better)
4. Simplify design (fewer elements)
5. Use `optimize_for_emoji=True` in save method

```python
# Maximum optimization for emoji
builder.save(
    'emoji.gif',
    num_colors=48,
    optimize_for_emoji=True,
    remove_duplicates=True
)
```

## Example Composition Patterns

### Simple Reaction (Pulsing)

```python
builder = GIFBuilder(128, 128, 10)

for i in range(12):
    frame = Image.new('RGB', (128, 128), (240, 248, 255))

    # Pulsing scale
    scale = 1.0 + math.sin(i * 0.5) * 0.15
    size = int(60 * scale)
    draw_emoji_enhanced(frame, '😱', position=(64 - size // 2, 64 - size // 2),
                        size=size, shadow=False)
    builder.add_frame(frame)

builder.save('reaction.gif', num_colors=40, optimize_for_emoji=True)

# Validate
from core.validators import check_slack_size
check_slack_size('reaction.gif', is_emoji=True)
```
### Action with Impact (Bounce + Flash)
```python
builder = GIFBuilder(480, 480, 20)

# Phase 1: Object falls
for i in range(15):
    frame = create_gradient_background(480, 480, (240, 248, 255), (200, 230, 255))
    t = i / 14
    y = interpolate(0, 350, t, 'ease_in')
    draw_emoji_enhanced(frame, '', position=(220, int(y)), size=80)
    builder.add_frame(frame)

# Phase 2: Impact + flash
for i in range(8):
    frame = create_gradient_background(480, 480, (240, 248, 255), (200, 230, 255))

    # Flash on first frames
    if i < 3:
        frame = create_impact_flash(frame, (240, 350), radius=120, intensity=0.6)

    draw_emoji_enhanced(frame, '', position=(220, 350), size=80)

    # Text appears
    if i > 2:
        draw_text_with_outline(frame, "GOAL!", position=(240, 150),
                               font_size=60, text_color=(255, 68, 68),
                               outline_color=(0, 0, 0), outline_width=4, centered=True)

    builder.add_frame(frame)

builder.save('goal.gif', num_colors=128)
```
### Combining Primitives (Move + Shake)
```python
from templates.shake import create_shake_animation

# Create shake animation
shake_frames = create_shake_animation(
    object_type='emoji',
    object_data={'emoji': '😰', 'size': 70},
    num_frames=20,
    shake_intensity=12
)

# Create moving element that triggers the shake
builder = GIFBuilder(480, 480, 20)

for i in range(40):
    t = i / 39
    if i < 20:
        # Before trigger - use blank frame with moving object
        frame = create_blank_frame(480, 480, (255, 255, 255))
        x = interpolate(50, 300, t * 2, 'linear')
        draw_emoji_enhanced(frame, '🚗', position=(int(x), 300), size=60)
        draw_emoji_enhanced(frame, '😰', position=(350, 200), size=70)
    else:
        # After trigger - use shake frame
        frame = shake_frames[i - 20]
        # Add the car in final position
        draw_emoji_enhanced(frame, '🚗', position=(300, 300), size=60)

    builder.add_frame(frame)

builder.save('scare.gif')
```
## Philosophy
This toolkit provides building blocks, not rigid recipes. To work with a GIF request:

1. **Understand the creative vision** - What should happen? What's the mood?
2. **Design the animation** - Break it into phases (anticipation, action, reaction)
3. **Apply primitives as needed** - Shake, bounce, move, effects - mix freely
4. **Validate constraints** - Check file size, especially for emoji GIFs
5. **Iterate if needed** - Reduce frames/colors if over size limits

**The goal is creative freedom within Slack's technical constraints.**

**Note on user uploads**: The toolkit doesn't include pre-built graphics, but if a user uploads an image, use PIL to load and work with it - interpret from the request whether it should be used directly or just as inspiration.

Be creative! Combine concepts (bouncing + rotating, pulsing + sliding, etc.) and use PIL's full capabilities.
## Dependencies
To use this toolkit, install these dependencies only if they aren't already present:
```bash
pip install pillow imageio numpy
```
@@ -101,25 +101,25 @@ def ease_in_out_elastic(t: float) -> float:
# Convenience mapping
EASING_FUNCTIONS = {
    'linear': linear,
    'ease_in': ease_in_quad,
    'ease_out': ease_out_quad,
    'ease_in_out': ease_in_out_quad,
    'bounce_in': ease_in_bounce,
    'bounce_out': ease_out_bounce,
    'bounce': ease_in_out_bounce,
    'elastic_in': ease_in_elastic,
    'elastic_out': ease_out_elastic,
    'elastic': ease_in_out_elastic,
}
def get_easing(name: str = 'linear'):
    """Get easing function by name."""
    return EASING_FUNCTIONS.get(name, linear)


def interpolate(start: float, end: float, t: float, easing: str = 'linear') -> float:
    """
    Interpolate between two values with easing.
@@ -160,8 +160,9 @@ def ease_back_in_out(t: float) -> float:
    return (pow(2 * t - 2, 2) * ((c2 + 1) * (t * 2 - 2) + c2) + 2) / 2


def apply_squash_stretch(base_scale: tuple[float, float], intensity: float,
                         direction: str = 'vertical') -> tuple[float, float]:
    """
    Calculate squash and stretch scales for more dynamic animation.
@@ -175,24 +176,25 @@ def apply_squash_stretch(base_scale: tuple[float, float], intensity: float,
""" """
width_scale, height_scale = base_scale width_scale, height_scale = base_scale
if direction == 'vertical': if direction == "vertical":
# Compress vertically, expand horizontally (preserve volume) # Compress vertically, expand horizontally (preserve volume)
height_scale *= (1 - intensity * 0.5) height_scale *= 1 - intensity * 0.5
width_scale *= (1 + intensity * 0.5) width_scale *= 1 + intensity * 0.5
elif direction == 'horizontal': elif direction == "horizontal":
# Compress horizontally, expand vertically # Compress horizontally, expand vertically
width_scale *= (1 - intensity * 0.5) width_scale *= 1 - intensity * 0.5
height_scale *= (1 + intensity * 0.5) height_scale *= 1 + intensity * 0.5
elif direction == 'both': elif direction == "both":
# General squash (both dimensions) # General squash (both dimensions)
width_scale *= (1 - intensity * 0.3) width_scale *= 1 - intensity * 0.3
height_scale *= (1 - intensity * 0.3) height_scale *= 1 - intensity * 0.3
return (width_scale, height_scale) return (width_scale, height_scale)
def calculate_arc_motion(start: tuple[float, float], end: tuple[float, float], def calculate_arc_motion(
height: float, t: float) -> tuple[float, float]: start: tuple[float, float], end: tuple[float, float], height: float, t: float
) -> tuple[float, float]:
""" """
Calculate position along a parabolic arc (natural motion path). Calculate position along a parabolic arc (natural motion path).
@@ -221,10 +223,12 @@ def calculate_arc_motion(start: tuple[float, float], end: tuple[float, float],
# Add new easing functions to the convenience mapping
EASING_FUNCTIONS.update({
    'back_in': ease_back_in,
    'back_out': ease_back_out,
    'back_in_out': ease_back_in_out,
    'anticipate': ease_back_in,  # Alias
    'overshoot': ease_back_out,  # Alias
})
@@ -6,12 +6,15 @@ Provides functions for drawing shapes, text, emojis, and compositing elements
together to create animation frames.
"""

from PIL import Image, ImageDraw, ImageFont
import numpy as np
from typing import Optional
def create_blank_frame(width: int, height: int, color: tuple[int, int, int] = (255, 255, 255)) -> Image.Image:
    """
    Create a blank frame with solid color background.
@@ -23,13 +26,17 @@ def create_blank_frame(width: int, height: int, color: tuple[int, int, int] = (2
    Returns:
        PIL Image
    """
    return Image.new('RGB', (width, height), color)


def draw_circle(frame: Image.Image, center: tuple[int, int], radius: int,
                fill_color: Optional[tuple[int, int, int]] = None,
                outline_color: Optional[tuple[int, int, int]] = None,
                outline_width: int = 1) -> Image.Image:
    """
    Draw a circle on a frame.
@@ -51,52 +58,13 @@ def draw_circle(frame: Image.Image, center: tuple[int, int], radius: int,
    return frame


def draw_rectangle(frame: Image.Image, top_left: tuple[int, int], bottom_right: tuple[int, int],
                   fill_color: Optional[tuple[int, int, int]] = None,
                   outline_color: Optional[tuple[int, int, int]] = None,
                   outline_width: int = 1) -> Image.Image:
    """
    Draw a rectangle on a frame.

    Args:
        frame: PIL Image to draw on
        top_left: (x, y) top-left corner
        bottom_right: (x, y) bottom-right corner
        fill_color: RGB fill color (None for no fill)
        outline_color: RGB outline color (None for no outline)
        outline_width: Outline width in pixels

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)
    draw.rectangle([top_left, bottom_right], fill=fill_color, outline=outline_color, width=outline_width)
    return frame
def draw_line(frame: Image.Image, start: tuple[int, int], end: tuple[int, int],
              color: tuple[int, int, int] = (0, 0, 0), width: int = 2) -> Image.Image:
    """
    Draw a line on a frame.

    Args:
        frame: PIL Image to draw on
        start: (x, y) start position
        end: (x, y) end position
        color: RGB line color
        width: Line width in pixels

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)
    draw.line([start, end], fill=color, width=width)
    return frame


def draw_text(frame: Image.Image, text: str, position: tuple[int, int],
              font_size: int = 40, color: tuple[int, int, int] = (0, 0, 0),
              centered: bool = False) -> Image.Image:
    """
    Draw text on a frame.
@@ -104,7 +72,6 @@ def draw_text(frame: Image.Image, text: str, position: tuple[int, int],
        frame: PIL Image to draw on
        text: Text to draw
        position: (x, y) position (top-left unless centered=True)
        font_size: Font size in pixels
        color: RGB text color
        centered: If True, center text at position
@@ -113,11 +80,9 @@ def draw_text(frame: Image.Image, text: str, position: tuple[int, int],
""" """
draw = ImageDraw.Draw(frame) draw = ImageDraw.Draw(frame)
# Try to use default font, fall back to basic if not available # Uses Pillow's default font.
try: # If the font should be changed for the emoji, add additional logic here.
font = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", font_size) font = ImageFont.load_default()
except:
font = ImageFont.load_default()
if centered: if centered:
bbox = draw.textbbox((0, 0), text, font=font) bbox = draw.textbbox((0, 0), text, font=font)
@@ -131,110 +96,12 @@ def draw_text(frame: Image.Image, text: str, position: tuple[int, int],
    return frame


def draw_emoji(frame: Image.Image, emoji: str, position: tuple[int, int], size: int = 60) -> Image.Image:
    """
    Draw emoji text on a frame (requires system emoji support).

    Args:
        frame: PIL Image to draw on
        emoji: Emoji character(s)
        position: (x, y) position
        size: Emoji size in pixels

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)

    # Use Apple Color Emoji font on macOS
    try:
        font = ImageFont.truetype("/System/Library/Fonts/Apple Color Emoji.ttc", size)
    except OSError:
        # Fallback to text-based emoji
        font = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", size)

    draw.text(position, emoji, font=font, embedded_color=True)
    return frame
def composite_layers(base: Image.Image, overlay: Image.Image,
                     position: tuple[int, int] = (0, 0), alpha: float = 1.0) -> Image.Image:
    """
    Composite one image on top of another.

    Args:
        base: Base image
        overlay: Image to overlay on top
        position: (x, y) position to place overlay
        alpha: Opacity of overlay (0.0 = transparent, 1.0 = opaque)

    Returns:
        Composite image
    """
    # Convert to RGBA for transparency support
    base_rgba = base.convert('RGBA')
    overlay_rgba = overlay.convert('RGBA')

    # Apply alpha
    if alpha < 1.0:
        overlay_rgba = overlay_rgba.copy()
        overlay_rgba.putalpha(int(255 * alpha))

    # Paste overlay onto base
    base_rgba.paste(overlay_rgba, position, overlay_rgba)

    # Convert back to RGB
    return base_rgba.convert('RGB')
def draw_stick_figure(frame: Image.Image, position: tuple[int, int], scale: float = 1.0,
                      color: tuple[int, int, int] = (0, 0, 0), line_width: int = 3) -> Image.Image:
    """
    Draw a simple stick figure.

    Args:
        frame: PIL Image to draw on
        position: (x, y) center position of head
        scale: Size multiplier
        color: RGB line color
        line_width: Line width in pixels

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)
    x, y = position

    # Scale dimensions
    head_radius = int(15 * scale)
    body_length = int(40 * scale)
    arm_length = int(25 * scale)
    leg_length = int(35 * scale)
    leg_spread = int(15 * scale)

    # Head
    draw.ellipse([x - head_radius, y - head_radius, x + head_radius, y + head_radius],
                 outline=color, width=line_width)

    # Body
    body_start = y + head_radius
    body_end = body_start + body_length
    draw.line([(x, body_start), (x, body_end)], fill=color, width=line_width)

    # Arms
    arm_y = body_start + int(body_length * 0.3)
    draw.line([(x - arm_length, arm_y), (x + arm_length, arm_y)], fill=color, width=line_width)

    # Legs
    draw.line([(x, body_end), (x - leg_spread, body_end + leg_length)], fill=color, width=line_width)
    draw.line([(x, body_end), (x + leg_spread, body_end + leg_length)], fill=color, width=line_width)

    return frame
def create_gradient_background(width: int, height: int,
                               top_color: tuple[int, int, int],
                               bottom_color: tuple[int, int, int]) -> Image.Image:
    """
    Create a vertical gradient background.
@@ -247,7 +114,7 @@ def create_gradient_background(width: int, height: int,
    Returns:
        PIL Image with gradient
    """
    frame = Image.new('RGB', (width, height))
    draw = ImageDraw.Draw(frame)

    # Calculate color step for each row
@@ -267,175 +134,14 @@ def create_gradient_background(width: int, height: int,
    return frame


def draw_emoji_enhanced(frame: Image.Image, emoji: str, position: tuple[int, int],
                        size: int = 60, shadow: bool = True,
                        shadow_offset: tuple[int, int] = (2, 2)) -> Image.Image:
    """
    Draw emoji with optional shadow for better visual quality.

    Args:
        frame: PIL Image to draw on
        emoji: Emoji character(s)
        position: (x, y) position
        size: Emoji size in pixels (minimum 12)
        shadow: Whether to add drop shadow
        shadow_offset: Shadow offset

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)

    # Ensure minimum size to avoid font rendering errors
    size = max(12, size)

    # Use Apple Color Emoji font on macOS
    try:
        font = ImageFont.truetype("/System/Library/Fonts/Apple Color Emoji.ttc", size)
    except OSError:
        # Fallback to text-based emoji
        try:
            font = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", size)
        except OSError:
            font = ImageFont.load_default()

    # Draw shadow first if enabled
    if shadow and size >= 20:  # Only draw shadow for larger emojis
        shadow_pos = (position[0] + shadow_offset[0], position[1] + shadow_offset[1])
        # Draw semi-transparent shadow (simulated by drawing multiple times)
        for offset in range(1, 3):
            try:
                draw.text((shadow_pos[0] + offset, shadow_pos[1] + offset),
                          emoji, font=font, embedded_color=True, fill=(0, 0, 0, 100))
            except (OSError, ValueError):
                pass  # Skip shadow if it fails

    # Draw main emoji
    try:
        draw.text(position, emoji, font=font, embedded_color=True)
    except (OSError, ValueError):
        # Fallback to basic drawing if embedded color fails
        draw.text(position, emoji, font=font, fill=(0, 0, 0))

    return frame
def draw_circle_with_shadow(frame: Image.Image, center: tuple[int, int], radius: int,
                            fill_color: tuple[int, int, int],
                            shadow_offset: tuple[int, int] = (3, 3),
                            shadow_color: tuple[int, int, int] = (0, 0, 0)) -> Image.Image:
    """
    Draw a circle with drop shadow.

    Args:
        frame: PIL Image to draw on
        center: (x, y) center position
        radius: Circle radius
        fill_color: RGB fill color
        shadow_offset: (x, y) shadow offset
        shadow_color: RGB shadow color

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)
    x, y = center

    # Draw shadow
    shadow_center = (x + shadow_offset[0], y + shadow_offset[1])
    shadow_bbox = [
        shadow_center[0] - radius,
        shadow_center[1] - radius,
        shadow_center[0] + radius,
        shadow_center[1] + radius
    ]
    draw.ellipse(shadow_bbox, fill=shadow_color)

    # Draw main circle
    bbox = [x - radius, y - radius, x + radius, y + radius]
    draw.ellipse(bbox, fill=fill_color)

    return frame
def draw_rounded_rectangle(frame: Image.Image, top_left: tuple[int, int],
                           bottom_right: tuple[int, int], radius: int,
                           fill_color: Optional[tuple[int, int, int]] = None,
                           outline_color: Optional[tuple[int, int, int]] = None,
                           outline_width: int = 1) -> Image.Image:
    """
    Draw a rectangle with rounded corners.

    Args:
        frame: PIL Image to draw on
        top_left: (x, y) top-left corner
        bottom_right: (x, y) bottom-right corner
        radius: Corner radius
        fill_color: RGB fill color (None for no fill)
        outline_color: RGB outline color (None for no outline)
        outline_width: Outline width

    Returns:
        Modified frame
    """
    draw = ImageDraw.Draw(frame)
    x1, y1 = top_left
    x2, y2 = bottom_right

    # Draw rounded rectangle using PIL's built-in method
    draw.rounded_rectangle([x1, y1, x2, y2], radius=radius,
                           fill=fill_color, outline=outline_color, width=outline_width)
    return frame
def add_vignette(frame: Image.Image, strength: float = 0.5) -> Image.Image:
    """
    Add a vignette effect (darkened edges) to frame.

    Args:
        frame: PIL Image
        strength: Vignette strength (0.0-1.0)

    Returns:
        Frame with vignette
    """
    width, height = frame.size

    # Create radial gradient mask
    center_x, center_y = width // 2, height // 2
    max_dist = ((width / 2) ** 2 + (height / 2) ** 2) ** 0.5

    # Create overlay
    overlay = Image.new('RGB', (width, height), (0, 0, 0))
    pixels = overlay.load()

    for y in range(height):
        for x in range(width):
            # Calculate distance from center
            dx = x - center_x
            dy = y - center_y
            dist = (dx ** 2 + dy ** 2) ** 0.5

            # Calculate vignette value
            vignette = min(1, (dist / max_dist) * strength)
            value = int(255 * (1 - vignette))
            pixels[x, y] = (value, value, value)

    # Blend with original using multiply
    frame_array = np.array(frame, dtype=np.float32) / 255
    overlay_array = np.array(overlay, dtype=np.float32) / 255
    result = frame_array * overlay_array
    result = (result * 255).astype(np.uint8)

    return Image.fromarray(result)
def draw_star(frame: Image.Image, center: tuple[int, int], size: int,
              fill_color: tuple[int, int, int],
              outline_color: Optional[tuple[int, int, int]] = None,
              outline_width: int = 1) -> Image.Image:
    """
    Draw a 5-pointed star.
@@ -451,6 +157,7 @@ def draw_star(frame: Image.Image, center: tuple[int, int], size: int,
        Modified frame
    """
    import math

    draw = ImageDraw.Draw(frame)
    x, y = center
@@ -8,9 +8,10 @@ generated frames, with automatic optimization for Slack's requirements.
from pathlib import Path
from typing import Optional

import imageio.v3 as imageio
from PIL import Image
import numpy as np
class GIFBuilder: class GIFBuilder:
@@ -38,12 +39,14 @@ class GIFBuilder:
frame: Frame as numpy array or PIL Image (will be converted to RGB) frame: Frame as numpy array or PIL Image (will be converted to RGB)
""" """
if isinstance(frame, Image.Image): if isinstance(frame, Image.Image):
frame = np.array(frame.convert('RGB')) frame = np.array(frame.convert("RGB"))
# Ensure frame is correct size # Ensure frame is correct size
if frame.shape[:2] != (self.height, self.width): if frame.shape[:2] != (self.height, self.width):
pil_frame = Image.fromarray(frame) pil_frame = Image.fromarray(frame)
pil_frame = pil_frame.resize((self.width, self.height), Image.Resampling.LANCZOS) pil_frame = pil_frame.resize(
(self.width, self.height), Image.Resampling.LANCZOS
)
frame = np.array(pil_frame) frame = np.array(pil_frame)
self.frames.append(frame) self.frames.append(frame)
@@ -53,7 +56,9 @@ class GIFBuilder:
for frame in frames: for frame in frames:
self.add_frame(frame) self.add_frame(frame)
def optimize_colors(self, num_colors: int = 128, use_global_palette: bool = True) -> list[np.ndarray]: def optimize_colors(
self, num_colors: int = 128, use_global_palette: bool = True
) -> list[np.ndarray]:
""" """
Reduce colors in all frames using quantization. Reduce colors in all frames using quantization.
@@ -70,12 +75,16 @@ class GIFBuilder:
# Create a global palette from all frames # Create a global palette from all frames
# Sample frames to build palette # Sample frames to build palette
sample_size = min(5, len(self.frames)) sample_size = min(5, len(self.frames))
sample_indices = [int(i * len(self.frames) / sample_size) for i in range(sample_size)] sample_indices = [
int(i * len(self.frames) / sample_size) for i in range(sample_size)
]
sample_frames = [self.frames[i] for i in sample_indices] sample_frames = [self.frames[i] for i in sample_indices]
# Combine sample frames into a single image for palette generation # Combine sample frames into a single image for palette generation
# Flatten each frame to get all pixels, then stack them # Flatten each frame to get all pixels, then stack them
all_pixels = np.vstack([f.reshape(-1, 3) for f in sample_frames]) # (total_pixels, 3) all_pixels = np.vstack(
[f.reshape(-1, 3) for f in sample_frames]
) # (total_pixels, 3)
# Create a properly-shaped RGB image from the pixel data # Create a properly-shaped RGB image from the pixel data
# We'll make a roughly square image from all the pixels # We'll make a roughly square image from all the pixels
@@ -90,8 +99,10 @@ class GIFBuilder:
all_pixels = np.vstack([all_pixels, padding]) all_pixels = np.vstack([all_pixels, padding])
# Reshape to proper RGB image format (H, W, 3) # Reshape to proper RGB image format (H, W, 3)
img_array = all_pixels[:pixels_needed].reshape(height, width, 3).astype(np.uint8) img_array = (
combined_img = Image.fromarray(img_array, mode='RGB') all_pixels[:pixels_needed].reshape(height, width, 3).astype(np.uint8)
)
combined_img = Image.fromarray(img_array, mode="RGB")
# Generate global palette # Generate global palette
global_palette = combined_img.quantize(colors=num_colors, method=2) global_palette = combined_img.quantize(colors=num_colors, method=2)
@@ -100,22 +111,23 @@ class GIFBuilder:
for frame in self.frames: for frame in self.frames:
pil_frame = Image.fromarray(frame) pil_frame = Image.fromarray(frame)
quantized = pil_frame.quantize(palette=global_palette, dither=1) quantized = pil_frame.quantize(palette=global_palette, dither=1)
optimized.append(np.array(quantized.convert('RGB'))) optimized.append(np.array(quantized.convert("RGB")))
else: else:
# Use per-frame quantization # Use per-frame quantization
for frame in self.frames: for frame in self.frames:
pil_frame = Image.fromarray(frame) pil_frame = Image.fromarray(frame)
quantized = pil_frame.quantize(colors=num_colors, method=2, dither=1) quantized = pil_frame.quantize(colors=num_colors, method=2, dither=1)
optimized.append(np.array(quantized.convert('RGB'))) optimized.append(np.array(quantized.convert("RGB")))
return optimized return optimized
def deduplicate_frames(self, threshold: float = 0.995) -> int: def deduplicate_frames(self, threshold: float = 0.9995) -> int:
""" """
Remove duplicate or near-duplicate consecutive frames. Remove duplicate or near-duplicate consecutive frames.
Args: Args:
threshold: Similarity threshold (0.0-1.0). Higher = more strict (0.995 = very similar). threshold: Similarity threshold (0.0-1.0). Higher = more strict (0.9995 = nearly identical).
Use 0.9995+ to preserve subtle animations, 0.98 for aggressive removal.
Returns: Returns:
Number of frames removed Number of frames removed
@@ -136,7 +148,7 @@ class GIFBuilder:
similarity = 1.0 - (np.mean(diff) / 255.0) similarity = 1.0 - (np.mean(diff) / 255.0)
# Keep frame if sufficiently different # Keep frame if sufficiently different
# High threshold (0.995) means only remove truly identical frames # High threshold (0.9995+) means only remove nearly identical frames
if similarity < threshold: if similarity < threshold:
deduplicated.append(self.frames[i]) deduplicated.append(self.frames[i])
else: else:
@@ -145,16 +157,21 @@ class GIFBuilder:
self.frames = deduplicated self.frames = deduplicated
return removed_count return removed_count
def save(self, output_path: str | Path, num_colors: int = 128, def save(
optimize_for_emoji: bool = False, remove_duplicates: bool = True) -> dict: self,
output_path: str | Path,
num_colors: int = 128,
optimize_for_emoji: bool = False,
remove_duplicates: bool = False,
) -> dict:
""" """
Save frames as optimized GIF for Slack. Save frames as optimized GIF for Slack.
Args: Args:
output_path: Where to save the GIF output_path: Where to save the GIF
num_colors: Number of colors to use (fewer = smaller file) num_colors: Number of colors to use (fewer = smaller file)
optimize_for_emoji: If True, optimize for <64KB emoji size optimize_for_emoji: If True, optimize for emoji size (128x128, fewer colors)
remove_duplicates: Remove duplicate consecutive frames remove_duplicates: If True, remove duplicate consecutive frames (opt-in)
Returns: Returns:
Dictionary with file info (path, size, dimensions, frame_count) Dictionary with file info (path, size, dimensions, frame_count)
@@ -163,18 +180,21 @@ class GIFBuilder:
raise ValueError("No frames to save. Add frames with add_frame() first.") raise ValueError("No frames to save. Add frames with add_frame() first.")
output_path = Path(output_path) output_path = Path(output_path)
original_frame_count = len(self.frames)
# Remove duplicate frames to reduce file size # Remove duplicate frames to reduce file size
if remove_duplicates: if remove_duplicates:
removed = self.deduplicate_frames(threshold=0.98) removed = self.deduplicate_frames(threshold=0.9995)
if removed > 0: if removed > 0:
print(f" Removed {removed} duplicate frames") print(
f" Removed {removed} nearly identical frames (preserved subtle animations)"
)
# Optimize for emoji if requested # Optimize for emoji if requested
if optimize_for_emoji: if optimize_for_emoji:
if self.width > 128 or self.height > 128: if self.width > 128 or self.height > 128:
print(f" Resizing from {self.width}x{self.height} to 128x128 for emoji") print(
f" Resizing from {self.width}x{self.height} to 128x128 for emoji"
)
self.width = 128 self.width = 128
self.height = 128 self.height = 128
# Resize all frames # Resize all frames
@@ -188,10 +208,14 @@ class GIFBuilder:
# More aggressive FPS reduction for emoji # More aggressive FPS reduction for emoji
if len(self.frames) > 12: if len(self.frames) > 12:
print(f" Reducing frames from {len(self.frames)} to ~12 for emoji size") print(
f" Reducing frames from {len(self.frames)} to ~12 for emoji size"
)
# Keep every nth frame to get close to 12 frames # Keep every nth frame to get close to 12 frames
keep_every = max(1, len(self.frames) // 12) keep_every = max(1, len(self.frames) // 12)
self.frames = [self.frames[i] for i in range(0, len(self.frames), keep_every)] self.frames = [
self.frames[i] for i in range(0, len(self.frames), keep_every)
]
# Optimize colors with global palette # Optimize colors with global palette
optimized_frames = self.optimize_colors(num_colors, use_global_palette=True) optimized_frames = self.optimize_colors(num_colors, use_global_palette=True)
@@ -204,7 +228,7 @@ class GIFBuilder:
output_path, output_path,
optimized_frames, optimized_frames,
duration=frame_duration, duration=frame_duration,
loop=0 # Infinite loop loop=0, # Infinite loop
) )
# Get file info # Get file info
@@ -212,14 +236,14 @@ class GIFBuilder:
file_size_mb = file_size_kb / 1024 file_size_mb = file_size_kb / 1024
info = { info = {
'path': str(output_path), "path": str(output_path),
'size_kb': file_size_kb, "size_kb": file_size_kb,
'size_mb': file_size_mb, "size_mb": file_size_mb,
'dimensions': f'{self.width}x{self.height}', "dimensions": f"{self.width}x{self.height}",
'frame_count': len(optimized_frames), "frame_count": len(optimized_frames),
'fps': self.fps, "fps": self.fps,
'duration_seconds': len(optimized_frames) / self.fps, "duration_seconds": len(optimized_frames) / self.fps,
'colors': num_colors "colors": num_colors,
} }
# Print info # Print info
@@ -231,13 +255,12 @@ class GIFBuilder:
print(f" Duration: {info['duration_seconds']:.1f}s") print(f" Duration: {info['duration_seconds']:.1f}s")
print(f" Colors: {num_colors}") print(f" Colors: {num_colors}")
# Warnings # Size info
if optimize_for_emoji and file_size_kb > 64: if optimize_for_emoji:
print(f"\n⚠️ WARNING: Emoji file size ({file_size_kb:.1f} KB) exceeds 64 KB limit") print(f" Optimized for emoji (128x128, reduced colors)")
print(" Try: fewer frames, fewer colors, or simpler design") if file_size_mb > 1.0:
elif not optimize_for_emoji and file_size_kb > 2048: print(f"\n Note: Large file size ({file_size_kb:.1f} KB)")
print(f"\n⚠️ WARNING: File size ({file_size_kb:.1f} KB) is large for Slack") print(" Consider: fewer frames, smaller dimensions, or fewer colors")
print(" Try: fewer frames, smaller dimensions, or fewer colors")
return info return info

View File

@@ -8,146 +8,36 @@ These validators help ensure your GIFs meet Slack's size and dimension constrain
 from pathlib import Path

-def check_slack_size(gif_path: str | Path, is_emoji: bool = True) -> tuple[bool, dict]:
+def validate_gif(
+    gif_path: str | Path, is_emoji: bool = True, verbose: bool = True
+) -> tuple[bool, dict]:
     """
-    Check if GIF meets Slack size limits.
+    Validate GIF for Slack (dimensions, size, frame count).
     Args:
         gif_path: Path to GIF file
-        is_emoji: True for emoji GIF (64KB limit), False for message GIF (2MB limit)
+        is_emoji: True for emoji (128x128 recommended), False for message GIF
+        verbose: Print validation details
     Returns:
-        Tuple of (passes: bool, info: dict with details)
-    """
-    gif_path = Path(gif_path)
-    if not gif_path.exists():
-        return False, {'error': f'File not found: {gif_path}'}
-    size_bytes = gif_path.stat().st_size
-    size_kb = size_bytes / 1024
-    size_mb = size_kb / 1024
-    limit_kb = 64 if is_emoji else 2048
-    limit_mb = limit_kb / 1024
-    passes = size_kb <= limit_kb
-    info = {
-        'size_bytes': size_bytes,
-        'size_kb': size_kb,
-        'size_mb': size_mb,
-        'limit_kb': limit_kb,
-        'limit_mb': limit_mb,
-        'passes': passes,
-        'type': 'emoji' if is_emoji else 'message'
-    }
-    # Print feedback
-    if passes:
-        print(f"{size_kb:.1f} KB - within {limit_kb} KB limit")
-    else:
-        print(f"{size_kb:.1f} KB - exceeds {limit_kb} KB limit")
-        overage_kb = size_kb - limit_kb
-        overage_percent = (overage_kb / limit_kb) * 100
-        print(f" Over by: {overage_kb:.1f} KB ({overage_percent:.1f}%)")
-        print(f" Try: fewer frames, fewer colors, or simpler design")
-    return passes, info

-def validate_dimensions(width: int, height: int, is_emoji: bool = True) -> tuple[bool, dict]:
-    """
-    Check if dimensions are suitable for Slack.
-    Args:
-        width: Frame width in pixels
-        height: Frame height in pixels
-        is_emoji: True for emoji GIF, False for message GIF
-    Returns:
-        Tuple of (passes: bool, info: dict with details)
-    """
-    info = {
-        'width': width,
-        'height': height,
-        'is_square': width == height,
-        'type': 'emoji' if is_emoji else 'message'
-    }
-    if is_emoji:
-        # Emoji GIFs should be 128x128
-        optimal = width == height == 128
-        acceptable = width == height and 64 <= width <= 128
-        info['optimal'] = optimal
-        info['acceptable'] = acceptable
-        if optimal:
-            print(f"{width}x{height} - optimal for emoji")
-            passes = True
-        elif acceptable:
-            print(f"{width}x{height} - acceptable but 128x128 is optimal")
-            passes = True
-        else:
-            print(f"{width}x{height} - emoji should be square, 128x128 recommended")
-            passes = False
-    else:
-        # Message GIFs should be square-ish and reasonable size
-        aspect_ratio = max(width, height) / min(width, height) if min(width, height) > 0 else float('inf')
-        reasonable_size = 320 <= min(width, height) <= 640
-        info['aspect_ratio'] = aspect_ratio
-        info['reasonable_size'] = reasonable_size
-        # Check if roughly square (within 2:1 ratio)
-        is_square_ish = aspect_ratio <= 2.0
-        if is_square_ish and reasonable_size:
-            print(f"{width}x{height} - good for message GIF")
-            passes = True
-        elif is_square_ish:
-            print(f"{width}x{height} - square-ish but unusual size")
-            passes = True
-        elif reasonable_size:
-            print(f"{width}x{height} - good size but not square-ish")
-            passes = True
-        else:
-            print(f"{width}x{height} - unusual dimensions for Slack")
-            passes = False
-    return passes, info

-def validate_gif(gif_path: str | Path, is_emoji: bool = True) -> tuple[bool, dict]:
-    """
-    Run all validations on a GIF file.
-    Args:
-        gif_path: Path to GIF file
-        is_emoji: True for emoji GIF, False for message GIF
-    Returns:
-        Tuple of (all_pass: bool, results: dict)
+        Tuple of (passes: bool, results: dict with all details)
     """
     from PIL import Image
     gif_path = Path(gif_path)
     if not gif_path.exists():
-        return False, {'error': f'File not found: {gif_path}'}
+        return False, {"error": f"File not found: {gif_path}"}
-    print(f"\nValidating {gif_path.name} as {'emoji' if is_emoji else 'message'} GIF:")
-    print("=" * 60)
+    # Get file size
+    size_bytes = gif_path.stat().st_size
+    size_kb = size_bytes / 1024
+    size_mb = size_kb / 1024
-    # Check file size
-    size_pass, size_info = check_slack_size(gif_path, is_emoji)
-    # Check dimensions
+    # Get dimensions and frame info
     try:
         with Image.open(gif_path) as img:
             width, height = img.size
-            dim_pass, dim_info = validate_dimensions(width, height, is_emoji)
             # Count frames
             frame_count = 0
@@ -158,107 +48,89 @@ def validate_gif(gif_path: str | Path, is_emoji: bool = True) -> tuple[bool, dic
             except EOFError:
                 pass
-            # Get duration if available
+            # Get duration
             try:
-                duration_ms = img.info.get('duration', 100)
+                duration_ms = img.info.get("duration", 100)
                 total_duration = (duration_ms * frame_count) / 1000
                 fps = frame_count / total_duration if total_duration > 0 else 0
             except:
+                duration_ms = None
                 total_duration = None
                 fps = None
     except Exception as e:
-        return False, {'error': f'Failed to read GIF: {e}'}
+        return False, {"error": f"Failed to read GIF: {e}"}
-    print(f"\nFrames: {frame_count}")
-    if total_duration:
-        print(f"Duration: {total_duration:.1f}s @ {fps:.1f} fps")
-    all_pass = size_pass and dim_pass
+    # Validate dimensions
+    if is_emoji:
+        optimal = width == height == 128
+        acceptable = width == height and 64 <= width <= 128
+        dim_pass = acceptable
+    else:
+        aspect_ratio = (
+            max(width, height) / min(width, height)
+            if min(width, height) > 0
+            else float("inf")
+        )
+        dim_pass = aspect_ratio <= 2.0 and 320 <= min(width, height) <= 640
     results = {
-        'file': str(gif_path),
-        'passes': all_pass,
-        'size': size_info,
-        'dimensions': dim_info,
-        'frame_count': frame_count,
-        'duration_seconds': total_duration,
-        'fps': fps
+        "file": str(gif_path),
+        "passes": dim_pass,
+        "width": width,
+        "height": height,
+        "size_kb": size_kb,
+        "size_mb": size_mb,
+        "frame_count": frame_count,
+        "duration_seconds": total_duration,
+        "fps": fps,
+        "is_emoji": is_emoji,
+        "optimal": optimal if is_emoji else None,
     }
-    print("=" * 60)
-    if all_pass:
-        print("✓ All validations passed!")
-    else:
-        print("✗ Some validations failed")
-    print()
-    return all_pass, results
+    # Print if verbose
+    if verbose:
+        print(f"\nValidating {gif_path.name}:")
+        print(
+            f" Dimensions: {width}x{height}"
+            + (
+                f" ({'optimal' if optimal else 'acceptable'})"
+                if is_emoji and acceptable
+                else ""
+            )
+        )
+        print(
+            f" Size: {size_kb:.1f} KB"
+            + (f" ({size_mb:.2f} MB)" if size_mb >= 1.0 else "")
+        )
+        print(
+            f" Frames: {frame_count}"
+            + (f" @ {fps:.1f} fps ({total_duration:.1f}s)" if fps else "")
+        )
+        if not dim_pass:
+            print(
+                f" Note: {'Emoji should be 128x128' if is_emoji else 'Unusual dimensions for Slack'}"
+            )
+        if size_mb > 5.0:
+            print(f" Note: Large file size - consider fewer frames/colors")
+    return dim_pass, results

-def get_optimization_suggestions(results: dict) -> list[str]:
-    """
-    Get suggestions for optimizing a GIF based on validation results.
-    Args:
-        results: Results dict from validate_gif()
-    Returns:
-        List of suggestion strings
-    """
-    suggestions = []
-    if not results.get('passes', False):
-        size_info = results.get('size', {})
-        dim_info = results.get('dimensions', {})
-        # Size suggestions
-        if not size_info.get('passes', True):
-            overage = size_info['size_kb'] - size_info['limit_kb']
-            if size_info['type'] == 'emoji':
-                suggestions.append(f"Reduce file size by {overage:.1f} KB:")
-                suggestions.append(" - Limit to 10-12 frames")
-                suggestions.append(" - Use 32-40 colors maximum")
-                suggestions.append(" - Remove gradients (solid colors compress better)")
-                suggestions.append(" - Simplify design")
-            else:
-                suggestions.append(f"Reduce file size by {overage:.1f} KB:")
-                suggestions.append(" - Reduce frame count or FPS")
-                suggestions.append(" - Use fewer colors (128 → 64)")
-                suggestions.append(" - Reduce dimensions")
-        # Dimension suggestions
-        if not dim_info.get('optimal', True) and dim_info.get('type') == 'emoji':
-            suggestions.append("For optimal emoji GIF:")
-            suggestions.append(" - Use 128x128 dimensions")
-            suggestions.append(" - Ensure square aspect ratio")
-    return suggestions

-# Convenience function for quick checks
-def is_slack_ready(gif_path: str | Path, is_emoji: bool = True, verbose: bool = True) -> bool:
+def is_slack_ready(
+    gif_path: str | Path, is_emoji: bool = True, verbose: bool = True
+) -> bool:
     """
     Quick check if GIF is ready for Slack.
     Args:
         gif_path: Path to GIF file
         is_emoji: True for emoji GIF, False for message GIF
-        verbose: Print detailed feedback
+        verbose: Print feedback
     Returns:
-        True if ready, False otherwise
+        True if dimensions are acceptable
     """
-    if verbose:
-        passes, results = validate_gif(gif_path, is_emoji)
-        if not passes:
-            suggestions = get_optimization_suggestions(results)
-            if suggestions:
-                print("\nSuggestions:")
-                for suggestion in suggestions:
-                    print(suggestion)
-        return passes
-    else:
-        size_pass, _ = check_slack_size(gif_path, is_emoji)
-        return size_pass
+    passes, _ = validate_gif(gif_path, is_emoji, verbose)
+    return passes
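After the rewrite, `is_slack_ready` reduces to the dimension rule inside `validate_gif`: emoji GIFs must be square and 64-128 px, message GIFs roughly square (at most 2:1) with a 320-640 px short side. That rule can be stated standalone; the helper below is a hypothetical mirror for illustration, not part of validators.py:

```python
from PIL import Image


def quick_dims_check(path: str, is_emoji: bool = True) -> bool:
    """Standalone mirror of validate_gif's new dimension rule (hypothetical helper)."""
    with Image.open(path) as img:
        width, height = img.size
    if is_emoji:
        # Emoji: square, between 64x64 and 128x128 (128x128 is optimal)
        return width == height and 64 <= width <= 128
    # Message GIFs: roughly square (<= 2:1) and 320-640 px on the short side
    aspect_ratio = max(width, height) / min(width, height)
    return aspect_ratio <= 2.0 and 320 <= min(width, height) <= 640
```

So a 128x128 GIF passes as an emoji, while a 300x100 GIF fails the message-GIF check on both aspect ratio and minimum size.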

View File

@@ -1,10 +1,10 @@
 ---
-name: artifacts-builder
+name: web-artifacts-builder
 description: Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use for complex artifacts requiring state management, routing, or shadcn/ui components - not for simple single-file HTML/JSX artifacts.
 license: Complete terms in LICENSE.txt
 ---
-# Artifacts Builder
+# Web Artifacts Builder
 To build powerful frontend claude.ai artifacts, follow these steps:
 1. Initialize the frontend repo using `scripts/init-artifact.sh`