mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-04-08 02:03:34 +08:00
---
name: display-captions
description: Displaying captions in Remotion with TikTok-style pages and word highlighting
metadata:
  tags: captions, subtitles, display, tiktok, highlight
---

# Displaying captions in Remotion
This guide explains how to display captions in Remotion, assuming you already have captions in the `Caption` format.
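For reference, a `Caption` carries its text together with millisecond timings. The field names below follow the `@remotion/captions` documentation; the literal itself is a hand-written sketch, not output from a real transcription:

```typescript
// Sketch of the Caption shape (field names per @remotion/captions docs).
// All timings are in milliseconds; timestampMs and confidence may be null.
type Caption = {
  text: string;
  startMs: number;
  endMs: number;
  timestampMs: number | null;
  confidence: number | null;
};

const captions: Caption[] = [
  {text: 'Hello', startMs: 0, endMs: 400, timestampMs: 200, confidence: null},
  {text: ' world', startMs: 400, endMs: 900, timestampMs: 650, confidence: null},
];
```

Helpers such as `parseSrt()` or the Whisper integrations produce arrays of this shape, which is what the snippets below consume.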
## Prerequisites

The `@remotion/captions` package needs to be installed. If it is not, install it with the command matching your package manager:

```bash
npx remotion add @remotion/captions       # If project uses npm
bunx remotion add @remotion/captions      # If project uses bun
yarn remotion add @remotion/captions      # If project uses yarn
pnpm exec remotion add @remotion/captions # If project uses pnpm
```
## Creating pages

Use `createTikTokStyleCaptions()` to group captions into pages. The `combineTokensWithinMilliseconds` option controls how many words appear at once:

```tsx
import {useMemo} from 'react';
import {createTikTokStyleCaptions} from '@remotion/captions';
import type {Caption} from '@remotion/captions';

// How often captions should switch (in milliseconds)
// Higher values = more words per page
// Lower values = fewer words (more word-by-word)
const SWITCH_CAPTIONS_EVERY_MS = 1200;

// Inside a React component, with `captions: Caption[]` in scope:
const {pages} = useMemo(() => {
  return createTikTokStyleCaptions({
    captions,
    combineTokensWithinMilliseconds: SWITCH_CAPTIONS_EVERY_MS,
  });
}, [captions]);
```
## Rendering with Sequences

Map over the pages and render each one in a `<Sequence>`. Calculate the start frame and duration from the page timing:

```tsx
import {Sequence, useVideoConfig, AbsoluteFill} from 'remotion';
import type {TikTokPage} from '@remotion/captions';

// `pages` and SWITCH_CAPTIONS_EVERY_MS come from the previous snippet.
const CaptionedContent: React.FC = () => {
  const {fps} = useVideoConfig();

  return (
    <AbsoluteFill>
      {pages.map((page, index) => {
        const nextPage = pages[index + 1] ?? null;
        const startFrame = (page.startMs / 1000) * fps;
        // A page ends either when the next page starts or when the
        // switch interval elapses, whichever comes first.
        const endFrame = Math.min(
          nextPage ? (nextPage.startMs / 1000) * fps : Infinity,
          startFrame + (SWITCH_CAPTIONS_EVERY_MS / 1000) * fps,
        );
        const durationInFrames = endFrame - startFrame;

        if (durationInFrames <= 0) {
          return null;
        }

        return (
          <Sequence
            key={index}
            from={startFrame}
            durationInFrames={durationInFrames}
          >
            <CaptionPage page={page} />
          </Sequence>
        );
      })}
    </AbsoluteFill>
  );
};
```
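To make the timing math concrete, the same start/end-frame calculation can be isolated in a standalone helper (a hypothetical function for illustration, not a Remotion API), with a worked example at 30 fps:

```typescript
// Hypothetical helper mirroring the frame math above (not part of Remotion).
const msToFrames = (ms: number, fps: number): number => (ms / 1000) * fps;

const pageWindow = (
  startMs: number,
  nextStartMs: number | null, // start of the following page, if any
  maxDurationMs: number,      // the switch interval cap
  fps: number,
) => {
  const startFrame = msToFrames(startMs, fps);
  // The page ends at the next page's start or at the cap, whichever is first.
  const endFrame = Math.min(
    nextStartMs !== null ? msToFrames(nextStartMs, fps) : Infinity,
    startFrame + msToFrames(maxDurationMs, fps),
  );
  return {startFrame, durationInFrames: endFrame - startFrame};
};

// At 30 fps, a page starting at 1000 ms whose successor starts at 1800 ms
// (with a 1200 ms cap) spans frames 30..54, i.e. 24 frames.
```

The final page has no successor, so only the cap applies: at 30 fps and a 1200 ms cap it runs for 36 frames.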
## Word highlighting

A caption page contains `tokens` which you can use to highlight the currently spoken word:

```tsx
import {AbsoluteFill, useCurrentFrame, useVideoConfig} from 'remotion';
import type {TikTokPage} from '@remotion/captions';

const HIGHLIGHT_COLOR = '#39E508';

const CaptionPage: React.FC<{page: TikTokPage}> = ({page}) => {
  const frame = useCurrentFrame();
  const {fps} = useVideoConfig();

  // Current time relative to the start of the sequence
  const currentTimeMs = (frame / fps) * 1000;
  // Convert to absolute time by adding the page start
  const absoluteTimeMs = page.startMs + currentTimeMs;

  return (
    <AbsoluteFill style={{justifyContent: 'center', alignItems: 'center'}}>
      <div style={{fontSize: 80, fontWeight: 'bold', whiteSpace: 'pre'}}>
        {page.tokens.map((token) => {
          const isActive =
            token.fromMs <= absoluteTimeMs && token.toMs > absoluteTimeMs;

          return (
            <span
              key={token.fromMs}
              style={{color: isActive ? HIGHLIGHT_COLOR : 'white'}}
            >
              {token.text}
            </span>
          );
        })}
      </div>
    </AbsoluteFill>
  );
};
```
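The highlight predicate can be extracted and sanity-checked on its own (a hypothetical helper for illustration; it matches the `isActive` expression above):

```typescript
// Hypothetical helper: a token is "active" while the playhead is
// inside the half-open interval [fromMs, toMs).
type Token = {text: string; fromMs: number; toMs: number};

const isTokenActive = (token: Token, absoluteTimeMs: number): boolean =>
  token.fromMs <= absoluteTimeMs && token.toMs > absoluteTimeMs;

const token: Token = {text: 'Hello', fromMs: 500, toMs: 900};
```

The half-open interval means that when one token ends exactly as the next begins, only one of them is highlighted on any given frame.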