mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-03-30 13:43:26 +08:00
# Streaming & Playback

VideoDB generates streams on demand, returning HLS-compatible URLs that play instantly in any standard video player. There are no render times or export waits: edits, searches, and compositions stream immediately.

## Prerequisites

Videos **must be uploaded** to a collection before streams can be generated. For search-based streams, the video must also be **indexed** (spoken words and/or scenes). See [search.md](search.md) for indexing details.
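As a minimal sketch of that setup, the upload-then-index flow can be wrapped in a helper (the function name and source URL are illustrative, not part of the SDK):

```python
# Illustrative helper: upload a source video into the collection and index
# its spoken words so search-based streams can be generated from it.
def prepare_for_streaming(coll, source_url):
    video = coll.upload(url=source_url)  # upload must happen first
    video.index_spoken_words()           # required for spoken-word search
    return video
```

With a live connection this would be called as `prepare_for_streaming(conn.get_collection(), "https://example.com/talk.mp4")`.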

## Core Concepts

### Stream Generation

Every video, search result, and timeline in VideoDB can produce a **stream URL**. This URL points to an HLS (HTTP Live Streaming) manifest that is compiled on demand.

```python
# From a video
stream_url = video.generate_stream()

# From a timeline
stream_url = timeline.generate_stream()

# From search results
stream_url = results.compile()
```

## Streaming a Single Video

### Basic Playback

```python
import videodb

conn = videodb.connect()
coll = conn.get_collection()
video = coll.get_video("your-video-id")

# Generate stream URL
stream_url = video.generate_stream()
print(f"Stream: {stream_url}")

# Open in default browser
video.play()
```

### With Subtitles

```python
# Index and add subtitles first
video.index_spoken_words(force=True)
stream_url = video.add_subtitle()

# Returned URL already includes subtitles
print(f"Subtitled stream: {stream_url}")
```

### Specific Segments

Stream only a portion of a video by passing a timeline of timestamp ranges:

```python
# Stream seconds 10-30 and 60-90
stream_url = video.generate_stream(timeline=[(10, 30), (60, 90)])
print(f"Segment stream: {stream_url}")
```
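When the ranges come from upstream logic (user input, search hits), overlapping or unordered tuples are easy to produce. A small pure-Python helper, not part of the VideoDB SDK, can normalize them before calling `generate_stream`:

```python
# Illustrative helper: sort and merge (start, end) ranges so overlapping
# segments are not streamed twice.
def merge_ranges(ranges):
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous range: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_ranges([(60, 90), (10, 30), (25, 40)]))  # [(10, 40), (60, 90)]
```

The result can be passed straight to `video.generate_stream(timeline=merge_ranges(ranges))`.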

## Streaming Timeline Compositions

Build a multi-asset composition and stream it in real time:

```python
import videodb
from videodb.timeline import Timeline
from videodb.asset import VideoAsset, AudioAsset, TextAsset, TextStyle

conn = videodb.connect()
coll = conn.get_collection()

video = coll.get_video(video_id)
music = coll.get_audio(music_id)

timeline = Timeline(conn)

# Main video content
timeline.add_inline(VideoAsset(asset_id=video.id))

# Background music overlay (starts at second 0)
timeline.add_overlay(0, AudioAsset(asset_id=music.id))

# Text overlay at the beginning
timeline.add_overlay(0, TextAsset(
    text="Live Demo",
    duration=3,
    style=TextStyle(fontsize=48, fontcolor="white", boxcolor="#000000"),
))

# Generate the composed stream
stream_url = timeline.generate_stream()
print(f"Composed stream: {stream_url}")
```

**Important:** `add_inline()` only accepts `VideoAsset`. Use `add_overlay()` for `AudioAsset`, `ImageAsset`, and `TextAsset`.

For detailed timeline editing, see [editor.md](editor.md).

## Streaming Search Results

Compile search results into a single stream of all matching segments:

```python
from videodb import SearchType
from videodb.exceptions import InvalidRequestError

video.index_spoken_words(force=True)

try:
    results = video.search("key announcement", search_type=SearchType.semantic)

    # Compile all matching shots into one stream
    stream_url = results.compile()
    print(f"Search results stream: {stream_url}")

    # Or play directly
    results.play()
except InvalidRequestError as exc:
    if "No results found" in str(exc):
        print("No matching announcement segments were found.")
    else:
        raise
```

### Stream Individual Search Hits

```python
from videodb import SearchType
from videodb.exceptions import InvalidRequestError

try:
    results = video.search("product demo", search_type=SearchType.semantic)
    for i, shot in enumerate(results.get_shots()):
        stream_url = shot.generate_stream()
        print(f"Hit {i+1} [{shot.start:.1f}s-{shot.end:.1f}s]: {stream_url}")
except InvalidRequestError as exc:
    if "No results found" in str(exc):
        print("No product demo segments matched the query.")
    else:
        raise
```

## Audio Playback

Get a signed playback URL for audio content:

```python
audio = coll.get_audio(audio_id)
playback_url = audio.generate_url()
print(f"Audio URL: {playback_url}")
```

## Complete Workflow Examples

### Search-to-Stream Pipeline

Combine search, timeline composition, and streaming in one workflow:

```python
import videodb
from videodb import SearchType
from videodb.exceptions import InvalidRequestError
from videodb.timeline import Timeline
from videodb.asset import VideoAsset, TextAsset, TextStyle

conn = videodb.connect()
coll = conn.get_collection()
video = coll.get_video("your-video-id")

video.index_spoken_words(force=True)

# Search for key moments
queries = ["introduction", "main demo", "Q&A"]
timeline = Timeline(conn)
timeline_offset = 0.0

for query in queries:
    try:
        results = video.search(query, search_type=SearchType.semantic)
        shots = results.get_shots()
    except InvalidRequestError as exc:
        if "No results found" in str(exc):
            shots = []
        else:
            raise

    if not shots:
        continue

    # Add the section label where this batch starts in the compiled timeline
    timeline.add_overlay(timeline_offset, TextAsset(
        text=query.title(),
        duration=2,
        style=TextStyle(fontsize=36, fontcolor="white", boxcolor="#222222"),
    ))

    for shot in shots:
        timeline.add_inline(
            VideoAsset(asset_id=shot.video_id, start=shot.start, end=shot.end)
        )
        timeline_offset += shot.end - shot.start

stream_url = timeline.generate_stream()
print(f"Dynamic compilation: {stream_url}")
```
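The `timeline_offset` bookkeeping above is easy to get wrong when sections are skipped. As a sketch, the label offsets can also be computed up front from the shot ranges (the helper is illustrative, not part of the SDK):

```python
# Illustrative helper: given per-section lists of (start, end) shot ranges,
# compute where each section begins in the compiled timeline -- the offsets
# that the TextAsset overlays should use. Empty sections consume no time.
def section_offsets(sections):
    offsets = {}
    cursor = 0.0
    for label, shots in sections:
        if not shots:
            continue  # no hits: section is skipped entirely
        offsets[label] = cursor
        cursor += sum(end - start for start, end in shots)
    return offsets

print(section_offsets([
    ("introduction", [(5.0, 15.0)]),            # 10s of footage
    ("main demo", []),                          # no hits: skipped
    ("Q&A", [(100.0, 110.0), (120.0, 125.0)]),  # starts after the 10s intro
]))  # {'introduction': 0.0, 'Q&A': 10.0}
```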

### Multi-Video Stream

Combine clips from different videos into a single stream:

```python
import videodb
from videodb.timeline import Timeline
from videodb.asset import VideoAsset

conn = videodb.connect()
coll = conn.get_collection()

video_clips = [
    {"id": "vid_001", "start": 0, "end": 15},
    {"id": "vid_002", "start": 10, "end": 30},
    {"id": "vid_003", "start": 5, "end": 25},
]

timeline = Timeline(conn)
for clip in video_clips:
    timeline.add_inline(
        VideoAsset(asset_id=clip["id"], start=clip["start"], end=clip["end"])
    )

stream_url = timeline.generate_stream()
print(f"Multi-video stream: {stream_url}")
```

### Conditional Stream Assembly

Build a stream dynamically based on search availability:

```python
import videodb
from videodb import SearchType
from videodb.exceptions import InvalidRequestError
from videodb.timeline import Timeline
from videodb.asset import VideoAsset, TextAsset, TextStyle

conn = videodb.connect()
coll = conn.get_collection()
video = coll.get_video("your-video-id")

video.index_spoken_words(force=True)

timeline = Timeline(conn)

# Try to find specific content; fall back to full video
topics = ["opening remarks", "technical deep dive", "closing"]

found_any = False
timeline_offset = 0.0
for topic in topics:
    try:
        results = video.search(topic, search_type=SearchType.semantic)
        shots = results.get_shots()
    except InvalidRequestError as exc:
        if "No results found" in str(exc):
            shots = []
        else:
            raise

    if shots:
        found_any = True
        timeline.add_overlay(timeline_offset, TextAsset(
            text=topic.title(),
            duration=2,
            style=TextStyle(fontsize=32, fontcolor="white", boxcolor="#1a1a2e"),
        ))
        for shot in shots:
            timeline.add_inline(
                VideoAsset(asset_id=shot.video_id, start=shot.start, end=shot.end)
            )
            timeline_offset += shot.end - shot.start

if found_any:
    stream_url = timeline.generate_stream()
    print(f"Curated stream: {stream_url}")
else:
    # Fall back to full video stream
    stream_url = video.generate_stream()
    print(f"Full video stream: {stream_url}")
```

### Live Event Recap

Process an event recording into a streamable recap with multiple sections:

```python
import videodb
from videodb import SearchType
from videodb.exceptions import InvalidRequestError
from videodb.timeline import Timeline
from videodb.asset import VideoAsset, AudioAsset, ImageAsset, TextAsset, TextStyle

conn = videodb.connect()
coll = conn.get_collection()

# Upload event recording
event = coll.upload(url="https://example.com/event-recording.mp4")
event.index_spoken_words(force=True)

# Generate background music
music = coll.generate_music(
    prompt="upbeat corporate background music",
    duration=120,
)

# Generate title image
title_img = coll.generate_image(
    prompt="modern event recap title card, dark background, professional",
    aspect_ratio="16:9",
)

# Build the recap timeline
timeline = Timeline(conn)
timeline_offset = 0.0

# Main video segments from search
try:
    keynote = event.search("keynote announcement", search_type=SearchType.semantic)
    keynote_shots = keynote.get_shots()[:5]
except InvalidRequestError as exc:
    if "No results found" in str(exc):
        keynote_shots = []
    else:
        raise

if keynote_shots:
    keynote_start = timeline_offset
    for shot in keynote_shots:
        timeline.add_inline(
            VideoAsset(asset_id=shot.video_id, start=shot.start, end=shot.end)
        )
        timeline_offset += shot.end - shot.start
else:
    keynote_start = None

try:
    demo = event.search("product demo", search_type=SearchType.semantic)
    demo_shots = demo.get_shots()[:5]
except InvalidRequestError as exc:
    if "No results found" in str(exc):
        demo_shots = []
    else:
        raise

if demo_shots:
    demo_start = timeline_offset
    for shot in demo_shots:
        timeline.add_inline(
            VideoAsset(asset_id=shot.video_id, start=shot.start, end=shot.end)
        )
        timeline_offset += shot.end - shot.start
else:
    demo_start = None

# Overlay title card image
timeline.add_overlay(0, ImageAsset(
    asset_id=title_img.id, width=100, height=100, x=80, y=20, duration=5
))

# Overlay section labels at the correct timeline offsets
if keynote_start is not None:
    timeline.add_overlay(max(5, keynote_start), TextAsset(
        text="Keynote Highlights",
        duration=3,
        style=TextStyle(fontsize=40, fontcolor="white", boxcolor="#0d1117"),
    ))
if demo_start is not None:
    timeline.add_overlay(max(5, demo_start), TextAsset(
        text="Demo Highlights",
        duration=3,
        style=TextStyle(fontsize=36, fontcolor="white", boxcolor="#0d1117"),
    ))

# Overlay background music
timeline.add_overlay(0, AudioAsset(
    asset_id=music.id, fade_in_duration=3
))

# Stream the final recap
stream_url = timeline.generate_stream()
print(f"Event recap: {stream_url}")
```

---

## Tips

- **HLS compatibility**: Stream URLs return HLS manifests (`.m3u8`). They play natively in Safari, and in other browsers via hls.js or similar libraries.
- **On-demand compilation**: Streams are compiled server-side when requested. The first play may have a brief compilation delay; subsequent plays of the same composition are cached.
- **Caching**: Calling `video.generate_stream()` a second time without arguments returns the cached stream URL rather than recompiling.
- **Segment streams**: `video.generate_stream(timeline=[(start, end)])` is the fastest way to stream a specific clip without building a full `Timeline` object.
- **Inline vs. overlay**: `add_inline()` only accepts `VideoAsset` and places assets sequentially on the main track. `add_overlay()` accepts `AudioAsset`, `ImageAsset`, and `TextAsset` and layers them on top at a given start time.
- **TextStyle defaults**: `TextStyle` defaults to `font='Sans'` and `fontcolor='black'`. Use `boxcolor` (not `bgcolor`) for the background color on text.
- **Combine with generation**: Use `coll.generate_music(prompt, duration)` and `coll.generate_image(prompt, aspect_ratio)` to create assets for timeline compositions.
- **Playback**: `.play()` opens the stream URL in the default system browser. For programmatic use, work with the URL string directly.
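For the hls.js route mentioned above, a minimal sketch is a Python function that wraps the stream URL in a small player page. The CDN script URL is an assumption (pin a version you have verified); the `Hls.isSupported()` / `loadSource()` / `attachMedia()` calls follow hls.js's documented basic usage:

```python
# Sketch: wrap an HLS stream URL in a minimal hls.js player page so it
# plays in browsers without native HLS support.
def player_html(stream_url):
    return f"""<!DOCTYPE html>
<html>
<body>
  <video id="player" controls></video>
  <script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
  <script>
    const video = document.getElementById("player");
    if (video.canPlayType("application/vnd.apple.mpegurl")) {{
      video.src = "{stream_url}";  // Safari plays HLS natively
    }} else if (Hls.isSupported()) {{
      const hls = new Hls();
      hls.loadSource("{stream_url}");
      hls.attachMedia(video);
    }}
  </script>
</body>
</html>"""

html = player_html("https://example.com/stream/manifest.m3u8")
```

Serve the result from any static file host and the VideoDB stream URL plays directly.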