$ man content-wiki/programmatic-video-content
Programmatic Video as Content
React components that render to MP4 — video as a first-class content type
Why Video Belongs in the Repo
Video has traditionally been a separate creative workflow. After Effects exports sit on hard drives. Canva projects live in a SaaS database. Neither shares design tokens, data, or deploy pipelines with your content system. Programmatic video changes this. When video is a React component in your monorepo, it imports the same shared package as your websites. Same color palette. Same type definitions. Same build pipeline. Change a brand color in the shared tokens and the websites and videos all update on the next build. Video stops being a separate content silo and becomes another node in the content graph.
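The shared-token idea can be sketched as a tiny module. This is illustrative, assuming a hypothetical packages/brand-tokens workspace; the names and values are not from any real repo. Both the sites and the Remotion app would import from this one place, so a palette change propagates everywhere on the next build.

```typescript
// Hypothetical shared token module (e.g. packages/brand-tokens).
// Websites and video compositions both import from here, so design
// values are defined once and consumed by every output surface.
export const tokens = {
  colors: {
    primary: "#1a56db", // example value, not a real brand color
    surface: "#0b0f19",
  },
  fonts: {
    heading: "Inter",
    body: "Inter",
  },
} as const;

export type Tokens = typeof tokens;
```

The `as const` assertion keeps the token values as literal types, so a typo like `tokens.colors.primry` fails at compile time in both the web and video apps.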
The Remotion Model
Remotion evaluates React components frame by frame at your target FPS and encodes the result to video. You write JSX. It renders pixels. No GPU required. No timeline editor. The video app lives inside the monorepo alongside the website apps. It imports the shared data package for design tokens, colors, and brand configuration. Compositions in a root file define what gets rendered — each composition specifies dimensions, FPS, duration, and the component tree. A render script generates all variants in one command.
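A composition registry along these lines can be sketched in plain TypeScript. The field names mirror what Remotion's `<Composition>` component expects (id, width, height, fps, durationInFrames), but the block below is a self-contained sketch, not Remotion's actual API; the ids and durations are made up.

```typescript
// Sketch of a composition registry. In a real Remotion root file each
// entry would be passed to a <Composition> element; here it is plain
// data so the shape is easy to see.
interface CompositionSpec {
  id: string;
  width: number;
  height: number;
  fps: number;
  durationInSeconds: number;
}

// Remotion measures duration in frames, so convert once at registration.
function toFrames(spec: CompositionSpec): number {
  return Math.round(spec.durationInSeconds * spec.fps);
}

const compositions: CompositionSpec[] = [
  { id: "acme-landscape", width: 1920, height: 1080, fps: 30, durationInSeconds: 12 },
  { id: "acme-reel", width: 1080, height: 1920, fps: 30, durationInSeconds: 12 },
];

// A render script can iterate this list and invoke the Remotion CLI
// once per id, producing all variants in one command.
const frameCounts = compositions.map((c) => ({ id: c.id, frames: toFrames(c) }));
```

Keeping the registry as data rather than scattered JSX is what makes the "one command renders everything" script trivial: it just loops over the list.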
Multi-Format Output
Define aspect ratio presets as constants: LinkedIn 4:5, Reels 9:16, landscape 16:9. A responsive scaling hook normalizes rendering to a base resolution and scales proportionally. Each brand gets one composition per aspect ratio. Three brands times three formats equals nine compositions from one component tree. Adding a new brand or format means adding entries to the preset constants and the composition registry. No component code changes. The multi-format approach means one design session produces all social media variants automatically.
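The preset-plus-registry pattern above can be sketched as follows. The names (`FORMATS`, `BRANDS`, `scaleFor`) and the brand list are illustrative, not from the source; the dimensions are the standard pixel sizes for each aspect ratio.

```typescript
// Aspect-ratio presets: one entry per target platform format.
const FORMATS = {
  linkedin:  { width: 1080, height: 1350 }, // 4:5
  reels:     { width: 1080, height: 1920 }, // 9:16
  landscape: { width: 1920, height: 1080 }, // 16:9
} as const;

type FormatName = keyof typeof FORMATS;

// Design at a base width; every format scales proportionally from it.
const BASE_WIDTH = 1080;

function scaleFor(format: FormatName): number {
  return FORMATS[format].width / BASE_WIDTH;
}

// Three brands x three formats = nine composition ids from one tree.
const BRANDS = ["acme", "globex", "initech"] as const;
const compositionIds = BRANDS.flatMap((brand) =>
  (Object.keys(FORMATS) as FormatName[]).map((f) => `${brand}-${f}`)
);
```

Adding a brand or format is one new entry in `BRANDS` or `FORMATS`; the cross product regenerates itself and no component code changes.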
Deterministic Animation
Programmatic video requires deterministic rendering. Unseeded random values change on every evaluation, so animations flicker and no two renders match. The solution is seeded noise. Perlin noise driven by the frame number produces organic animation (particle drift, character rain, opacity shimmer) that is fully reproducible: same seed, same output, every render. This is critical for iteration: you can change one parameter and re-render knowing exactly which frames changed and why. It also enables caching, since unchanged compositions skip rendering entirely.
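A minimal sketch of seeded, frame-driven noise. Real projects typically use a Perlin or simplex noise library seeded once; to stay self-contained, this sketch uses a small integer hash instead (a Mulberry32-style mix). The function names are hypothetical. The property that matters is the same one the text describes: the same seed and frame always produce the same value.

```typescript
// Deterministic pseudo-noise: hash (seed, frame) into [0, 1).
// No Math.random() anywhere, so every render is reproducible.
function seededNoise(seed: number, frame: number): number {
  let t = (seed ^ (frame * 0x9e3779b9)) >>> 0;
  t = Math.imul(t ^ (t >>> 15), t | 1);
  t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
  return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
}

// Smooth drift for organic motion: interpolate between integer steps
// of the noise with a smoothstep blend, driven by the frame number.
function driftX(seed: number, frame: number, speed = 0.1): number {
  const f = frame * speed;
  const i = Math.floor(f);
  const a = seededNoise(seed, i);
  const b = seededNoise(seed, i + 1);
  const t = f - i;
  return a + (b - a) * (t * t * (3 - 2 * t));
}
```

Because the output depends only on `(seed, frame)`, two renders of the same composition are byte-identical, which is what makes per-composition render caching safe.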
Video in the Content Index
A content index that tracks your repo should also track video files. Parse filenames for brand, aspect ratio, and format. Track source files (render output) separately from deployed files (copied to site public directories). The index tells you which videos are rendered, which are deployed, and which are missing. Combined with the content graph, you can query for brands without video coverage or formats that need updating. Video becomes queryable, auditable, and integrated into the same content operations as blog posts and wiki entries.
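The filename-parsing side of the index can be sketched like this. The naming convention (`brand_format_vN.mp4`) and the helper names are assumptions for illustration; the real convention is whatever the render script writes.

```typescript
// One index entry per video file found in the repo.
interface VideoEntry {
  brand: string;
  format: string;    // e.g. "reels", "linkedin", "landscape"
  deployed: boolean; // true once copied into a site's public/ dir
}

// Parse a filename like "acme_reels_v2.mp4" into brand and format.
// Returns null for files that do not match the convention.
function parseVideoFilename(name: string): { brand: string; format: string } | null {
  const m = name.match(/^([a-z0-9-]+)_([a-z0-9-]+)(?:_v\d+)?\.mp4$/);
  return m ? { brand: m[1], format: m[2] } : null;
}

// Query: which brands have no rendered video in a given format?
function missingCoverage(entries: VideoEntry[], brands: string[], format: string): string[] {
  const covered = new Set(entries.filter((e) => e.format === format).map((e) => e.brand));
  return brands.filter((b) => !covered.has(b));
}
```

With parsing and a coverage query in place, "which brands still need a Reels cut?" becomes a one-line lookup instead of a manual audit of render folders.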
The Monorepo Advantage
The real win is not Remotion itself. It is video living in the same codebase as everything else. Design tokens are shared, not duplicated. Brand updates propagate automatically. The deploy pipeline handles video alongside web content. The content index tracks video files alongside blog posts. The knowledge graph can reference video compositions. The same CI that builds the websites can render the videos. Video is no longer a creative island; it is part of the system. That integration is the competitive advantage. Any tool can render video. Only a monorepo integrates video with every other content type.