tlocal

A collaborative workspace to extract, review, and localize on-screen text—built for full-length films and long-form programmes, not one-off clips.

Role

Co-Founder and Design Engineer

Team

Tongfei Zhu

Aditya Mishra

Luke Miller

Raptor English

Scope

March 28 – Ongoing

Tools

Figma

Cursor

TwelveLabs (Marengo, Pegasus)

AWS Bedrock

Baseten

LTX.io

tlocal started at a media-and-entertainment hackathon hosted by TwelveLabs: a 24-hour build that shipped a production-ready pipeline—not a slide deck. The team took first place in a room of engineers, executives, and practitioners with deep experience in entertainment and video intelligence.

The product replaces blank localization QC spreadsheets with full inventories in minutes, using TwelveLabs’ Marengo and Pegasus models to understand what appears on screen and when. Direction came from the room itself: stories from operators and studio-side leaders, teammate context from major M&E organizations, and a clear picture of where video intelligence can remove real friction.

Development continues beyond the event. tlocal is live—built from 0→1 at the hackathon, now moving from 1→100 with the same focus on shipping.

Product walkthrough

The workspace is organized around long runtimes, locale ownership, and reviewable artifacts—not disconnected strings.

Extract across the timeline

Turn signs, subtitles, credits, and graphics into reviewable tickets—each tied to timecode and scoped to full films and long programmes, not isolated clips.
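The core artifact here is a ticket that keeps detected text bound to its timecode range. A minimal sketch of what such a ticket could look like; the class name, fields, and input shape are assumptions for illustration, not tlocal's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class TextTicket:
    """One piece of detected on-screen text, tied to its place in the runtime.

    Illustrative only: names and fields are assumptions, not tlocal's schema.
    """
    text: str      # detected source-language string
    start_tc: str  # timecode where the text first appears, e.g. "01:12:08:04"
    end_tc: str    # timecode where it leaves the screen
    kind: str      # "sign" | "subtitle" | "credit" | "graphic"
    locales: dict[str, str] = field(default_factory=dict)  # locale -> review status


def to_ticket(detection: dict) -> TextTicket:
    """Map a raw detection (text + timecodes + category) onto a reviewable ticket."""
    return TextTicket(
        text=detection["text"],
        start_tc=detection["start"],
        end_tc=detection["end"],
        kind=detection.get("kind", "graphic"),
    )
```

Keeping start and end timecodes on the ticket itself is what lets a reviewer jump straight to the frame, even hours into a runtime.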

Route work to language teams

Give every locale a clear next step: translate in context, adapt for tone and length, or route work for cultural review when the material needs a specialist pass.
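The routing decision above can be sketched as a small policy function; the rules and return values here are illustrative assumptions, not tlocal's actual triage logic:

```python
def next_step(ticket_kind: str, needs_cultural_review: bool) -> str:
    """Pick the next action for a locale's copy of a ticket.

    Routing rules are a hypothetical sketch, not tlocal's real policy.
    """
    if needs_cultural_review:
        return "cultural-review"  # specialist pass before any translation lands
    if ticket_kind in ("sign", "graphic"):
        return "adapt"            # tone/length adaptation against the frame
    return "translate"            # straight in-context translation
```

The point of centralizing this is that every locale always has exactly one unambiguous next step, rather than an empty spreadsheet row.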

Faster translation mockups

AI composites translated copy onto each screen in seconds, so reviewers judge real fit rather than flat strings and can refine language suggestions with the frame in view. Admins get live progress and status updates as work moves across locales.
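One concrete piece of "judging real fit" is checking whether translated copy still fits the original text's on-screen box. A rough stand-in for that check, using a naive width model (font size times an average character width); the function name, the constants, and the model itself are assumptions, and real compositing also involves font matching and background cleanup:

```python
def fit_point_size(box_width_px: int, translated: str,
                   base_size: float = 32.0, avg_char_w: float = 0.55) -> float:
    """Estimate a font size so translated copy fits the original text box.

    Width model (size * avg_char_w per character) is a rough assumption;
    a real renderer would measure the actual glyphs.
    """
    if not translated:
        return base_size
    est_width = len(translated) * base_size * avg_char_w
    if est_width <= box_width_px:
        return base_size          # copy fits at the original size
    # Shrink proportionally so the estimated width matches the box
    return box_width_px / (len(translated) * avg_char_w)
```

A longer German or French string, for example, would come back with a smaller size, flagging to the reviewer that the line may need tightening rather than shrinking.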

One intuitive workspace

From upload to translator queue, tlocal keeps on-screen text tied to timecode and context—so nothing gets lost across a long runtime.

Thanks

Thank you to the hackathon judges and studio partners—including representatives from Warner Bros. Discovery, Fox Corporation, Lionsgate, Sony Pictures Entertainment, and SVTA—and to the sponsors who made production-scale experimentation possible: Amazon Web Services (Bedrock), Baseten for model serving, and LTX.io for video generation. Special thanks to everyone at TwelveLabs who organized the event and supported the build.