My Experience at ADOS: From Keynotes to Hackathon Experiments
On 28 March 2025, I had the pleasure of participating in ADOS in Paris, a creative technology event co-organized by Banodoco and Lightricks at Artifex Lab. Walking into that space, you could feel the excitement: artists, engineers, and technologists converging around the open-source LTXV model. I'd been following the rapid evolution of LTXV, a text-to-video model that generates clips in real time, so it was thrilling to see the community around it in action.
First Day at ADOS: Talks & Connections
The evening's lineup featured standout presentations, from Emma Catnip's soulful art pieces and Vibeke Bertelsen's uncanny human forms to Yvann Barbot's audio-reactive video demos in ComfyUI. My supervisor, Christian Sandor, also delivered an inspiring keynote, and our team showcased a demo exploring the convergence of AR and AI, a project that deeply influenced my own thinking about generative storytelling.
Amid all this, I found LTXV's open-source availability fascinating, not just as a technical innovation but as a platform for collective creativity. Lightricks being open with its weights and tools felt like a pivotal moment in democratizing video AI.
Day Two: Hackathon, Video Editing, and Reference-Based Style-Transfer Experiments
The very next day I joined the ADOS hackathon, working side by side with others to push LTXV's limits. The provided compute, power, snacks, and a steady pulse of energy fueled the event's intense collaborative rhythm. For my project, I focused on one core idea: simultaneous style transfer and editing for text-to-video (T2V) generation with LTXV. Given a style reference image, a video clip, and a text prompt describing the edit, I experimented with three distinct methods for transferring visual style while editing the video according to the prompt (a minimal sketch of each appears after the list):
- Latent alpha blending: Alpha blending between the inverted style image latent and the inverted latent of the video’s first frame.
- KV sharing with alpha blending: Repeating the style's keys and values along the token axis to match the video's dimensions (repeat-interleave), then alpha-blending them with the video's keys and values.
- KV sharing via concatenation: Appending the style’s keys and values to the video’s keys and values for joint processing with full 3D attention.
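To make the three strategies concrete, here is a minimal PyTorch sketch of how each could be wired up. The function names, the (batch, tokens, channels) tensor layout, and the alpha value are my own illustrative assumptions, not LTXV's actual internals.

```python
import torch

# Illustrative sketch only: names, tensor layout, and alpha are assumptions,
# not LTXV's real interfaces.

def latent_alpha_blend(style_latent, first_frame_latent, alpha=0.3):
    # Method 1: blend the inverted style-image latent into the inverted
    # latent of the video's first frame.
    return alpha * style_latent + (1.0 - alpha) * first_frame_latent

def kv_share_blend(style_k, style_v, video_k, video_v, alpha=0.3):
    # Method 2: repeat-interleave the style keys/values along the token axis
    # so they match the video's token count, then alpha-blend them in.
    # Assumes the video token count is a multiple of the style token count.
    factor = video_k.shape[1] // style_k.shape[1]
    k = alpha * style_k.repeat_interleave(factor, dim=1) + (1.0 - alpha) * video_k
    v = alpha * style_v.repeat_interleave(factor, dim=1) + (1.0 - alpha) * video_v
    return k, v

def kv_share_concat(style_k, style_v, video_k, video_v):
    # Method 3: append the style keys/values so the full 3D attention attends
    # over video and style tokens jointly.
    k = torch.cat([video_k, style_k], dim=1)
    v = torch.cat([video_v, style_v], dim=1)
    return k, v
```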
Text Prompt: A big horse triumphantly at the peak of a towering mountain. Panorama of rugged peaks and valleys. Very futuristic vibe and animated aesthetic. Highlights of purple and golden colors in the scene. The sky looks like an animated/cartoonish dream of galaxies, nebulae, stars, planets, moons, but the remainder of the scene is mostly realistic.
For the text‑based editing component, I used RF‑Edit. My findings indicate that latent alpha blending is ineffective for style transfer. KV sharing methods can propagate style, but often at the expense of content fidelity or motion consistency. In particular, style injection may introduce unnatural motion not present in the original video. The injection point within the attention layers significantly affects results: early-layer injection (e.g., layers 5 or 13) suppresses motion and overemphasizes style, while later-layer injection (e.g., layer 27) offers a better balance between style and motion.
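As a rough illustration of how the injection point can be gated, the sketch below applies the KV-concatenation variant only from a chosen transformer layer onward. The layer index threshold and the plain scaled-dot-product attention call are stand-ins, not LTXV's or RF-Edit's actual code.

```python
import torch
import torch.nn.functional as F

def attention_with_style(q, video_k, video_v, style_k, style_v,
                         layer_idx, inject_from_layer=27):
    # Hypothetical gating: inject style keys/values only at layers
    # >= inject_from_layer; earlier layers run unmodified attention,
    # which in these experiments helped preserve the source motion.
    if layer_idx >= inject_from_layer:
        k = torch.cat([video_k, style_k], dim=1)  # concat along token axis
        v = torch.cat([video_v, style_v], dim=1)
    else:
        k, v = video_k, video_v
    # Tensors assumed shaped (batch, tokens, channels); attention runs
    # over the token dimension.
    return F.scaled_dot_product_attention(q, k, v)
```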
Reflections & What Comes Next
ADOS was an inspiring gathering—not only for witnessing the technical evolution of generative video, but also for experiencing the vibrant culture of open, collaborative experimentation that surrounds it. Huge thanks to the ADOS team—Banodoco, Lightricks, and Artifex Lab—for organizing an inspiring and technically rich weekend. Events like this make open‑source generative video feel alive. I can’t wait to keep building on what started in Paris.