Generative AI

Mode Seeking meets Mean Seeking for Fast Long Video Generation

Shengqu Cai, Weili Nie, Chao Liu, Julius Berner, Lvmin Zhang, Nanye Ma, Hansheng Chen, Maneesh Agrawala, Leonidas Guibas, Gordon Wetzstein, Arash Vahdat
Published
February 27, 2026
Abstract

Scaling video generation from seconds to minutes faces a critical bottleneck: while short-video data is abundant and high-fidelity, coherent long-form data is scarce and limited to narrow domains. To address this, we propose a training paradigm where Mode Seeking meets Mean Seeking, decoupling local fidelity from long-term coherence on top of a unified representation via a Decoupled Diffusion Transformer. Our approach uses a global Flow Matching head, trained via supervised learning on long videos to capture narrative structure, alongside a local Distribution Matching head that aligns sliding windows to a frozen short-video teacher via a mode-seeking reverse-KL divergence. This strategy enables the synthesis of minute-scale videos: the model learns long-range coherence and motion from limited long videos via supervised flow matching, while inheriting local realism by aligning every sliding-window segment of the student with the frozen short-video teacher, yielding a fast, few-step long-video generator. Evaluations show that our method effectively closes the fidelity-horizon gap by jointly improving local sharpness, motion, and long-range consistency. Project website: https://primecai.github.io/mmm/.
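To make the two decoupled objectives concrete, the following is a minimal NumPy sketch, not the authors' implementation: a supervised flow-matching loss on a full long sequence (the mean-seeking head), and the gradient direction of a mode-seeking reverse-KL term applied per sliding window against a frozen teacher. All function names, shapes, and the window parameters are illustrative assumptions; the actual heads, score parameterizations, and distillation details are defined in the paper.

```python
import numpy as np

def flow_matching_loss(velocity_fn, x0, x1, t):
    """Supervised flow matching on a long video latent.

    Uses the linear interpolation path x_t = (1 - t) * x0 + t * x1,
    whose target velocity is x1 - x0; the loss is the MSE between
    the predicted and target velocities.
    """
    x_t = (1.0 - t) * x0 + t * x1
    v_pred = velocity_fn(x_t, t)
    v_target = x1 - x0
    return float(np.mean((v_pred - v_target) ** 2))

def reverse_kl_grad(student_score_fn, teacher_score_fn, window):
    """Mode-seeking distribution-matching direction for one window.

    For reverse KL(p_student || p_teacher), the gradient with respect
    to a sample is proportional to the score difference
    s_student(x) - s_teacher(x); it vanishes when the student's
    sliding-window distribution matches the frozen teacher's.
    """
    return student_score_fn(window) - teacher_score_fn(window)

def sliding_windows(video, size, stride):
    """Split a (T, D) sequence of frame latents into overlapping windows."""
    return [video[i:i + size] for i in range(0, len(video) - size + 1, stride)]
```

As a sanity check of the two signals: a velocity network that exactly predicts `x1 - x0` drives the flow-matching loss to zero, and identical student and teacher scores make the reverse-KL direction vanish, which is the fixed point the distillation aims for.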

Keywords

Decoupled Diffusion Transformer, flow matching, distribution matching, mode-seeking reverse-KL divergence, long-range coherence, local fidelity, global Flow Matching head, local Distribution Matching head, sliding-window segments, short-video teacher
