TokenTrim: Inference-Time Token Pruning for Autoregressive Long Video Generation

Ariel Shaulov, Eitan Shaar, Amit Edenzon, Lior Wolf
Published: January 30, 2026
Authors: 4
Word Count: 9,305

TokenTrim prunes corrupted tokens during video generation inference to eliminate temporal drift.

Abstract

Auto-regressive video generation enables long video synthesis by iteratively conditioning each new batch of frames on previously generated content. However, recent work has shown that such pipelines suffer from severe temporal drift, where errors accumulate and amplify over long horizons. We hypothesize that this drift does not primarily stem from insufficient model capacity, but rather from inference-time error propagation. Specifically, we contend that drift arises from the uncontrolled reuse of corrupted latent conditioning tokens during auto-regressive inference. To correct this accumulation of errors, we propose a simple inference-time method that mitigates temporal drift by identifying and removing unstable latent tokens before they are reused for conditioning. We define unstable tokens as latent tokens whose representations deviate significantly from those of the previously generated batch, indicating potential corruption or semantic drift. By explicitly removing corrupted latent tokens from the auto-regressive context, rather than modifying entire spatial regions or model parameters, our method prevents unreliable latent information from influencing future generation steps. As a result, it significantly improves long-horizon temporal consistency without modifying the model architecture or training procedure, and without leaving latent space.
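The abstract's core mechanism — scoring each latent token by how far its representation deviates from the previous batch, then dropping unstable tokens from the conditioning context — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine-distance drift score, the fixed threshold, and the one-to-one token correspondence between batches are all assumptions for the sake of the example.

```python
import numpy as np

def prune_unstable_tokens(curr_tokens, prev_tokens, threshold=0.5):
    """Illustrative sketch: drop latent tokens whose representation
    deviates strongly from the matching token in the previous batch.

    curr_tokens, prev_tokens: (num_tokens, dim) arrays of latent tokens
    threshold: hypothetical cosine-distance cutoff (not from the paper)
    Returns the retained tokens and a boolean keep mask.
    """
    # Cosine similarity between each token and its previous-batch counterpart
    num = np.sum(curr_tokens * prev_tokens, axis=1)
    denom = (np.linalg.norm(curr_tokens, axis=1)
             * np.linalg.norm(prev_tokens, axis=1) + 1e-8)
    drift = 1.0 - num / denom          # per-token drift score in [0, 2]
    keep = drift < threshold           # keep only stable tokens
    return curr_tokens[keep], keep

# Toy usage: token 1 is heavily perturbed and should be pruned
rng = np.random.default_rng(0)
prev = rng.normal(size=(4, 8))
curr = prev.copy()
curr[1] = -prev[1]                     # sign flip -> maximal drift (2.0)
kept, mask = prune_unstable_tokens(curr, prev)
```

In an auto-regressive pipeline, `kept` (rather than the full `curr_tokens`) would be fed back as conditioning for the next generation step, so a corrupted token never re-enters the context.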

Key Takeaways

  1. TokenTrim removes corrupted tokens from the cache during inference to reduce temporal drift in video generation.

  2. The method improves video quality by 5.58–6.90% and reduces temporal drift by 26 percentage points.

  3. Inference-time token pruning addresses error compounding without requiring model retraining or architectural changes.

Limitations

  • Results are reported only on the Wan model; generalization to other video models is unclear.

  • Method requires computing per-token drift scores at each step, potentially adding computational overhead.

Keywords

auto-regressive video generation, temporal drift, latent conditioning tokens, inference-time error propagation, unstable tokens, latent space
