Generative AI

Unified Latents (UL): How to train your latents

Jonathan Heek, Emiel Hoogeboom, Thomas Mensink, Tim Salimans
Published
February 19, 2026

Abstract

We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model. By linking the encoder's output noise to the prior's minimum noise level, we obtain a simple training objective that provides a tight upper bound on the latent bitrate. On ImageNet-512, our approach achieves a competitive FID of 1.4 with high reconstruction quality as measured by PSNR, while requiring fewer training FLOPs than models trained on Stable Diffusion latents. On Kinetics-600, we set a new state-of-the-art FVD of 1.3.
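The abstract describes tying the encoder's output noise to the diffusion prior's minimum noise level and training with a joint objective (a prior term on the latent plus a decoder/reconstruction term). The sketch below illustrates that coupling in a toy setting; the linear encoder/decoder, the trivial denoiser placeholder, and all names (`SIGMA_MIN`, `ul_objective`, etc.) are illustrative assumptions, not the paper's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

D_X, D_Z = 16, 4      # toy data and latent dimensionalities (assumed)
SIGMA_MIN = 0.1       # prior's minimum noise level, shared with the encoder

# Hypothetical linear maps standing in for the encoder and decoder networks.
W_enc = rng.standard_normal((D_X, D_Z)) / np.sqrt(D_X)
W_dec = rng.standard_normal((D_Z, D_X)) / np.sqrt(D_Z)

def encode(x):
    """Encode x, injecting Gaussian noise at the prior's minimum noise level.

    This is the coupling from the abstract: the encoder's output noise equals
    the smallest noise level the diffusion prior is trained on.
    """
    z_mean = x @ W_enc
    return z_mean + SIGMA_MIN * rng.standard_normal(z_mean.shape)

def ul_objective(x):
    """Toy stand-in for the joint objective: prior loss + reconstruction loss."""
    z = encode(x)

    # Denoising-style prior term at one noise level sampled log-uniformly
    # between SIGMA_MIN and 1.0 (schedule and weighting are assumptions).
    sigma = np.exp(rng.uniform(np.log(SIGMA_MIN), np.log(1.0)))
    eps = rng.standard_normal(z.shape)
    z_noisy = z + sigma * eps
    eps_hat = np.zeros_like(z_noisy)  # trivial "denoiser" placeholder
    prior_loss = np.mean((eps_hat - eps) ** 2)

    # Reconstruction term from the (here: linear, one-step) decoder; the
    # paper instead decodes with a diffusion model.
    recon_loss = np.mean((x - z @ W_dec) ** 2)
    return prior_loss + recon_loss

x = rng.standard_normal((8, D_X))
loss = ul_objective(x)
```

Because the same `SIGMA_MIN` appears in both the encoder and the prior's noise range, the prior never has to model latents cleaner than the encoder actually emits, which is what makes the bitrate bound in the abstract tight.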

Keywords

diffusion prior, diffusion model, latent representations, training objective, latent bitrate, FID, PSNR, FLOPs, FVD
