Generative AI

iFSQ: Improving FSQ for Image Generation with 1 Line of Code

Bin Lin, Zongjian Li, Yuwei Niu, Kaixiong Gong, Yunyang Ge, Yunlong Lin, Mingzhe Zheng, JianWei Zhang, Miles Yang, Zhao Zhong, Liefeng Bo, Li Yuan
Published: January 23, 2026
Authors: 12
Word Count: 8,226
Code: Includes code

iFSQ: Bridging image generation models with one line.

Abstract

The field of image generation is currently bifurcated into autoregressive (AR) models operating on discrete tokens and diffusion models utilizing continuous latents. This divide, rooted in the distinction between VQ-VAEs and VAEs, hinders unified modeling and fair benchmarking. Finite Scalar Quantization (FSQ) offers a theoretical bridge, yet vanilla FSQ suffers from a critical flaw: its equal-interval quantization can cause activation collapse. This mismatch forces a trade-off between reconstruction fidelity and information efficiency. In this work, we resolve this dilemma by simply replacing the activation function in the original FSQ with a distribution-matching mapping to enforce a uniform prior. Termed iFSQ, this simple strategy requires just one line of code yet mathematically guarantees both optimal bin utilization and reconstruction precision. Leveraging iFSQ as a controlled benchmark, we uncover two key insights: (1) The optimal equilibrium between discrete and continuous representations lies at approximately 4 bits per dimension. (2) Under identical reconstruction constraints, AR models exhibit rapid initial convergence, whereas diffusion models achieve a superior performance ceiling, suggesting that strict sequential ordering may limit the upper bounds of generation quality. Finally, we extend our analysis by adapting Representation Alignment (REPA) to AR models, yielding LlamaGen-REPA. Code is available at https://github.com/Tencent-Hunyuan/iFSQ
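The abstract's core idea can be illustrated with a minimal sketch. Below, `fsq_vanilla` follows the standard FSQ recipe (bound with `tanh`, then equal-interval rounding), while `ifsq` swaps the activation for a distribution-matching map. The specific choice of the Gaussian CDF (via `erf`) is an assumption for illustration, motivated by the paper's stated Gaussian-like activation premise; it is not the authors' exact implementation. If the latent `z` is roughly N(0, 1), then `erf(z / sqrt(2)) = 2Φ(z) − 1` is uniform on (−1, 1), so every quantization bin is occupied with equal probability:

```python
import math

def _quantize(u, levels):
    # Map u in (-1, 1) to an integer code in {0, ..., levels - 1}
    # with equal-interval rounding.
    return round((u + 1) / 2 * (levels - 1))

def fsq_vanilla(z, levels=16):
    # Vanilla FSQ: bound the activation with tanh, then round.
    # tanh(N(0, 1)) is far from uniform, so center bins are
    # overused and edge bins starve (activation collapse).
    return _quantize(math.tanh(z), levels)

def ifsq(z, levels=16):
    # iFSQ-style change (illustrative, not the paper's exact line):
    # ASSUMPTION: z ~ N(0, 1), so erf(z / sqrt(2)) = 2*Phi(z) - 1
    # is uniform on (-1, 1), enforcing a uniform prior over codes.
    return _quantize(math.erf(z / math.sqrt(2.0)), levels)
```

On Gaussian samples, the code histogram of `ifsq` is (near-)flat, i.e. its entropy approaches log(levels), whereas `fsq_vanilla` concentrates mass in the central bins; this is the "optimal bin utilization" property the abstract refers to.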

Key Takeaways

  1. iFSQ bridges autoregressive and diffusion models effectively.

  2. The optimal equilibrium between discrete and continuous representations lies at roughly 4 bits per dimension.

  3. Diffusion models reach a higher performance ceiling, while AR models converge faster initially.

Limitations

  • Assumes Gaussian-like distribution of neural activations.

  • Requires careful tuning for integration into existing models.

Keywords

autoregressive models, diffusion models, VQ-VAEs, VAEs, finite scalar quantization, activation collapse, distribution-matching mapping, iFSQ, representation alignment, LlamaGen-REPA
