Large Language Models

Spectral Condition for μP under Width-Depth Scaling

Chenyu Zheng, Rongzhen Wang, Xinyu Zhang, Chongxuan Li
Published: February 28, 2026

Abstract

Generative foundation models are increasingly scaled in both width and depth, posing significant challenges for stable feature learning and reliable hyperparameter (HP) transfer across model sizes. While maximal update parameterization (μP) has provided a principled solution to both problems for width scaling, existing extensions to the joint width-depth scaling regime remain fragmented, architecture- and optimizer-specific, and often rely on technically involved theories. In this work, we develop a simple and unified spectral framework for μP under joint width-depth scaling. Considering residual networks of varying block depths, we first introduce a spectral μP condition that precisely characterizes how the norms of weights and their per-step updates should scale with width and depth, unifying previously disparate μP formulations as special cases. Building on this condition, we then derive a general recipe for implementing μP across a broad class of optimizers by mapping the spectral constraints to concrete HP parameterizations. This approach not only recovers existing μP formulations (e.g., for SGD and AdamW) but also naturally extends to a wider range of optimizers. Finally, experiments on GPT-2 style language models demonstrate that the proposed spectral μP condition preserves stable feature learning and enables robust HP transfer under width-depth scaling.
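For reference, the width-only spectral μP condition from prior work constrains each weight matrix $W_\ell \in \mathbb{R}^{n_{\mathrm{out}} \times n_{\mathrm{in}}}$ and its per-step update $\Delta W_\ell$ through their spectral norms. The following is a hedged sketch of how a width-depth version of the condition might be written, with the depth dependence left as a generic factor $f(L)$ over $L$ residual blocks, since the paper's exact form is not reproduced in this abstract:

% Width-only spectral muP condition (prior work), plus a placeholder
% depth factor f(L); the precise depth scaling is the paper's contribution
% and is NOT specified here.
\[
  \|W_\ell\|_2 \;=\; \Theta\!\left(\sqrt{\tfrac{n_{\mathrm{out}}}{n_{\mathrm{in}}}}\right),
  \qquad
  \|\Delta W_\ell\|_2 \;=\; \Theta\!\left(f(L)\,\sqrt{\tfrac{n_{\mathrm{out}}}{n_{\mathrm{in}}}}\right).
\]

Here $f(L) \equiv 1$ recovers the width-only spectral condition; a depth-dependent choice on residual branches is what a joint width-depth condition would prescribe, and mapping such norm targets to concrete per-layer learning rates is what the paper describes as its optimizer-level recipe.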

Keywords

generative foundation models, width-depth scaling, maximal update parameterization, spectral framework, residual networks, weight norms, per-step updates, optimizer-specific, GPT-2 style language models, stable feature learning, hyperparameter transfer
