Speech & Audio AI

Qwen3-TTS Technical Report

Hangrui Hu, Xinfa Zhu, Ting He, Dake Guo, Bin Zhang, Xiong Wang, Zhifang Guo, Ziyue Jiang, Hongkun Hao, Zishan Guo, Xinyu Zhang, Pei Zhang, Baosong Yang, Jin Xu, Jingren Zhou, Junyang Lin
arXiv ID
2601.15621
Published
January 22, 2026
Authors
16

Abstract

In this report, we present the Qwen3-TTS series, a family of advanced multilingual, controllable, robust, and streaming text-to-speech models. Qwen3-TTS supports state-of-the-art 3-second voice cloning and description-based control, allowing both the creation of entirely novel voices and fine-grained manipulation of the output speech. Trained on over 5 million hours of speech data spanning 10 languages, Qwen3-TTS adopts a dual-track LM architecture for real-time synthesis, coupled with two speech tokenizers: 1) Qwen-TTS-Tokenizer-25Hz is a single-codebook codec emphasizing semantic content, which offers seamless integration with Qwen-Audio and enables streaming waveform reconstruction via a block-wise DiT. 2) Qwen-TTS-Tokenizer-12Hz achieves extreme bitrate reduction and ultra-low-latency streaming, enabling immediate first-packet emission (97 ms) through its 12.5 Hz, 16-layer multi-codebook design and a lightweight causal ConvNet. Extensive experiments indicate state-of-the-art performance across diverse objective and subjective benchmarks (e.g., the TTS multilingual test set, InstructTTSEval, and our long speech test set). To facilitate community research and development, we release both tokenizers and models under the Apache 2.0 license.
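The two tokenizer designs in the abstract trade frame rate against codebook count. A quick back-of-the-envelope sketch makes the contrast concrete; note the codebook sizes are not stated on this page and are assumed here (1024 entries, i.e. 10 bits per code) purely for illustration:

```python
import math

def codec_stats(frame_rate_hz: float, num_codebooks: int, codebook_size: int = 1024):
    """Return (milliseconds of audio per frame, token bitrate in bits/s)."""
    bits_per_frame = num_codebooks * math.log2(codebook_size)
    return 1000.0 / frame_rate_hz, frame_rate_hz * bits_per_frame

# Qwen-TTS-Tokenizer-25Hz: single codebook at 25 Hz (size assumed).
frame_ms_25, bps_25 = codec_stats(25.0, 1)    # 40.0 ms of audio per token

# Qwen-TTS-Tokenizer-12Hz: 16 codebooks at 12.5 Hz (size assumed).
frame_ms_12, bps_12 = codec_stats(12.5, 16)   # 80.0 ms of audio per token
```

Under these assumed codebook sizes, the 12.5 Hz tokenizer packs more bits per frame but emits frames less often, which is consistent with the abstract's pairing of it with a lightweight causal ConvNet for low first-packet latency.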

Keywords

text-to-speech, voice cloning, dual-track LM architecture, speech tokenizers, Qwen-TTS-Tokenizer-25Hz, Qwen-TTS-Tokenizer-12Hz, DiT, ConvNet, streaming waveform reconstruction, multilingual, controllable speech generation
