FlashLabs Chroma 1.0: A Real-Time End-to-End Spoken Dialogue Model with Personalized Voice Cloning

Tanyu Chen, Tairan Chen, Kai Shen, Zhenghua Bao, Zhihui Zhang, Man Yuan, Yi Shi
arXiv ID
2601.11141
Published
January 16, 2026

Abstract

Recent end-to-end spoken dialogue systems leverage speech tokenizers and neural audio codecs to enable LLMs to operate directly on discrete speech representations. However, these models often exhibit limited speaker identity preservation, hindering personalized voice interaction. In this work, we present Chroma 1.0, the first open-source, real-time, end-to-end spoken dialogue model that achieves both low-latency interaction and high-fidelity personalized voice cloning. Chroma achieves sub-second end-to-end latency through an interleaved text-audio token schedule (1:2) that supports streaming generation, while maintaining high-quality personalized voice synthesis across multi-turn conversations. Our experimental results demonstrate that Chroma achieves a 10.96% relative improvement in speaker similarity over the human baseline, with a Real-Time Factor (RTF) of 0.43, while maintaining strong reasoning and dialogue capabilities. Our code and models are publicly available at https://github.com/FlashLabs-AI-Corp/FlashLabs-Chroma and https://huggingface.co/FlashLabs/Chroma-4B.
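
The 1:2 interleaved text-audio token schedule mentioned in the abstract can be pictured with a short sketch. The following Python is an illustrative assumption rather than the released Chroma implementation: the function name interleave_1_to_2 and the token representations are hypothetical, and the snippet only shows the ordering that lets audio decoding begin while text is still being generated.

# A minimal sketch (assumption, not the released Chroma code) of a 1:2
# interleaved text-audio token schedule: each step emits one text token
# followed by two audio codec tokens, so the audio decoder can start
# producing waveform before the full text response exists.

def interleave_1_to_2(text_tokens, audio_tokens):
    """Yield one ("text", t) entry, then up to two ("audio", a) entries, per step."""
    audio_iter = iter(audio_tokens)
    for t in text_tokens:
        yield ("text", t)
        for _ in range(2):
            a = next(audio_iter, None)
            if a is not None:
                yield ("audio", a)
    # Audio is usually longer than text; flush any remaining audio tokens.
    for a in audio_iter:
        yield ("audio", a)

# Example: three text tokens and seven audio codec tokens.
schedule = list(interleave_1_to_2(["Hi", "there", "!"], range(7)))
# [('text', 'Hi'), ('audio', 0), ('audio', 1),
#  ('text', 'there'), ('audio', 2), ('audio', 3),
#  ('text', '!'), ('audio', 4), ('audio', 5), ('audio', 6)]

In a streaming setup, each yielded audio token would presumably be passed to the codec decoder as soon as it is generated, which is what keeps first-audio latency low.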

Keywords

speech tokenizers, neural audio codecs, LLMs, end-to-end spoken dialogue systems, voice cloning, real-time interaction, discrete speech representations, interleaved text-audio token schedule, streaming generation, speaker identity preservation
