
MeKi: Memory-based Expert Knowledge Injection for Efficient LLM Scaling

Ning Ding, Fangcheng Liu, Kyungrae Kim, Linji Hao, Kyeng-Hun Lee, Hyeonmok Ko, Yehui Tang
Published: February 3, 2026

Abstract

Scaling Large Language Models (LLMs) typically relies on increasing the number of parameters or the amount of test-time computation to boost performance. However, these strategies are impractical for edge-device deployment due to limited RAM and NPU resources. Despite these hardware constraints, deploying performant LLMs on edge devices such as smartphones remains crucial for user experience. To address this, we propose MeKi (Memory-based Expert Knowledge Injection), a novel system that scales LLM capacity via storage space rather than FLOPs. MeKi equips each Transformer layer with token-level memory experts that inject pre-stored semantic knowledge into the generation process. To bridge the gap between training capacity and inference efficiency, we employ a re-parameterization strategy that folds the parameter matrices used during training into a compact static lookup table. By offloading this knowledge to ROM, MeKi decouples model capacity from computational cost, introducing zero inference latency overhead. Extensive experiments demonstrate that MeKi significantly outperforms dense LLM baselines at identical inference speed, validating the effectiveness of the memory-based scaling paradigm for on-device LLMs. The project homepage is at https://github.com/ningding-o/MeKi.
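The re-parameterization idea in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the training-time expert path is a token-embedding lookup followed by a linear projection, and all names below (E, W, T, the function names) and the toy sizes are hypothetical.

import numpy as np

# Minimal sketch of MeKi-style re-parameterization, with toy sizes.
# Assumptions (not from the paper): the training-time expert path is an
# embedding lookup followed by a linear projection; E, W, T and the
# function names are illustrative placeholders.

vocab_size, expert_dim, hidden_dim = 1000, 64, 256
rng = np.random.default_rng(0)

E = rng.standard_normal((vocab_size, expert_dim))   # per-token expert memory (trained)
W = rng.standard_normal((expert_dim, hidden_dim))   # projection matrix (trained)

def expert_out_training(token_id: int) -> np.ndarray:
    # Training-time path: lookup, then a matmul (costs FLOPs at every step).
    return E[token_id] @ W

# Re-parameterization: fold E @ W into one static table, computed offline.
# The folded table is read-only, so it can be stored in ROM and memory-mapped;
# inference then needs only a lookup, with no extra FLOPs.
T = E @ W  # shape (vocab_size, hidden_dim)

def expert_out_inference(token_id: int) -> np.ndarray:
    # Inference-time path: a single lookup into the folded table.
    return T[token_id]

# The two paths agree, so folding changes cost, not behavior.
assert np.allclose(expert_out_training(123), expert_out_inference(123))

The sketch shows the trade the abstract describes: the folded table is larger than the factors it replaces, but because it is static it can live in cheap read-only storage rather than RAM, decoupling capacity from per-token compute.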

Keywords

Transformer layer, token-level memory experts, re-parameterization strategy, parameter matrices, lookup table, ROM, memory-based scaling paradigm
