MSN: A Memory-based Sparse Activation Scaling Framework for Large-scale Industrial Recommendation
- URL: http://arxiv.org/abs/2602.07526v1
- Date: Sat, 07 Feb 2026 12:43:51 GMT
- Title: MSN: A Memory-based Sparse Activation Scaling Framework for Large-scale Industrial Recommendation
- Authors: Shikang Wu, Hui Lu, Jinqiu Jin, Zheng Chai, Shiyong Hong, Junjie Zhang, Shanlei Mu, Kaiyuan Ma, Tianyi Liu, Yuchao Zheng, Zhe Wang, Jingjian Lin
- Abstract summary: We propose MSN, a memory-based sparse activation scaling framework for recommendation models. MSN retrieves personalized representations from a large parameterized memory and integrates them into downstream feature interaction modules. MSN consistently improves recommendation performance while maintaining high efficiency.
- Score: 19.132874291460936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scaling deep learning recommendation models is an effective way to improve model expressiveness. Existing approaches often incur substantial computational overhead, making them difficult to deploy in large-scale industrial systems under strict latency constraints. Recent sparse activation scaling methods, such as Sparse Mixture-of-Experts, reduce computation by activating only a subset of parameters, but still suffer from high memory access costs and limited personalization capacity due to the large size and small number of experts. To address these challenges, we propose MSN, a memory-based sparse activation scaling framework for recommendation models. MSN dynamically retrieves personalized representations from a large parameterized memory and integrates them into downstream feature interaction modules via a memory gating mechanism, enabling fine-grained personalization with low computational overhead. To allow further expansion of the memory capacity while keeping both computational and memory access costs under control, MSN adopts a Product-Key Memory (PKM) mechanism, which reduces memory retrieval complexity from linear to sub-linear. In addition, normalization and over-parameterization techniques are introduced to maintain balanced memory utilization and prevent memory retrieval collapse. We further design a customized Sparse-Gather operator and adopt the AirTopK operator to improve training and inference efficiency in industrial settings. Extensive experiments demonstrate that MSN consistently improves recommendation performance while maintaining high efficiency. Moreover, MSN has been successfully deployed in the Douyin Search Ranking System, achieving significant gains over deployed state-of-the-art models in both offline evaluation metrics and a large-scale online A/B test.
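To make the sub-linear retrieval concrete, here is a minimal PyTorch sketch of a generic product-key memory lookup in the spirit of PKM (Lample et al., 2019): the query is split in half, each half is scored against a small sub-key table, and only the k x k cross-product of the two top-k lists is ever considered. All names, sizes, and the softmax-weighted readout are illustrative assumptions, not the authors' implementation, which additionally relies on the custom Sparse-Gather and AirTopK operators not reproduced here.

```python
import torch
import torch.nn.functional as F

class ProductKeyMemory(torch.nn.Module):
    """Minimal sketch of a Product-Key Memory (PKM) lookup.

    With two sub-key tables of size n each, the memory holds n * n slots,
    but retrieval only scores 2 * n sub-keys plus k * k candidate
    combinations -- sub-linear in the memory size.
    """
    def __init__(self, dim=128, n_sub_keys=256, k=8):
        super().__init__()
        assert dim % 2 == 0
        self.k, self.n = k, n_sub_keys
        # Two sets of sub-keys; their Cartesian product indexes n*n slots.
        self.sub_keys1 = torch.nn.Parameter(torch.randn(n_sub_keys, dim // 2))
        self.sub_keys2 = torch.nn.Parameter(torch.randn(n_sub_keys, dim // 2))
        # One value vector per (i, j) slot in the product space.
        self.values = torch.nn.Embedding(n_sub_keys * n_sub_keys, dim)

    def forward(self, query):                        # query: (batch, dim)
        q1, q2 = query.chunk(2, dim=-1)              # split the query in half
        s1 = q1 @ self.sub_keys1.t()                 # (batch, n)
        s2 = q2 @ self.sub_keys2.t()                 # (batch, n)
        top1, idx1 = s1.topk(self.k, dim=-1)         # top-k per sub-space
        top2, idx2 = s2.topk(self.k, dim=-1)
        # Scores of all k*k candidate combinations.
        cand = (top1[:, :, None] + top2[:, None, :]).reshape(-1, self.k * self.k)
        best, flat = cand.topk(self.k, dim=-1)       # final top-k over candidates
        # Recover (i, j) slot indices from the flattened candidate grid.
        i = torch.gather(idx1, 1, flat // self.k)
        j = torch.gather(idx2, 1, flat % self.k)
        slots = i * self.n + j                       # (batch, k) memory addresses
        w = F.softmax(best, dim=-1).unsqueeze(-1)    # retrieval weights
        return (w * self.values(slots)).sum(dim=1)   # (batch, dim)
```

With n_sub_keys = 256 the memory exposes 65,536 addressable slots, yet each query scores only 512 sub-keys plus 64 candidate combinations.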
Related papers
- HyMem: Hybrid Memory Architecture with Dynamic Retrieval Scheduling [7.24393498822329]
HyMem is a hybrid memory architecture that enables dynamic on-demand scheduling through multi-granular memory representations. We show that HyMem achieves strong performance on both the LOCOMO and LongMemEval benchmarks, outperforming full-context baselines while reducing computational cost by 92.6%.
arXiv Detail & Related papers (2026-02-15T00:06:19Z)
- MALLOC: Benchmarking the Memory-aware Long Sequence Compression for Large Sequential Recommendation [84.53415999381203]
MALLOC is a benchmark for memory-aware long sequence compression. It is integrated into state-of-the-art recommenders, enabling a reproducible evaluation platform.
arXiv Detail & Related papers (2026-01-28T04:11:50Z)
- Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents [57.38404718635204]
Large language model (LLM) agents face fundamental limitations in long-horizon reasoning due to finite context windows. Existing methods typically handle long-term memory (LTM) and short-term memory (STM) as separate components. We propose Agentic Memory (AgeMem), a unified framework that integrates LTM and STM management directly into the agent's policy.
arXiv Detail & Related papers (2026-01-05T08:24:16Z)
- Mixture-of-Channels: Exploiting Sparse FFNs for Efficient LLMs Pre-Training and Inference [16.71963410333802]
Large language models (LLMs) have demonstrated remarkable success across diverse artificial intelligence tasks. MoC substantially reduces activation memory during pre-training, delivering significant memory savings and throughput gains while maintaining competitive model performance.
arXiv Detail & Related papers (2025-11-12T13:30:57Z)
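The MoC summary above is terse; purely as an illustration of the general idea of a sparsely activated FFN (a generic construction of my own, not the paper's method), the sketch below routes each token through only its top-k hidden channels, so only those k activations would need to be kept for the backward pass.

```python
import torch
import torch.nn.functional as F

class TopKChannelFFN(torch.nn.Module):
    """Generic sparsely activated FFN: each token uses only its top-k
    hidden channels, so the saved activation is k values per token
    instead of the full hidden width."""
    def __init__(self, dim=512, hidden=2048, k=256):
        super().__init__()
        self.k = k
        self.w_in = torch.nn.Linear(dim, hidden)
        self.w_out = torch.nn.Linear(hidden, dim)

    def forward(self, x):                          # x: (tokens, dim)
        h = self.w_in(x)                           # (tokens, hidden)
        scores, idx = h.topk(self.k, dim=-1)       # keep k largest pre-activations
        sparse = torch.zeros_like(h)
        # Scatter the activated channels back; all others stay exactly zero,
        # so only (scores, idx) would need to be stored for backward.
        sparse.scatter_(-1, idx, F.gelu(scores))
        return self.w_out(sparse)
```

A real implementation would avoid materializing the dense `sparse` tensor; this sketch keeps it for clarity.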
- ExpertFlow: Adaptive Expert Scheduling and Memory Coordination for Efficient MoE Inference [8.296993547783808]
ExpertFlow is a runtime system for MoE inference that combines adaptive expert prefetching and cache-aware routing. Our evaluation demonstrates that ExpertFlow reduces model stall time to less than 0.1% of the baseline.
arXiv Detail & Related papers (2025-10-30T17:29:27Z)
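For context on what "routing" produces for a runtime like ExpertFlow to prefetch against, here is a generic top-k MoE router sketch; the class and parameter names are illustrative assumptions, not ExpertFlow's API.

```python
import torch
import torch.nn.functional as F

class TopKRouter(torch.nn.Module):
    """Generic top-k MoE router: each token is dispatched to the k experts
    with the highest gate logits (illustrative, not ExpertFlow itself)."""
    def __init__(self, dim=512, n_experts=64, k=2):
        super().__init__()
        self.k = k
        self.gate = torch.nn.Linear(dim, n_experts)

    def forward(self, x):                               # x: (tokens, dim)
        logits = self.gate(x)                           # (tokens, n_experts)
        weights, experts = logits.topk(self.k, dim=-1)  # chosen experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over the k
        return weights, experts                         # dispatch information
```

A prefetching runtime can inspect `experts` one layer ahead to stage the selected expert weights into device memory before they are needed.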
- OptPipe: Memory- and Scheduling-Optimized Pipeline Parallelism for LLM Training [13.814101909348183]
Pipeline parallelism (PP) has become a standard technique for scaling large language model (LLM) training across multiple devices. In this work, we revisit the pipeline scheduling problem from a principled optimization perspective. We formulate scheduling as a constrained optimization problem that jointly accounts for memory capacity, activation reuse, and pipeline bubble minimization.
arXiv Detail & Related papers (2025-10-06T01:06:33Z)
- The Curious Case of In-Training Compression of State Space Models [49.819321766705514]
State Space Models (SSMs) tackle long sequence modeling tasks efficiently, offering both parallelizable training and fast inference. A key design challenge is striking the right balance between maximizing expressivity and limiting this computational burden. Our approach, CompreSSM, applies to Linear Time-Invariant SSMs such as Linear Recurrent Units, but is also extendable to selective models.
arXiv Detail & Related papers (2025-10-03T09:02:33Z)
- Quantifying Memory Utilization with Effective State-Size [73.52115209375343]
We develop a measure of "memory utilization". This metric is tailored to the fundamental class of systems with input-invariant and input-varying linear operators.
arXiv Detail & Related papers (2025-04-28T08:12:30Z)
- CalibQuant: 1-Bit KV Cache Quantization for Multimodal LLMs [45.77132019859689]
CalibQuant is a visual quantization strategy that drastically reduces both memory and computational overhead. We achieve a 10x throughput increase on InternVL models.
arXiv Detail & Related papers (2025-02-15T05:08:01Z)
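CalibQuant's actual scheme is not described above; purely as a generic illustration of 1-bit quantization with per-channel calibration, the sketch below binarizes a KV-cache tensor against calibrated channel means and scales. All function names and the sign/scale scheme are my own assumptions.

```python
import torch

def calibrate(kv: torch.Tensor):
    """Per-channel calibration statistics over a held-out batch.
    kv: (tokens, channels). Returns (mean, scale) per channel."""
    mean = kv.mean(dim=0)
    scale = (kv - mean).abs().mean(dim=0)   # average magnitude per channel
    return mean, scale

def quantize_1bit(kv, mean, scale):
    """Keep only the sign bit per element (stored here as bool)."""
    return (kv - mean) >= 0                 # (tokens, channels) of bools

def dequantize_1bit(bits, mean, scale):
    """Reconstruct: mean +/- calibrated scale depending on the sign bit."""
    signs = bits.float() * 2.0 - 1.0        # {0, 1} -> {-1, +1}
    return mean + signs * scale

# Usage: calibrate once, then round-trip a cache tensor.
calib = torch.randn(1024, 64)
mean, scale = calibrate(calib)
kv = torch.randn(4096, 64)
rec = dequantize_1bit(quantize_1bit(kv, mean, scale), mean, scale)
print((rec - kv).abs().mean())              # mean reconstruction error
```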
- Sparse Gradient Compression for Fine-Tuning Large Language Models [58.44973963468691]
Fine-tuning large language models (LLMs) for downstream tasks has become increasingly crucial due to their widespread use and the growing availability of open-source models. High memory costs associated with fine-tuning remain a significant challenge, especially as models increase in size. We propose sparse gradient compression (SGC) to address these limitations.
arXiv Detail & Related papers (2025-02-01T04:18:28Z)
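Again as a generic sketch rather than SGC itself: classic top-k gradient sparsification keeps only the largest-magnitude gradient entries, which bounds the memory (and communication) cost of gradient state at a chosen density.

```python
import math
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude entries of a gradient tensor.
    Returns (values, flat_indices, shape) -- a compact sparse encoding."""
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * ratio))
    _, idx = flat.abs().topk(k)
    return flat[idx], idx, grad.shape

def topk_decompress(values, idx, shape):
    """Scatter the kept entries back into a dense zero tensor."""
    flat = torch.zeros(math.prod(shape), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)

# Round-trip at 1% density: only the dominant coordinates survive.
g = torch.randn(1000, 100)
v, i, s = topk_compress(g, ratio=0.01)
g_hat = topk_decompress(v, i, s)
```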
- SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, using a minimal number of late pre-trained layers alleviates the peak memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z)