Choosing How to Remember: Adaptive Memory Structures for LLM Agents
- URL: http://arxiv.org/abs/2602.14038v1
- Date: Sun, 15 Feb 2026 07:56:24 GMT
- Title: Choosing How to Remember: Adaptive Memory Structures for LLM Agents
- Authors: Mingfei Lu, Mengjia Wu, Feng Liu, Jiawei Xu, Weikai Li, Haoyang Wang, Zhengdong Hu, Ying Ding, Yizhou Sun, Jie Lu, Yi Zhang
- Abstract summary: Memory is critical for enabling large language model (LLM) based agents to maintain coherent behavior over long-horizon interactions. We propose a unified framework, FluxMem, that enables adaptive memory organization for LLM agents. Experiments on two long-horizon benchmarks, PERSONAMEM and LoCoMo, demonstrate that our method achieves average improvements of 9.18% and 6.14%, respectively.
- Score: 43.27579458682491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memory is critical for enabling large language model (LLM) based agents to maintain coherent behavior over long-horizon interactions. However, existing agent memory systems suffer from two key gaps: they rely on a one-size-fits-all memory structure and do not model memory structure selection as a context-adaptive decision, limiting their ability to handle heterogeneous interaction patterns and resulting in suboptimal performance. We propose a unified framework, FluxMem, that enables adaptive memory organization for LLM agents. Our framework equips agents with multiple complementary memory structures. It explicitly learns to select among these structures based on interaction-level features, using offline supervision derived from downstream response quality and memory utilization. To support robust long-horizon memory evolution, we further introduce a three-level memory hierarchy and a Beta Mixture Model-based probabilistic gate for distribution-aware memory fusion, replacing brittle similarity thresholds. Experiments on two long-horizon benchmarks, PERSONAMEM and LoCoMo, demonstrate that our method achieves average improvements of 9.18% and 6.14%, respectively.
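To make the distribution-aware gating idea concrete, below is a minimal Python sketch of a Beta-mixture fusion gate in the spirit of the one described in the abstract. The two-component mixture, the moment-matched EM fit, and the 0.5 decision rule are illustrative assumptions, not the paper's exact formulation; the point is that the fuse/store decision comes from a posterior over the observed similarity distribution rather than a fixed threshold.

```python
# Sketch of a Beta Mixture Model gate for memory fusion (assumptions noted above).
import numpy as np
from scipy.stats import beta

def fit_beta_mixture(sims, n_iter=50, eps=1e-6):
    """Fit a 2-component Beta mixture over similarities in (0, 1) via EM,
    with moment-matched M-steps for the Beta parameters."""
    sims = np.clip(np.asarray(sims, dtype=float), eps, 1 - eps)
    # Init: component 0 for low ("novel") and component 1 for high ("duplicate") sims.
    params = [(2.0, 5.0), (5.0, 2.0)]
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each similarity.
        dens = np.stack([w * beta.pdf(sims, a, b)
                         for w, (a, b) in zip(weights, params)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: reweight components and moment-match Beta(a, b) per component.
        new_params = []
        for k in range(2):
            r = resp[k]
            m = (r * sims).sum() / r.sum()
            v = max((r * (sims - m) ** 2).sum() / r.sum(), eps)
            common = m * (1 - m) / v - 1
            new_params.append((max(m * common, eps), max((1 - m) * common, eps)))
        params = new_params
        weights = resp.sum(axis=1) / len(sims)
    return weights, params

def fuse_probability(sim, weights, params):
    """Posterior probability that `sim` belongs to the high-similarity
    component (index 1, given the initialization above)."""
    sim = float(np.clip(sim, 1e-6, 1 - 1e-6))
    dens = [w * beta.pdf(sim, a, b) for w, (a, b) in zip(weights, params)]
    return dens[1] / sum(dens)

# Usage: gate a new memory against its most similar stored memory (toy data).
history = np.concatenate([np.random.beta(2, 8, 300),   # mostly-novel pairs
                          np.random.beta(8, 2, 100)])  # near-duplicate pairs
w, p = fit_beta_mixture(history)
if fuse_probability(0.83, w, p) > 0.5:   # replaces a brittle fixed cutoff
    print("fuse into existing memory")
else:
    print("store as a new memory")
```

In a real system the mixture would be refit as the memory store grows, so the gate adapts to how similarity scores are actually distributed for a given agent and domain instead of relying on one hand-tuned threshold.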
Related papers
- AMemGym: Interactive Memory Benchmarking for Assistants in Long-Horizon Conversations [61.6579785305668]
AMemGym is an interactive environment enabling on-policy evaluation and optimization for memory-driven personalization. Our framework provides a scalable, diagnostically rich environment for advancing memory capabilities in conversational agents.
arXiv Detail & Related papers (2026-03-02T15:15:11Z)
- UMEM: Unified Memory Extraction and Management Framework for Generalizable Memory [46.87954895079213]
Self-evolving memory serves as trainable parameters for Large Language Models (LLMs). Existing methods predominantly optimize memory management while treating memory extraction as a static process. We propose Unified Memory Extraction and Management (UMEM) to jointly optimize a Large Language Model to simultaneously extract and manage memories.
arXiv Detail & Related papers (2026-02-11T08:58:41Z)
- MemAdapter: Fast Alignment across Agent Memory Paradigms via Generative Subgraph Retrieval [25.68006224976726]
The memory mechanism is a core component of LLM-based agents, enabling reasoning and knowledge discovery over long-horizon contexts. Existing agent memory systems are typically designed within isolated paradigms with tightly coupled retrieval methods. MemAdapter is a memory retrieval framework that enables fast alignment across agent memory paradigms.
arXiv Detail & Related papers (2026-02-09T08:09:25Z)
- LatentMem: Customizing Latent Memory for Multi-Agent Systems [44.59989123744384]
We propose LatentMem, a learnable multi-agent memory framework designed to customize agent-specific memories in a token-efficient manner. Specifically, LatentMem comprises an experience bank that stores raw interaction trajectories in a lightweight form, and a memory composer that synthesizes compact latent memories conditioned on retrieved experience and agent-specific contexts. A minimal sketch of this two-part design follows this entry.
arXiv Detail & Related papers (2026-02-03T03:03:16Z)
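As an illustration of the experience-bank-plus-composer design described above, here is a minimal sketch. The class names, embedding-based retrieval, and truncation-based composition are assumptions for illustration; LatentMem's actual composer produces latent (vector) memories inside the model, which a short script cannot reproduce.

```python
# Sketch of an experience bank + memory composer (assumptions noted above).
import numpy as np

class ExperienceBank:
    """Stores raw interaction trajectories in a lightweight form:
    a short text record plus an embedding for retrieval."""
    def __init__(self, embed):
        self.embed = embed          # callable: str -> unit-norm np.ndarray
        self.records, self.vectors = [], []

    def add(self, trajectory):
        self.records.append(trajectory)
        self.vectors.append(self.embed(trajectory))

    def retrieve(self, query, k=3):
        q = self.embed(query)
        sims = np.array([v @ q for v in self.vectors])
        return [self.records[i] for i in np.argsort(sims)[::-1][:k]]

class MemoryComposer:
    """Synthesizes a compact memory conditioned on retrieved experience and
    an agent-specific context (naive truncation stands in for the learned
    latent composer here)."""
    def __init__(self, max_tokens=64):
        self.max_tokens = max_tokens

    def compose(self, agent_context, experiences):
        merged = f"[{agent_context}] " + " | ".join(experiences)
        return " ".join(merged.split()[: self.max_tokens])

# Usage with a toy hash-seeded embedding (a placeholder, not a real encoder).
def toy_embed(text, dim=32):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

bank = ExperienceBank(toy_embed)
bank.add("user asked for vegetarian recipes; agent suggested lentil curry")
bank.add("user mentioned a peanut allergy during meal planning")
hits = bank.retrieve("plan dinner for the user", k=2)
print(MemoryComposer().compose("agent=chef-assistant", hits))
```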
- Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents [57.38404718635204]
Large language model (LLM) agents face fundamental limitations in long-horizon reasoning due to finite context windows. Existing methods typically handle long-term memory (LTM) and short-term memory (STM) as separate components. We propose Agentic Memory (AgeMem), a unified framework that integrates LTM and STM management directly into the agent's policy.
arXiv Detail & Related papers (2026-01-05T08:24:16Z)
- MemEvolve: Meta-Evolution of Agent Memory Systems [66.09735157017558]
Self-evolving memory systems are reshaping the evolutionary paradigm of large language model (LLM)-based agents. MemEvolve is a meta-evolutionary framework that jointly evolves agents' experiential knowledge and their memory architecture. EvolveLab is a unified self-evolving memory framework that distills twelve representative memory systems into a modular design space.
arXiv Detail & Related papers (2025-12-21T14:26:14Z)
- CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension [55.29309306566238]
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge motivates a cohesive memory module that can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z)
- Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning [89.55738101744657]
Large Language Models (LLMs) have demonstrated impressive capabilities across a wide range of NLP tasks, but they remain fundamentally stateless. We present Memory-R1, a reinforcement learning framework that equips LLMs with the ability to actively manage and utilize external memory.
arXiv Detail & Related papers (2025-08-27T12:26:55Z)
- Hierarchical Memory for High-Efficiency Long-Term Reasoning in LLM Agents [19.04968632268433]
We propose a hierarchical memory architecture for Large Language Model agents (LLM agents). Each memory vector is embedded with a positional index encoding pointing to its semantically related sub-memories in the next layer. During the reasoning phase, an index-based routing mechanism enables efficient, layer-by-layer retrieval without performing exhaustive similarity computations. A minimal sketch of this routing scheme follows this entry.
arXiv Detail & Related papers (2025-07-23T12:45:44Z)
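As an illustration of the index-routed retrieval summarized in this entry, here is a minimal sketch. The two-layer toy memory, top-k routing width, and dot-product scoring are illustrative assumptions; the paper's positional index encoding is simplified here to explicit child-index lists.

```python
# Sketch of index-routed hierarchical memory retrieval (assumptions noted above).
import numpy as np

class HierarchicalMemory:
    def __init__(self, layers):
        # layers[d] is a list of (vector, child_indices) pairs, where
        # child_indices point into layers[d + 1] (empty at the leaf layer).
        self.layers = layers

    def retrieve(self, query, top_k=2):
        """Route layer by layer: score only the children of nodes chosen at
        the previous layer instead of scanning every vector exhaustively."""
        candidates = list(range(len(self.layers[0])))
        path = []
        for layer in self.layers:
            vecs = np.stack([layer[i][0] for i in candidates])
            scores = vecs @ query            # similarities over routed candidates only
            order = np.argsort(scores)[::-1][:top_k]
            chosen = [candidates[j] for j in order]
            path.append(chosen)
            # Follow stored child indices into the next layer.
            candidates = sorted({c for i in chosen for c in layer[i][1]})
            if not candidates:
                break
        return path                          # selected node ids per layer, coarse to fine

# Usage with a toy 2-layer memory of unit vectors (random placeholder data).
rng = np.random.default_rng(0)
leaf = [(v / np.linalg.norm(v), []) for v in rng.normal(size=(6, 4))]
root = [((leaf[i][0] + leaf[j][0]) / np.linalg.norm(leaf[i][0] + leaf[j][0]),
         [i, j]) for i, j in [(0, 1), (2, 3), (4, 5)]]
memory = HierarchicalMemory([root, leaf])
print(memory.retrieve(leaf[3][0]))           # leaf 3's parent and then leaf 3 rank first
```

The efficiency argument is that retrieval cost scales with the routing width times the depth rather than with the total number of stored memories, since only the children of selected nodes are ever scored.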
- G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems [44.844636264484905]
Large language model (LLM)-powered multi-agent systems (MAS) have demonstrated cognitive and execution capabilities that far exceed those of single LLM agents. We introduce G-Memory, a hierarchical, agentic memory system for MAS inspired by organizational memory theory. G-Memory improves success rates in embodied action and accuracy in knowledge QA by up to 20.89% and 10.12%, respectively.
arXiv Detail & Related papers (2025-06-09T03:43:46Z)
- From Single to Multi-Granularity: Toward Long-Term Memory Association and Selection of Conversational Agents [79.87304940020256]
Large Language Models (LLMs) have been widely adopted in conversational agents. MemGAS is a framework that enhances memory consolidation through multi-granularity association, adaptive selection, and retrieval. Experiments on four long-term memory benchmarks demonstrate that MemGAS outperforms state-of-the-art methods on both question answering and retrieval tasks.
arXiv Detail & Related papers (2025-05-26T06:13:07Z)