MAPLE: A Sub-Agent Architecture for Memory, Learning, and Personalization in Agentic AI Systems
- URL: http://arxiv.org/abs/2602.13258v1
- Date: Tue, 03 Feb 2026 03:46:39 GMT
- Title: MAPLE: A Sub-Agent Architecture for Memory, Learning, and Personalization in Agentic AI Systems
- Authors: Deepak Babu Piskala
- Abstract summary: Large language model (LLM) agents have emerged as powerful tools for complex tasks, yet their ability to adapt to individual users remains fundamentally limited. We argue this limitation stems from a critical architectural conflation: current systems treat memory, learning, and personalization as a unified capability rather than three distinct mechanisms. We propose MAPLE, a principled decomposition where Memory handles storage and retrieval infrastructure; Learning extracts intelligence from accumulated interactions asynchronously; and Personalization applies learned knowledge in real-time within finite context budgets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language model (LLM) agents have emerged as powerful tools for complex tasks, yet their ability to adapt to individual users remains fundamentally limited. We argue this limitation stems from a critical architectural conflation: current systems treat memory, learning, and personalization as a unified capability rather than three distinct mechanisms requiring different infrastructure, operating on different timescales, and benefiting from independent optimization. We propose MAPLE (Memory-Adaptive Personalized LEarning), a principled decomposition where Memory handles storage and retrieval infrastructure; Learning extracts intelligence from accumulated interactions asynchronously; and Personalization applies learned knowledge in real-time within finite context budgets. Each component operates as a dedicated sub-agent with specialized tooling and well-defined interfaces. Experimental evaluation on the MAPLE-Personas benchmark demonstrates that our decomposition achieves a 14.6% improvement in personalization score compared to a stateless baseline (p < 0.01, Cohen's d = 0.95) and increases trait incorporation rate from 45% to 75% -- enabling agents that genuinely learn and adapt.
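The abstract's three-way decomposition can be illustrated with a minimal sketch: a Memory sub-agent that only stores and retrieves, a Learning sub-agent that asynchronously distills traits from accumulated interactions, and a Personalization sub-agent that injects the top learned traits under a fixed context budget. All class and method names below are hypothetical illustrations, not the paper's actual API.

```python
# Minimal sketch of the Memory / Learning / Personalization split.
# Names and logic are illustrative assumptions, not MAPLE's implementation.
from dataclasses import dataclass, field

@dataclass
class MemoryAgent:
    """Storage and retrieval infrastructure: raw interactions in, records out."""
    store: list = field(default_factory=list)

    def write(self, interaction: str) -> None:
        self.store.append(interaction)

    def retrieve(self, query: str) -> list:
        # Stand-in for real retrieval (e.g. embedding search): substring match.
        return [r for r in self.store if query.lower() in r.lower()]

@dataclass
class LearningAgent:
    """Asynchronously distills traits from accumulated interactions."""
    traits: dict = field(default_factory=dict)

    def consolidate(self, memory: MemoryAgent) -> None:
        # Toy "learning" pass: count token mentions as a proxy for preferences.
        for record in memory.store:
            for word in record.lower().split():
                self.traits[word] = self.traits.get(word, 0) + 1

@dataclass
class PersonalizationAgent:
    """Applies learned traits at request time within a finite context budget."""
    context_budget: int = 3  # max traits injected into the prompt

    def build_context(self, learning: LearningAgent) -> list:
        ranked = sorted(learning.traits.items(), key=lambda kv: -kv[1])
        return [trait for trait, _ in ranked[: self.context_budget]]

# Usage: memory accumulates online, learning consolidates offline,
# personalization selects only the traits that fit the budget.
mem = MemoryAgent()
mem.write("user prefers concise python examples")
mem.write("user prefers dark mode")
learner = LearningAgent()
learner.consolidate(mem)
persona = PersonalizationAgent(context_budget=3)
top_traits = persona.build_context(learner)
```

The point of the sketch is the interface boundary: each sub-agent could be optimized or replaced independently, which is the independence-of-mechanisms argument the abstract makes.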
Related papers
- ALIGN-FL: Architecture-independent Learning through Invariant Generative component sharing in Federated Learning [4.073149631618324]
We present ALIGN-FL, a novel approach to distributed learning that addresses the challenge of learning from disjoint data distributions. Instead of exchanging full model parameters, our framework enables privacy-preserving learning by transferring only generative capabilities across clients.
arXiv Detail & Related papers (2025-12-15T13:35:27Z) - Towards a Science of Scaling Agent Systems [79.64446272302287]
We formalize a definition of agent evaluation and characterize scaling laws as the interplay between agent quantity, coordination structure, model capability, and task properties. We derive a predictive model using coordination metrics (cross-validated R2=0), enabling prediction on unseen task domains. We identify three effects, including (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead; and (2) capability saturation: coordination yields diminishing or negative returns once single-agent baselines exceed 45%.
arXiv Detail & Related papers (2025-12-09T06:52:21Z) - PersonaMem-v2: Towards Personalized Intelligence via Learning Implicit User Personas and Agentic Memory [56.81126490418336]
Personalization is one of the next milestones in advancing AI capability and alignment. PersonaMem-v2 simulates 1,000 realistic user-chatbot interactions across 300+ scenarios, 20,000+ user preferences, and 128k-token context windows. We train Qwen3-4B to outperform GPT-5, reaching 53% accuracy in implicit personalization.
arXiv Detail & Related papers (2025-12-07T06:48:23Z) - O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents [60.1848551962911]
O-Mem is a novel memory framework based on active user profiling. O-Mem supports hierarchical retrieval of persona attributes and topic-related context.
arXiv Detail & Related papers (2025-11-17T16:55:19Z) - Enabling Personalized Long-term Interactions in LLM-based Agents through Persistent Memory and User Profiles [0.4885400580268118]
Large language models (LLMs) increasingly serve as the central control unit of AI agents. We present a framework that integrates persistent memory, dynamic coordination, self-validation, and evolving user profiles to enable personalized long-term interactions.
arXiv Detail & Related papers (2025-10-09T08:22:16Z) - Memory Management and Contextual Consistency for Long-Running Low-Code Agents [0.0]
This paper proposes a novel hybrid memory system designed specifically for low-code/no-code (LCNC) agents. Inspired by cognitive science, our architecture combines episodic and semantic memory components with a proactive "Intelligent Decay" mechanism. A key innovation is a user-centric visualization interface, aligned with the LCNC paradigm, which allows non-technical users to manage the agent's memory directly.
arXiv Detail & Related papers (2025-09-27T08:01:26Z) - MCP-Orchestrated Multi-Agent System for Automated Disinformation Detection [84.75972919995398]
This paper presents a multi-agent system that uses relation extraction to detect disinformation in news articles. The proposed Agentic AI system combines four agents, including (i) a machine learning agent (logistic regression), (ii) a Wikipedia knowledge check agent, and (iv) a web-scraped data analyzer. Results demonstrate that the multi-agent ensemble achieves 95.3% accuracy with an F1 score of 0.964, significantly outperforming individual agents and traditional approaches.
arXiv Detail & Related papers (2025-08-13T19:14:48Z) - PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time [87.99027488664282]
PersonaAgent is a framework designed to address versatile personalization tasks. It integrates a personalized memory module and a personalized action module. A test-time user-preference alignment strategy ensures real-time alignment with user preferences.
arXiv Detail & Related papers (2025-06-06T17:29:49Z) - Agent4Edu: Generating Learner Response Data by Generative Agents for Intelligent Education Systems [27.161576657380646]
Agent4Edu is a novel personalized learning simulator leveraging recent advances in human intelligence through large language models (LLMs). The learner profiles are built using real-world response data, capturing practice styles and cognitive factors. Each agent can interact with personalized learning algorithms, such as computerized adaptive testing.
arXiv Detail & Related papers (2025-01-17T18:05:04Z) - Kolb-Based Experiential Learning for Generalist Agents with Human-Level Kaggle Data Science Performance [81.05882480184587]
We propose a computational framework combining Kolb's learning cycle with Vygotsky's zone of proximal development (ZPD) for autonomous agents. With 9 gold, 8 silver, and 12 bronze medal-level performances, including 4 gold and 4 silver on prize-awarding competitions, Agent K is the first AI system to successfully integrate Kolb- and Vygotsky-inspired human cognitive learning.
arXiv Detail & Related papers (2024-11-05T23:55:23Z) - A Unified and Efficient Coordinating Framework for Autonomous DBMS Tuning [34.85351481228439]
We propose a unified coordinating framework to efficiently utilize existing ML-based agents.
We show that it can effectively utilize different ML-based agents and find better configurations, with 1.4-14.1x speedups on workload execution time.
arXiv Detail & Related papers (2023-03-10T05:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.