Embodied Agents Meet Personalization: Investigating Challenges and Solutions Through the Lens of Memory Utilization
- URL: http://arxiv.org/abs/2505.16348v2
- Date: Thu, 23 Oct 2025 13:57:38 GMT
- Title: Embodied Agents Meet Personalization: Investigating Challenges and Solutions Through the Lens of Memory Utilization
- Authors: Taeyoon Kwon, Dongwook Choi, Hyojun Kim, Sunghwan Kim, Seungjun Moon, Beong-woo Kwak, Kuan-Hao Huang, Jinyoung Yeo
- Abstract summary: LLM-powered embodied agents have shown success on conventional object-rearrangement tasks. However, providing personalized assistance that leverages user-specific knowledge from past interactions presents new challenges. We investigate these challenges through the lens of agents' memory utilization.
- Score: 26.34637576545121
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: LLM-powered embodied agents have shown success on conventional object-rearrangement tasks, but providing personalized assistance that leverages user-specific knowledge from past interactions presents new challenges. We investigate these challenges through the lens of agents' memory utilization along two critical dimensions: object semantics (identifying objects based on personal meaning) and user patterns (recalling sequences from behavioral routines). To assess these capabilities, we construct MEMENTO, an end-to-end two-stage evaluation framework comprising single-memory and joint-memory tasks. Our experiments reveal that current agents can recall simple object semantics but struggle to apply sequential user patterns to planning. Through in-depth analysis, we identify two critical bottlenecks: information overload and coordination failures when handling multiple memories. Based on these findings, we explore memory architectural approaches to address these challenges. Given our observation that episodic memory provides both personalized knowledge and in-context learning benefits, we design a hierarchical knowledge graph-based user-profile memory module that separately manages personalized knowledge, achieving substantial improvements on both single and joint-memory tasks. Project website: https://connoriginal.github.io/MEMENTO
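The abstract's hierarchical user-profile memory, which separately manages personalized knowledge, can be pictured roughly as follows. This is a minimal illustrative sketch, not the authors' implementation; all class and method names here are hypothetical, and the two stores only loosely mirror the paper's split between object semantics and user patterns.

```python
from collections import defaultdict

class UserProfileMemory:
    """Toy user-profile memory with two separate stores: one for object
    semantics (personal names for objects) and one for user patterns
    (ordered behavioral routines). Hypothetical API, not the paper's."""

    def __init__(self):
        # object semantics: personal reference -> list of (relation, object) edges
        self.semantics = defaultdict(list)
        # user patterns: routine name -> ordered action sequence
        self.patterns = {}

    def add_semantic(self, name, relation, obj):
        self.semantics[name].append((relation, obj))

    def add_pattern(self, routine, actions):
        self.patterns[routine] = list(actions)

    def resolve(self, name):
        """Map a personal reference (e.g. 'my favorite mug') to object IDs."""
        return [obj for _, obj in self.semantics.get(name, [])]

    def plan(self, routine):
        """Recall the stored action sequence for a routine (empty if unknown)."""
        return self.patterns.get(routine, [])

mem = UserProfileMemory()
mem.add_semantic("my favorite mug", "is", "blue_mug_2")
mem.add_pattern("morning routine",
                ["fetch blue_mug_2", "brew coffee", "bring to desk"])
print(mem.resolve("my favorite mug"))  # → ['blue_mug_2']
print(mem.plan("morning routine"))
```

Keeping the two stores separate reflects the paper's finding that single-memory recall (object semantics) and sequential pattern application fail in different ways, so each can be queried without overloading the planner with the other.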
Related papers
- Scene-Aware Memory Discrimination: Deciding Which Personal Knowledge Stays [14.981027641902221]
We introduce a Scene-Aware Memory Discrimination method (SAMD) to address large-scale interactions and diverse memory standards. We show that SAMD successfully recalls the majority of memorable data and remains robust in dynamic scenarios. When integrated into personalized applications, SAMD significantly enhances both the efficiency and quality of memory construction, leading to better organization of personal knowledge.
arXiv Detail & Related papers (2026-02-12T05:53:54Z) - Evaluating Memory Structure in LLM Agents [39.55225799412317]
StructMemEval is a benchmark that tests an agent's ability to organize its long-term memory, not just factual recall. We gather a suite of tasks that humans solve by organizing their knowledge in a specific structure. Our experiments show that simple retrieval-augmented LLMs struggle with these tasks, whereas memory agents can solve them reliably when prompted on how to organize their memory.
arXiv Detail & Related papers (2026-02-11T17:32:23Z) - Graph-based Agent Memory: Taxonomy, Techniques, and Applications [63.70340159016138]
Memory emerges as the core module in Large Language Model (LLM)-based agents for long-horizon complex tasks. Among diverse paradigms, the graph stands out as a powerful structure for agent memory due to its intrinsic capability to model relational dependencies. This survey presents a comprehensive review of agent memory from the graph-based perspective.
arXiv Detail & Related papers (2026-02-05T13:49:05Z) - OP-Bench: Benchmarking Over-Personalization for Memory-Augmented Personalized Conversational Agents [55.27061195244624]
We formalize over-personalization into three types: Irrelevance, Repetition, and Sycophancy. Agents tend to retrieve and over-attend to user memories even when unnecessary. Our work takes an initial step toward more controllable and appropriate personalization in memory-augmented dialogue systems.
arXiv Detail & Related papers (2026-01-20T08:27:13Z) - Rethinking Memory Mechanisms of Foundation Agents in the Second Half: A Survey [211.01908189012184]
Memory, with hundreds of papers released this year, emerges as the critical solution to fill the utility gap. We provide a unified view of foundation agent memory along three dimensions. We then analyze how memory is instantiated and operated under different agent topologies.
arXiv Detail & Related papers (2026-01-14T07:38:38Z) - The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z) - Evaluating Long-Term Memory for Long-Context Question Answering [100.1267054069757]
We present a systematic evaluation of memory-augmented methods using LoCoMo, a benchmark of synthetic long-context dialogues annotated for question-answering tasks. Our findings show that memory-augmented approaches reduce token usage by over 90% while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-10-27T18:03:50Z) - Explicit v.s. Implicit Memory: Exploring Multi-hop Complex Reasoning Over Personalized Information [13.292751023556221]
In large language model-based agents, memory serves as a critical capability for achieving personalization by storing and utilizing users' information. We propose the multi-hop personalized reasoning task to explore how different memory mechanisms perform in multi-hop reasoning over personalized information.
arXiv Detail & Related papers (2025-08-18T13:34:37Z) - PRIME: Large Language Model Personalization with Cognitive Memory and Thought Processes [6.631626634132574]
Large language model (LLM) personalization aims to align model outputs with individuals' unique preferences and opinions. We introduce a unified framework, PRIME, using episodic and semantic memory mechanisms. Experiments validate PRIME's effectiveness across both long- and short-context scenarios.
arXiv Detail & Related papers (2025-07-07T01:54:34Z) - FindingDory: A Benchmark to Evaluate Memory in Embodied Agents [49.89792845476579]
We introduce a new benchmark for long-range embodied tasks in the Habitat simulator. This benchmark evaluates memory-based capabilities across 60 tasks requiring sustained engagement and contextual awareness.
arXiv Detail & Related papers (2025-06-18T17:06:28Z) - PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time [87.99027488664282]
PersonaAgent is a framework designed to address versatile personalization tasks. It integrates a personalized memory module and a personalized action module. A test-time alignment strategy keeps the agent's behavior consistent with current user preferences.
arXiv Detail & Related papers (2025-06-06T17:29:49Z) - How Memory Management Impacts LLM Agents: An Empirical Study of Experience-Following Behavior [49.62361184944454]
Memory is a critical component in large language model (LLM)-based agents. We study how memory management choices impact LLM agents' behavior, especially their long-term performance.
arXiv Detail & Related papers (2025-05-21T22:35:01Z) - Are We Solving a Well-Defined Problem? A Task-Centric Perspective on Recommendation Tasks [46.705107776194616]
We analyze RecSys task formulations, emphasizing key components such as input-output structures, temporal dynamics, and candidate item selection. We explore the balance between task specificity and model generalizability, highlighting how well-defined task formulations serve as the foundation for robust evaluation and effective solution development.
arXiv Detail & Related papers (2025-03-27T06:10:22Z) - On the Structural Memory of LLM Agents [20.529239764968654]
Memory plays a pivotal role in enabling large language model (LLM)-based agents to engage in complex and long-term interactions. This paper investigates how memory structures and memory retrieval methods affect the performance of LLM-based agents.
arXiv Detail & Related papers (2024-12-17T04:30:00Z) - Identifying User Goals from UI Trajectories [19.492331502146886]
We propose a new task: goal identification from observed UI trajectories. We also introduce a novel evaluation methodology designed to assess whether two intent descriptions can be considered paraphrases. To benchmark this task, we compare the performance of humans and state-of-the-art models, specifically GPT-4 and Gemini-1.5 Pro.
arXiv Detail & Related papers (2024-06-20T13:46:10Z) - PerLTQA: A Personal Long-Term Memory Dataset for Memory Classification, Retrieval, and Synthesis in Question Answering [27.815507347725344]
This research introduces PerLTQA, an innovative QA dataset that combines semantic and episodic memories.
PerLTQA features two types of memory and a benchmark of 8,593 questions for 30 characters.
We propose a novel framework for memory integration and generation, consisting of three main components: Memory Classification, Memory Retrieval, and Memory Synthesis.
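The three components named above (Memory Classification, Memory Retrieval, Memory Synthesis) can be sketched as a small pipeline. This is only an illustrative stand-in under assumed data shapes, not the PerLTQA framework itself; the function names, the keyword-overlap retriever, and the concatenation-based synthesis step are all hypothetical simplifications.

```python
def classify_memories(memories):
    """Split memory records into semantic vs. episodic buckets
    (hypothetical heuristic based on a 'kind' field)."""
    return {
        "semantic": [m for m in memories if m["kind"] == "fact"],
        "episodic": [m for m in memories if m["kind"] == "event"],
    }

def retrieve(memories, question, top_k=2):
    """Naive keyword-overlap scoring as a stand-in for a real retriever."""
    q_words = set(question.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(q_words & set(m["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def synthesize(retrieved):
    """Combine retrieved memories into an answer context (placeholder
    for an LLM-based generation step)."""
    return " ".join(m["text"] for m in retrieved)

memories = [
    {"kind": "fact", "text": "Alice likes green tea"},
    {"kind": "event", "text": "Alice visited Kyoto last spring"},
]
buckets = classify_memories(memories)
hits = retrieve(memories, "What tea does Alice like?", top_k=1)
context = synthesize(hits)
print(context)  # → Alice likes green tea
```

The staged design matters because classification constrains which store a retriever searches, and synthesis decides how the recalled records are merged before answer generation.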
arXiv Detail & Related papers (2024-02-26T04:09:53Z) - Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts. In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency, maintaining accuracy with a significantly reduced retrieval size. Our experiments also indicate a marked improvement of over 10% in cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z) - Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z) - Personalized Large Language Model Assistant with Evolving Conditional Memory [15.780762727225122]
We present a plug-and-play framework that could facilitate personalized large language model assistants with evolving conditional memory.
The personalized assistant focuses on intelligently preserving the knowledge and experience from the history dialogue with the user.
arXiv Detail & Related papers (2023-12-22T02:39:15Z) - Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge [62.981429762309226]
The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually.
We propose to improve phrase grounding models' ability to localize active objects by learning the role of objects undergoing change and extracting them accurately from the instructions.
We evaluate our framework on Ego4D and Epic-Kitchens datasets.
arXiv Detail & Related papers (2023-10-23T16:14:05Z) - Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation [20.266695694005943]
Policies often fail due to distribution shift -- changes in the state and reward that occur when a policy is deployed in new environments.
Data augmentation can increase robustness by making the model invariant to task-irrelevant changes in the agent's observation.
We propose an interactive framework to leverage feedback directly from the user to identify personalized task-irrelevant concepts.
arXiv Detail & Related papers (2023-07-12T17:55:08Z) - Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information-theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z) - PeTra: A Sparsely Supervised Memory Model for People Tracking [50.98911178059019]
We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots.
We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance.
PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation.
arXiv Detail & Related papers (2020-05-06T17:45:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.