Personalized Large Language Model Assistant with Evolving Conditional Memory
- URL: http://arxiv.org/abs/2312.17257v2
- Date: Sun, 13 Oct 2024 03:34:41 GMT
- Title: Personalized Large Language Model Assistant with Evolving Conditional Memory
- Authors: Ruifeng Yuan, Shichao Sun, Yongqi Li, Zili Wang, Ziqiang Cao, Wenjie Li
- Abstract summary: We present a plug-and-play framework that facilitates personalized large language model assistants with evolving conditional memory.
The personalized assistant focuses on intelligently preserving knowledge and experience from the user's dialogue history.
- Score: 15.780762727225122
- Abstract: With the rapid development of large language models, AI assistants like ChatGPT have become increasingly integrated into people's work and lives, yet they remain limited in personalized services. In this paper, we present a plug-and-play framework that facilitates personalized large language model assistants with evolving conditional memory. The personalized assistant focuses on intelligently preserving knowledge and experience from the user's dialogue history, which can be applied to future tailored responses that better align with the user's preferences. In general, the assistant generates a set of records from the dialogue, stores them in a memory bank, and retrieves related memory to improve the quality of the response. For the crucial memory design, we explore different ways of constructing the memory and propose a new memorizing mechanism named conditional memory. We also investigate the retrieval and usage of memory in the generation process. We build the first benchmark to evaluate a personalized assistant's ability from three aspects. The experimental results illustrate the effectiveness of our method.
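The abstract outlines a generate-store-retrieve loop: distill records from the dialogue, keep them in a memory bank, and retrieve related records to condition the next response. Below is a minimal Python sketch of how such a pipeline might be wired together; every name here (MemoryBank, extract_records, respond) and the token-overlap retriever are illustrative assumptions, not the paper's actual implementation, and the conditional-memory design is reduced to keying each record by a retrieval condition.

```python
from dataclasses import dataclass, field

def overlap(a: str, b: str) -> float:
    """Crude token-overlap relevance score; a stand-in for embedding similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

@dataclass
class MemoryBank:
    records: list = field(default_factory=list)  # (condition, content) pairs

    def store(self, condition: str, content: str) -> None:
        # "Conditional memory" in this reduction: each record is keyed by the
        # situation ("condition") under which it should be recalled.
        self.records.append((condition, content))

    def retrieve(self, query: str, top_k: int = 3) -> list:
        scored = sorted(self.records, key=lambda r: overlap(query, r[0]), reverse=True)
        return [content for _, content in scored[:top_k]]

def extract_records(llm, turns):
    # Stub record generator: ask the LLM to distill the session into
    # (condition, content) records; here, a single placeholder record.
    summary = llm("Summarize this user's preferences:\n" + "\n".join(turns))
    return [("general user preferences", summary)]

def respond(llm, bank, dialogue, user_msg):
    memories = bank.retrieve(user_msg)
    prompt = ("Relevant memories:\n" + "\n".join(memories) +
              "\n\nDialogue:\n" + "\n".join(dialogue) +
              "\nUser: " + user_msg + "\nAssistant:")
    reply = llm(prompt)
    # After responding, distill new records from the session and store them,
    # so the memory bank evolves across sessions.
    for condition, content in extract_records(llm, dialogue + [user_msg, reply]):
        bank.store(condition, content)
    return reply
```

The paper's actual mechanism for constructing and matching conditions is not described in the abstract; this sketch only illustrates the overall loop.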
Related papers
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory [68.97819665784442]
This paper introduces LongMemEval, a benchmark designed to evaluate five core long-term memory abilities of chat assistants.
LongMemEval presents a significant challenge to existing long-term memory systems.
We present a unified framework that breaks down the long-term memory design into four design choices.
arXiv Detail & Related papers (2024-10-14T17:59:44Z)
- Introducing MeMo: A Multimodal Dataset for Memory Modelling in Multiparty Conversations [1.8896253910986929]
The MeMo corpus is the first dataset annotated with participants' memory retention reports.
It integrates validated behavioural and perceptual measures, audio, video, and multimodal annotations.
This paper aims to pave the way for future research in conversational memory modelling for intelligent system development.
arXiv Detail & Related papers (2024-09-07T16:09:36Z)
- Ever-Evolving Memory by Blending and Refining the Past [30.63352929849842]
CREEM is a novel memory system for long-term conversation.
It seamlessly connects past and present information, while also possessing the ability to forget obstructive information.
arXiv Detail & Related papers (2024-03-03T08:12:59Z)
- Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations [39.05338079159942]
This study introduces a novel framework, COmpressive Memory-Enhanced Dialogue sYstems (COMEDY), which eschews traditional retrieval modules and memory databases.
Central to COMEDY is the concept of compressive memory, which integrates session-specific summaries, user-bot dynamics, and past events into a concise memory format.
arXiv Detail & Related papers (2024-02-19T09:19:50Z)
- Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models [75.98775135321355]
Given a long conversation, large language models (LLMs) fail to recall past information and tend to generate inconsistent responses.
We propose to have large language models (LLMs) recursively generate summaries as memory, enhancing their long-term memory ability.
arXiv Detail & Related papers (2023-08-29T04:59:53Z)
- Encode-Store-Retrieve: Augmenting Human Memory through Language-Encoded Egocentric Perception [19.627636189321393]
A promising avenue for memory augmentation is through the use of augmented reality head-mounted displays to capture and preserve egocentric videos.
The current technology lacks the capability to encode and store such large amounts of data efficiently.
We propose a memory augmentation agent that leverages natural language encoding of video data and stores the encodings in a vector database.
arXiv Detail & Related papers (2023-08-10T18:43:44Z)
- RET-LLM: Towards a General Read-Write Memory for Large Language Models [53.288356721954514]
RET-LLM is a novel framework that equips large language models with a general write-read memory unit.
Inspired by Davidsonian semantics theory, we extract and save knowledge in the form of triplets.
Our framework exhibits robust performance in handling temporal-based question answering tasks. (A toy triplet-store sketch in this spirit appears after this list.)
arXiv Detail & Related papers (2023-05-23T17:53:38Z)
- MemoryBank: Enhancing Large Language Models with Long-Term Memory [7.654404043517219]
We propose MemoryBank, a novel memory mechanism tailored for Large Language Models.
MemoryBank enables models to summon relevant memories, continually evolve through memory updates, and comprehend and adapt to a user's personality by synthesizing information from past interactions.
arXiv Detail & Related papers (2023-05-17T14:40:29Z)
- Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory [72.36736686941671]
We propose a novel framework, selfmem, for improving retrieval-augmented generation models.
Selfmem iteratively employs a retrieval-augmented generator to create an unbounded memory pool and uses a memory selector to choose one output as memory for the subsequent generation round. (A schematic of this loop appears after this list.)
We evaluate the effectiveness of selfmem on three distinct text generation tasks.
arXiv Detail & Related papers (2023-05-03T21:40:54Z)
- There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory [67.24942840683904]
We introduce personal memory into knowledge selection for knowledge-grounded conversation.
We devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping are included in a closed loop.
Experimental results show that our method significantly outperforms existing KGC methods on both automatic and human evaluation.
arXiv Detail & Related papers (2022-04-06T07:06:37Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from that of the relationships among them (relational memory).
Our proposed two-memory model achieves competitive results across a diverse range of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
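As noted in the RET-LLM entry above, that framework extracts and saves knowledge as triplets. Below is a minimal write-read triplet store in that spirit; the names (TripletMemory, write, read) are hypothetical illustrations, not the paper's API.

```python
from collections import defaultdict
from typing import List, Optional

class TripletMemory:
    """Toy write-read store for <subject, relation, object> triplets."""

    def __init__(self) -> None:
        self._by_subject = defaultdict(list)  # subject -> [(relation, object)]

    def write(self, subject: str, relation: str, obj: str) -> None:
        # Memory write: save one extracted fact as a triplet.
        self._by_subject[subject.lower()].append((relation, obj))

    def read(self, subject: str, relation: Optional[str] = None) -> List[str]:
        # Memory read: return all objects for a subject, optionally
        # filtered to a single relation.
        facts = self._by_subject.get(subject.lower(), [])
        return [o for r, o in facts if relation is None or r == relation]

# Usage: the assistant writes facts as it reads the dialogue, then reads
# them back when a later question mentions a known subject.
mem = TripletMemory()
mem.write("Alice", "works_at", "Acme")
mem.write("Alice", "lives_in", "Berlin")
assert mem.read("Alice", "works_at") == ["Acme"]
```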
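Likewise, as referenced in the selfmem entry, the framework alternates generation and memory selection. A schematic of that loop, with generate and score as placeholder callables rather than the paper's actual models:

```python
# Schematic of a selfmem-style loop: candidates from a retrieval-augmented
# generator are scored, and the best one is fed back as the "memory" for
# the next round.
def selfmem_loop(generate, score, source, rounds=3):
    memory = ""   # no self-generated memory on the first round
    best = ""
    for _ in range(rounds):
        candidates = generate(source, memory)                   # generation step
        best = max(candidates, key=lambda c: score(source, c))  # memory selector
        memory = best                                           # feed the winner back
    return best
```

The notable design choice is that the memory here is model output rather than retrieved corpus text, which is why the candidate pool is described as unbounded.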
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.