Evolving Large Language Model Assistant with Long-Term Conditional Memory
- URL: http://arxiv.org/abs/2312.17257v1
- Date: Fri, 22 Dec 2023 02:39:15 GMT
- Title: Evolving Large Language Model Assistant with Long-Term Conditional Memory
- Authors: Ruifeng Yuan, Shichao Sun, Zili Wang, Ziqiang Cao, Wenjie Li
- Abstract summary: We present an evolving large language model assistant that utilizes verbal long-term memory.
The model generates a set of records for each finished dialogue and stores them in memory.
In later use, given a new user input, the model retrieves related memories to improve the quality of its response.
- Score: 16.91211676915775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of large language models, AI assistants like
ChatGPT have entered people's work and lives. In this paper, we present
an evolving large language model assistant that utilizes verbal long-term
memory. It focuses on preserving the knowledge and experience from past
dialogues between the user and the AI assistant, which can be applied to future
dialogues to generate better responses. The model generates a set of records
for each finished dialogue and stores them in memory. In later use, given
a new user input, the model retrieves related memories to improve
the quality of its response. To find the best form of memory, we explore
different ways of constructing the memory and propose a new memorizing
mechanism called conditional memory that solves the problems of previous methods.
We also investigate the retrieval and usage of memory in the generation
process. The assistant uses GPT-4 as its backbone, and we evaluate it on three
constructed test datasets, each focusing on a different ability required of an AI
assistant with long-term memory.
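The store-and-retrieve loop the abstract describes can be pictured with a short sketch. Everything below is illustrative: `embed` is a toy stand-in for a real embedding model, and the record strings are invented examples rather than the paper's actual conditional-memory format.

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical embedding stub: a real system would call an
    # embedding model here. This toy version hashes characters into
    # a small fixed-size vector so the sketch runs as-is.
    vec = [0.0] * 16
    for i, ch in enumerate(text.lower()):
        vec[i % 16] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class MemoryStore:
    """Stores per-dialogue records and retrieves them by similarity."""
    def __init__(self):
        self.records = []  # (text, embedding) pairs

    def add_dialogue(self, records: list[str]) -> None:
        # After a dialogue finishes, a set of records is generated
        # (e.g. summaries or conditional-memory entries) and stored.
        for r in records:
            self.records.append((r, embed(r)))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.records, key=lambda r: cosine(q, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = MemoryStore()
memory.add_dialogue(["User prefers concise answers.",
                     "User is preparing a slide deck on LLM memory."])
context = memory.retrieve("Help me outline my slides")
prompt = "Relevant memory:\n" + "\n".join(context) + "\nUser: Help me outline my slides"
print(prompt)  # this prompt would be sent to the GPT-4 backbone
```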
Related papers
- A Grounded Memory System For Smart Personal Assistants [1.5267291767316298]
A wide variety of agentic AI applications - ranging from cognitive assistants for dementia patients to robotics - demand a robust memory system grounded in reality.
We propose such a memory system consisting of three components.
First, we combine Vision Language Models for image captioning and entity disambiguation with Large Language Models for consistent information extraction during perception.
Second, the extracted information is represented in a memory consisting of a knowledge graph enhanced by vector embeddings to efficiently manage relational information.
Third, we combine semantic search and graph query generation for question answering via Retrieval Augmented Generation.
arXiv Detail & Related papers (2025-05-09T10:08:22Z)
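A minimal sketch of the three-component design described above, under heavy simplification: a list of triples stands in for the knowledge graph, and string similarity stands in for vector-embedding search. All entities and the one-hop traversal are invented for illustration.

```python
from difflib import SequenceMatcher

# Toy knowledge graph: triples extracted during "perception".
graph = [
    ("Alice", "owns", "red bicycle"),
    ("Alice", "lives_in", "Berlin"),
    ("red bicycle", "stored_in", "garage"),
]

def semantic_search(question, triples, k=2):
    # Stand-in for embedding-based search: plain string similarity
    # over flattened triples.
    def score(t):
        return SequenceMatcher(None, question.lower(), " ".join(t).lower()).ratio()
    return sorted(triples, key=score, reverse=True)[:k]

def graph_query(entity, triples):
    # One-hop traversal from an entity, as a generated graph query might do.
    return [t for t in triples if entity in (t[0], t[2])]

question = "Where does Alice keep her bicycle?"
candidates = semantic_search(question, graph)
hops = [h for _, _, obj in candidates for h in graph_query(obj, graph)]
print(candidates + hops)  # retrieved facts feed an LLM for RAG-style answering
```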
- From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs [34.361000444808454]
Memory is the process of encoding, storing, and retrieving information.
In the era of large language models (LLMs), memory refers to the ability of an AI system to retain, recall, and use information from past interactions to improve future responses and interactions.
arXiv Detail & Related papers (2025-04-22T15:05:04Z)
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory [68.97819665784442]
This paper introduces LongMemEval, a benchmark designed to evaluate five core long-term memory abilities of chat assistants.
LongMemEval presents a significant challenge to existing long-term memory systems.
We present a unified framework that breaks down the long-term memory design into four design choices.
arXiv Detail & Related papers (2024-10-14T17:59:44Z)
- Introducing MeMo: A Multimodal Dataset for Memory Modelling in Multiparty Conversations [1.8896253910986929]
The MeMo corpus is the first dataset annotated with participants' memory retention reports.
It integrates validated behavioural and perceptual measures, audio, video, and multimodal annotations.
This paper aims to pave the way for future research in conversational memory modelling for intelligent system development.
arXiv Detail & Related papers (2024-09-07T16:09:36Z)
- MemoCRS: Memory-enhanced Sequential Conversational Recommender Systems with Large Language Models [51.65439315425421]
We propose a Memory-enhanced Conversational Recommender System Framework with Large Language Models (dubbed MemoCRS).
User-specific memory is tailored to each user's personalized interests.
General memory, encapsulating collaborative knowledge and reasoning guidelines, provides knowledge shared across users.
arXiv Detail & Related papers (2024-07-06T04:57:25Z)
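The dual-memory split described above might look like the following sketch. The memory contents and the `build_context` helper are hypothetical; a real system would rank both memories by relevance before prompting the LLM.

```python
# User-specific memory holds personal interests; general memory holds
# collaborative knowledge and reasoning guidelines shared by all users.
general_memory = [
    "Users who like film scores often enjoy classical playlists.",
    "Prefer recent releases when the user asks for something new.",
]
user_memory = {
    "u42": ["Likes jazz and film scores.", "Dislikes heavy metal."],
}

def build_context(user_id: str, utterance: str) -> str:
    # Combine per-user interests with shared guidelines; a real system
    # would rank both memory types by relevance to the utterance.
    personal = user_memory.get(user_id, [])
    return "\n".join(
        ["[user memory]"] + personal +
        ["[general memory]"] + general_memory +
        ["[request] " + utterance]
    )

print(build_context("u42", "Recommend me something new to listen to."))
```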
- Ever-Evolving Memory by Blending and Refining the Past [30.63352929849842]
CREEM is a novel memory system for long-term conversation.
It seamlessly blends past and present information and can also forget obstructive information.
arXiv Detail & Related papers (2024-03-03T08:12:59Z)
- Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations [39.05338079159942]
This study introduces a novel framework, COmpressive Memory-Enhanced Dialogue sYstems (COMEDY), which eschews traditional retrieval modules and memory databases.
Central to COMEDY is the concept of compressive memory, which integrates session-specific summaries, user-bot dynamics, and past events into a concise memory format.
arXiv Detail & Related papers (2024-02-19T09:19:50Z)
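A hedged sketch of the compressive idea: rather than retrieving from a database, each finished session is folded into a single concise memory string. The `llm` function is a placeholder stub, not COMEDY's actual model or prompt.

```python
def llm(prompt: str) -> str:
    # Placeholder so the sketch runs; a real call would hit an LLM API
    # that actually merges and condenses the text.
    return prompt.splitlines()[-1][:200]

def compress(memory: str, session_transcript: str) -> str:
    # Fold a finished session into the single running memory string.
    prompt = (
        "Merge the session below into the existing memory. Keep session "
        "summaries, user-bot dynamics, and notable events; stay concise.\n"
        f"Memory: {memory}\nSession: {session_transcript}"
    )
    return llm(prompt)

memory = ""
for session in ["User planned a trip to Kyoto.", "User booked the Kyoto trip."]:
    memory = compress(memory, session)
print(memory)  # the one compressed memory is prepended to new dialogues
```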
- Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models [75.98775135321355]
Given a long conversation, large language models (LLMs) fail to recall past information and tend to generate inconsistent responses.
We propose to generate summaries as memory using large language models (LLMs) to enhance their long-term memory ability.
arXiv Detail & Related papers (2023-08-29T04:59:53Z)
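The recursive scheme can be sketched as repeatedly folding the newest turns into a running summary. Again, `llm` is a placeholder stub; the real method prompts an LLM to update the summary.

```python
def llm(prompt: str) -> str:
    return prompt[-200:]  # placeholder; a real call would summarize

def update_summary(summary: str, new_turns: list[str]) -> str:
    # The previous summary and the newest turns are folded together,
    # so memory stays bounded no matter how long the dialogue grows.
    prompt = ("Previous summary: " + summary + "\nNew turns:\n" +
              "\n".join(new_turns) + "\nWrite an updated summary.")
    return llm(prompt)

summary = ""
conversation_chunks = [
    ["User: I adopted a cat named Miso.", "Bot: Congratulations!"],
    ["User: Miso only eats salmon.", "Bot: Noted."],
]
for chunk in conversation_chunks:
    summary = update_summary(summary, chunk)
# The latest summary is prepended to the prompt so the model can
# answer consistently, e.g. "What does my cat eat?" -> salmon.
print(summary)
```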
- Encode-Store-Retrieve: Augmenting Human Memory through Language-Encoded Egocentric Perception [19.627636189321393]
A promising avenue for memory augmentation is to use augmented reality head-mounted displays to capture and preserve egocentric videos.
The current technology lacks the capability to encode and store such large amounts of data efficiently.
We propose a memory augmentation agent that leverages natural language encoding of video data and stores the encodings in a vector database.
arXiv Detail & Related papers (2023-08-10T18:43:44Z)
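A speculative sketch of the pipeline: frames are reduced to natural-language captions, which are far cheaper to store than raw video, and indexed for recall. The captioner and the word-overlap "embedding" are crude stand-ins for the vision-language model and vector database the paper describes.

```python
def caption(frame_id: int) -> str:
    # Stand-in for a vision-language captioning model.
    return {0: "placing keys on the kitchen counter",
            1: "pouring coffee into a blue mug"}[frame_id]

# "Vector database" stand-in: captions paired with their word sets.
episodic_index: list[tuple[str, set[str]]] = []
for frame_id in (0, 1):
    text = caption(frame_id)
    episodic_index.append((text, set(text.split())))

def recall(query: str) -> str:
    # Retrieve the caption with the largest word overlap; a real
    # system would use embedding similarity instead.
    q = set(query.lower().split())
    return max(episodic_index, key=lambda e: len(q & e[1]))[0]

print(recall("where are my keys"))  # -> the keys/counter caption
```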
- RET-LLM: Towards a General Read-Write Memory for Large Language Models [53.288356721954514]
RET-LLM is a novel framework that equips large language models with a general write-read memory unit.
Inspired by Davidsonian semantics theory, we extract and save knowledge in the form of triplets.
Our framework exhibits robust performance in handling temporal-based question answering tasks.
arXiv Detail & Related papers (2023-05-23T17:53:38Z)
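The write-read memory unit could be sketched as a small triplet store. The API shape below is an assumption, but it mirrors the abstract's description of saving knowledge as triplets and reading it back, e.g. for temporal questions.

```python
class TripletMemory:
    """Write-read store of (subject, relation, object) triplets."""
    def __init__(self):
        self.triplets: list[tuple[str, str, str]] = []

    def write(self, subj: str, rel: str, obj: str) -> None:
        self.triplets.append((subj, rel, obj))

    def read(self, subj=None, rel=None, obj=None):
        # Partial-match query: any argument left as None is a wildcard.
        return [t for t in self.triplets
                if (subj is None or t[0] == subj)
                and (rel is None or t[1] == rel)
                and (obj is None or t[2] == obj)]

mem = TripletMemory()
mem.write("Alice", "works_at", "Acme")        # extracted from dialogue
mem.write("Alice", "works_at_since", "2021")  # supports temporal questions
print(mem.read(subj="Alice"))  # facts are injected back into the prompt
```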
- MemoryBank: Enhancing Large Language Models with Long-Term Memory [7.654404043517219]
We propose MemoryBank, a novel memory mechanism tailored for Large Language Models.
MemoryBank enables the model to summon relevant memories, continually evolve through memory updates, and comprehend and adapt to a user's personality by synthesizing information from past interactions.
arXiv Detail & Related papers (2023-05-17T14:40:29Z)
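One way to picture a MemoryBank-style store is with entries that strengthen when recalled and fade otherwise. The exponential-decay retention rule below is an illustrative assumption, not the paper's exact update formula.

```python
import math, time

class MemoryEntry:
    def __init__(self, text: str, now: float):
        self.text, self.strength, self.last_recall = text, 1.0, now

    def retention(self, now: float, half_life: float = 86400.0) -> float:
        # Assumed decay rule: retention falls exponentially with the
        # time since the memory was last recalled.
        age = now - self.last_recall
        return self.strength * math.exp(-age / half_life)

    def recall(self, now: float) -> str:
        self.strength += 1.0  # recalled memories consolidate
        self.last_recall = now
        return self.text

bank = [MemoryEntry("User enjoys hiking on weekends.", time.time())]
now = time.time()
alive = [e for e in bank if e.retention(now) > 0.05]  # prune faded entries
print([e.recall(now) for e in alive])
```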
- Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory [72.36736686941671]
We propose a novel framework, selfmem, for improving retrieval-augmented generation models.
Selfmem iteratively employs a retrieval-augmented generator to create an unbounded memory pool and uses a memory selector to choose one output as memory for the subsequent generation round.
We evaluate the effectiveness of selfmem on three distinct text generation tasks.
arXiv Detail & Related papers (2023-05-03T21:40:54Z)
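The selfmem loop can be sketched as a generate-select cycle over a growing pool of the model's own outputs. Both the generator and the selector below are trivial stubs standing in for trained models.

```python
import random

def generate(source: str, memory: str) -> str:
    # Stand-in for the retrieval-augmented generator.
    return f"draft of '{source}' given memory '{memory}'"

def select(pool: list[str]) -> str:
    # Stand-in for the trained memory selector; here we pick at random.
    return random.choice(pool)

source = "translate: Guten Morgen"
memory_pool = [""]  # starts with an empty memory
for _ in range(3):
    memory = select(memory_pool)     # choose one output as memory
    candidate = generate(source, memory)
    memory_pool.append(candidate)    # the pool is unbounded in principle
print(memory_pool[-1])
```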
- There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory [67.24942840683904]
We introduce personal memory into knowledge selection in knowledge-grounded conversation.
We devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping are included in a closed loop.
Experiment results show that our method outperforms existing KGC methods significantly on both automatic evaluation and human evaluation.
arXiv Detail & Related papers (2022-04-06T07:06:37Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from the storage of the relationships between them (relational memory).
Our proposed two-memory model achieves competitive results on a diverse set of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)