RAM: Towards an Ever-Improving Memory System by Learning from Communications
- URL: http://arxiv.org/abs/2404.12045v2
- Date: Fri, 5 Jul 2024 04:57:54 GMT
- Title: RAM: Towards an Ever-Improving Memory System by Learning from Communications
- Authors: Jiaqi Li, Xiaobo Wang, Wentao Ding, Zihao Wang, Yipeng Kang, Zixia Jia, Zilong Zheng
- Abstract summary: We introduce an innovative RAG-based framework with an ever-improving memory, called RAM.
Experiments with both simulated and real users demonstrate significant improvements over traditional RAG and self-knowledge methods.
RAM exhibits promising adaptability to various feedback and retrieval methods, showcasing its potential for advancing AI capabilities in dynamic knowledge acquisition and lifelong learning.
- Score: 32.904507659027516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an innovative RAG-based framework with an ever-improving memory. Inspired by humans' pedagogical process, RAM utilizes recursively reasoning-based retrieval and experience reflections to continually update the memory and learn from users' communicative feedback, namely communicative learning. Extensive experiments with both simulated and real users demonstrate significant improvements over traditional RAG and self-knowledge methods, particularly excelling in handling false premise and multi-hop questions. Furthermore, RAM exhibits promising adaptability to various feedback and retrieval methods, showcasing its potential for advancing AI capabilities in dynamic knowledge acquisition and lifelong learning.
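The abstract describes a loop of retrieval, answering, and feedback-driven memory updates ("communicative learning"). A minimal sketch of such a loop is shown below; all class and function names, and the keyword-overlap retriever, are illustrative assumptions and not the paper's actual implementation:

```python
# Minimal sketch of a RAM-style communicative-learning loop.
# All names here are illustrative assumptions, not the paper's code.

class MemoryStore:
    """Toy memory: a list of text passages searched by keyword overlap."""

    def __init__(self, passages):
        self.passages = list(passages)

    def retrieve(self, query, k=2):
        # Rank passages by word overlap with the query (a stand-in for
        # the paper's recursive reasoning-based retrieval).
        q = set(query.lower().split())
        scored = sorted(
            self.passages,
            key=lambda p: len(q & set(p.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def reflect_and_update(self, feedback):
        # "Experience reflection": store the user's corrective feedback
        # as a new memory entry so later queries can retrieve it.
        if feedback:
            self.passages.append(feedback)


def answer(query, memory):
    context = memory.retrieve(query)
    # A real system would condition an LLM on the retrieved context;
    # this sketch just echoes it.
    return f"Answer based on: {context}"


memory = MemoryStore(["Paris is the capital of France."])
print(answer("capital of France", memory))
# Simulated user feedback extends the memory for future queries.
memory.reflect_and_update("The Eiffel Tower is in Paris.")
print(answer("Where is the Eiffel Tower", memory))
```

The key point the sketch illustrates is that the memory grows from interaction: feedback is written back into the same store the retriever reads from, so the system's answers can improve across turns without retraining.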
Related papers
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in partially observable, long-horizon reinforcement learning environments.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z)
- Composite Learning Units: Generalized Learning Beyond Parameter Updates to Transform LLMs into Adaptive Reasoners [0.0]
We introduce Composite Learning Units (CLUs) designed to transform reasoners into learners capable of continuous learning.
CLUs are built on an architecture that allows a reasoning model to maintain and evolve a dynamic knowledge repository.
We demonstrate CLUs' effectiveness through a cryptographic reasoning task, where they continuously evolve their understanding through feedback to uncover hidden transformation rules.
arXiv Detail & Related papers (2024-10-09T02:27:58Z)
- Auto-selected Knowledge Adapters for Lifelong Person Re-identification [54.42307214981537]
Lifelong Person Re-Identification requires systems to continually learn from non-overlapping datasets across different times and locations.
Existing approaches, either rehearsal-free or rehearsal-based, still suffer from the problem of catastrophic forgetting.
We introduce AdalReID, a novel framework that adopts knowledge adapters and a parameter-free auto-selection mechanism for lifelong learning.
arXiv Detail & Related papers (2024-05-29T11:42:02Z)
- In-Memory Learning: A Declarative Learning Framework for Large Language Models [56.62616975119192]
We propose a novel learning framework that allows agents to align with their environment without relying on human-labeled data.
This entire process transpires within the memory components and is implemented through natural language.
We demonstrate the effectiveness of our framework and provide insights into this problem.
arXiv Detail & Related papers (2024-03-05T08:25:11Z)
- Saliency-Guided Hidden Associative Replay for Continual Learning [13.551181595881326]
Continual Learning is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning.
This paper presents Saliency-Guided Hidden Associative Replay for Continual Learning (SHARC), a framework that combines associative memory with replay-based strategies. SHARC archives salient data segments via sparse memory encoding.
arXiv Detail & Related papers (2023-10-06T15:54:12Z)
- RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models [3.9770715318303353]
RecallM is a novel architecture for providing Large Language Models with an adaptable and updatable long-term memory mechanism.
We show that RecallM is four times more effective than using a vector database for updating knowledge previously stored in long-term memory.
We also demonstrate that RecallM shows competitive performance on general question-answering and in-context learning tasks.
arXiv Detail & Related papers (2023-07-06T02:51:54Z)
- Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning [113.58691755215663]
We develop RetroPrompt to help a model strike a balance between generalization and memorization.
In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances.
Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings.
arXiv Detail & Related papers (2022-05-29T16:07:30Z)
- Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System [13.041607703862724]
We propose CLS-ER, a novel dual memory experience replay (ER) method.
New knowledge is acquired while aligning the decision boundaries with the semantic memories.
Our approach achieves state-of-the-art performance on standard benchmarks.
arXiv Detail & Related papers (2022-01-29T15:15:23Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
arXiv Detail & Related papers (2020-10-20T15:05:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.