Neural Storage: A New Paradigm of Elastic Memory
- URL: http://arxiv.org/abs/2101.02729v1
- Date: Thu, 7 Jan 2021 19:19:25 GMT
- Title: Neural Storage: A New Paradigm of Elastic Memory
- Authors: Prabuddha Chakraborty and Swarup Bhunia
- Abstract summary: Storage and retrieval of data in a computer memory play a major role in system performance.
We introduce Neural Storage (NS), a brain-inspired learning memory paradigm that organizes the memory as a flexible neural memory network.
NS achieves an order of magnitude improvement in memory access performance for two representative applications.
- Score: 4.307341575886927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Storage and retrieval of data in a computer memory play a major role in
system performance. Traditionally, computer memory organization is static -
i.e., it does not change based on the application-specific characteristics of
memory access behaviour during system operation. Specifically, neither the
association of a data block with a search pattern (or cues) nor the granularity
of stored data evolves. Such a static nature of computer memory, we observe,
not only limits the amount of data we can store in a given physical storage,
but also misses the opportunity for dramatic performance improvement in
various applications. In contrast, human memory is characterized by
seemingly infinite plasticity in storing and retrieving data - as well as
dynamically creating/updating the associations between data and corresponding
cues. In this paper, we introduce Neural Storage (NS), a brain-inspired
learning memory paradigm that organizes the memory as a flexible neural memory
network. In NS, the network structure, strength of associations, and
granularity of the data adjust continuously during system operation, providing
unprecedented plasticity and performance benefits. We present the associated
storage/retrieval/retention algorithms in NS, which integrate a formalized
learning process. Using a full-blown operational model, we demonstrate that NS
achieves an order of magnitude improvement in memory access performance for two
representative applications when compared to traditional content-based memory.
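To make the contrast with a static, content-addressed store concrete, below is a minimal Python sketch of an associative memory whose cue-to-block association strengths are reinforced on every retrieval and otherwise decay. It only illustrates the general idea under assumed names and parameters (NeuralStore, strengthen_rate, decay_rate); it does not reproduce the paper's neural memory network, granularity adaptation, or retention algorithm.

```python
from collections import defaultdict

class NeuralStore:
    """Toy cue-to-block associative memory (illustrative only).

    Association strengths between cues and stored blocks are reinforced on
    every access, loosely mimicking the plasticity described in the abstract;
    this is NOT the paper's actual algorithm.
    """

    def __init__(self, strengthen_rate=0.1, decay_rate=0.01):
        self.blocks = {}                                      # block_id -> data
        self.assoc = defaultdict(lambda: defaultdict(float))  # cue -> block_id -> strength
        self.strengthen_rate = strengthen_rate
        self.decay_rate = decay_rate

    def store(self, block_id, data, cues):
        """Store a data block and create initial cue associations."""
        self.blocks[block_id] = data
        for cue in cues:
            self.assoc[cue][block_id] += self.strengthen_rate

    def retrieve(self, cues, top_k=1):
        """Return the block(s) most strongly associated with the cues,
        then adjust the associations touched by this access."""
        scores = defaultdict(float)
        for cue in cues:
            for block_id, strength in self.assoc[cue].items():
                scores[block_id] += strength
        ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
        # Plasticity: strengthen the associations that produced a hit and
        # slightly decay the queried cues' other associations.
        for cue in cues:
            for block_id in list(self.assoc[cue]):
                if block_id in ranked:
                    self.assoc[cue][block_id] += self.strengthen_rate
                else:
                    self.assoc[cue][block_id] *= (1.0 - self.decay_rate)
        return [self.blocks[b] for b in ranked]


# Usage on hypothetical data.
mem = NeuralStore()
mem.store("b1", b"sensor frame 17", cues=["camera", "frame"])
mem.store("b2", b"lidar sweep 03", cues=["lidar", "frame"])
print(mem.retrieve(["frame", "camera"]))   # -> [b'sensor frame 17']
```

In a static content-based memory the cue-to-block mapping is fixed at store time; in this sketch, repeated retrieval gradually biases the store toward frequently co-accessed cue-block pairs, which is the kind of adaptivity the abstract argues for.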
Related papers
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and long-term.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z)
- B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory [91.81390121042192]
We develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within a composable module.
B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens.
arXiv Detail & Related papers (2024-07-08T18:41:01Z)
- Differentiable Neural Computers with Memory Demon [0.0]
We show that information theoretic properties of the memory contents play an important role in the performance of such architectures.
We introduce a novel concept of memory demon to DNC architectures which modifies the memory contents implicitly via additive input encoding.
arXiv Detail & Related papers (2022-11-05T22:24:47Z)
- A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model that learns new classes over time under a limited memory budget.
We show that when counting the model size into the total budget and comparing methods with aligned memory size, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z)
- Recurrent Dynamic Embedding for Video Object Segmentation [54.52527157232795]
We propose a Recurrent Dynamic Embedding (RDE) to build a memory bank of constant size.
We propose an unbiased guidance loss during the training stage, which makes SAM more robust in long videos.
We also design a novel self-correction strategy so that the network can repair the embeddings of masks with different qualities in the memory bank.
arXiv Detail & Related papers (2022-05-08T02:24:43Z)
- Memory-Guided Semantic Learning Network for Temporal Sentence Grounding [55.31041933103645]
We propose a memory-augmented network that learns and memorizes the rarely appeared content in TSG tasks.
MGSL-Net consists of three main parts: a cross-modal interaction module, a memory augmentation module, and a heterogeneous attention module.
arXiv Detail & Related papers (2022-01-03T02:32:06Z)
- Pinpointing the Memory Behaviors of DNN Training [37.78973307051419]
Training of deep neural networks (DNNs) is usually memory-hungry due to the limited device memory capacity of accelerators.
In this work, we pinpoint the memory behaviors of each device memory block of GPU during training by instrumenting the memory allocators of the runtime system.
arXiv Detail & Related papers (2021-04-01T05:30:03Z)
- CNN with large memory layers [2.368995563245609]
This work is centred around the recently proposed product key memory structure, implemented for a number of computer vision applications.
The memory structure can be regarded as a simple computation primitive suitable to be augmented to nearly all neural network architectures.
arXiv Detail & Related papers (2021-01-27T20:58:20Z)
- Memformer: A Memory-Augmented Transformer for Sequence Modeling [55.780849185884996]
We present Memformer, an efficient neural network for sequence modeling.
Our model achieves linear time complexity and constant memory space complexity when processing long sequences.
arXiv Detail & Related papers (2020-10-14T09:03:36Z)
- Robust High-dimensional Memory-augmented Neural Networks [13.82206983716435]
Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues.
Access to this explicit memory occurs via soft read and write operations involving every individual memory entry; a generic sketch of such a soft read appears after this list.
We propose a robust architecture that employs a computational memory unit as the explicit memory performing analog in-memory computation on high-dimensional (HD) vectors.
arXiv Detail & Related papers (2020-10-05T12:01:56Z)
- Distributed Associative Memory Network with Memory Refreshing Loss [5.5792083698526405]
We introduce a novel Distributed Associative Memory architecture (DAM) with Memory Refreshing Loss (MRL).
Inspired by how the human brain works, our framework encodes data with distributed representation across multiple memory blocks.
MRL enables MANN to reinforce an association between input data and task objective by reproducing input data from stored memory contents.
arXiv Detail & Related papers (2020-07-21T07:34:33Z)
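The soft read mentioned in the Robust High-dimensional Memory-augmented Neural Networks entry above can be illustrated generically: a query attends over every memory slot via a softmax of similarities and returns a weighted sum. The NumPy sketch below is a minimal version of this standard memory-augmented-network read, not that paper's analog in-memory-computing architecture; the function name, beta parameter, and dimensions are assumptions.

```python
import numpy as np

def soft_read(memory: np.ndarray, query: np.ndarray, beta: float = 10.0) -> np.ndarray:
    """Generic soft read over an explicit memory matrix.

    memory : (num_slots, dim) array of stored vectors
    query  : (dim,) read key
    beta   : softmax sharpening factor (assumed value)

    Every slot contributes to the result, weighted by its cosine similarity
    to the query -- the "soft read involving every individual memory entry"
    described in the entry above.
    """
    # Cosine similarity between the query and every memory slot.
    mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    q_norm = query / (np.linalg.norm(query) + 1e-8)
    sims = mem_norm @ q_norm                           # (num_slots,)

    # Numerically stable softmax attention weights over all slots.
    logits = beta * sims
    weights = np.exp(logits - np.max(logits))
    weights /= weights.sum()

    # Weighted sum of memory rows.
    return weights @ memory                            # (dim,)


# Usage on random data.
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 16))            # 8 slots, 16-dimensional vectors
q = M[3] + 0.05 * rng.normal(size=16)   # noisy version of slot 3
out = soft_read(M, q)
print(np.argmax(M @ out))               # most likely 3: the read recovers slot 3
```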
This list is automatically generated from the titles and abstracts of the papers on this site.