Universal Recurrent Event Memories for Streaming Data
- URL: http://arxiv.org/abs/2307.15694v1
- Date: Fri, 28 Jul 2023 17:40:58 GMT
- Title: Universal Recurrent Event Memories for Streaming Data
- Authors: Ran Dou and Jose Principe
- Abstract summary: We propose a new event memory architecture (MemNet) for recurrent neural networks.
MemNet stores key-value pairs, which separate the information for addressing and for content.
The MemNet architecture can be applied without modification to scalar time series, logic operators on strings, and natural language processing.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new event memory architecture (MemNet) for
recurrent neural networks that is universal across different types of time series
data, such as scalar, multivariate, or symbolic series. Unlike other external
neural memory architectures, MemNet stores key-value pairs, which separate the
information used for addressing from the content, improving the representation as
in the digital archetype. Moreover, the key-value pairs avoid the compromise
between memory depth and resolution that affects memories constructed from the
model state. One of MemNet's key characteristics is that it requires only linear
adaptive mapping functions while still implementing a nonlinear operation on the
input data. The MemNet architecture can be applied without modification to scalar
time series, logic operators on strings, and natural language processing, providing
state-of-the-art results in all of the tested application domains: chaotic time
series, symbolic operation tasks, and question-answering tasks (bAbI). Finally,
because it is controlled by only five linear layers, MemNet requires far fewer
training parameters than other external memory networks and the transformer
network. The space complexity of MemNet equals that of a single self-attention
layer, which greatly improves the efficiency of the attention mechanism and opens
the door to IoT applications.
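The following is a minimal, hypothetical PyTorch sketch of the key-value event memory idea summarized above: separate linear maps produce keys for addressing and values for content, and a softmax over key similarities reads the memory back, so a nonlinear operation on the input emerges from purely linear adaptive mappings. The class name, slot budget, and read/write details are illustrative assumptions, not the authors' MemNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueEventMemory(nn.Module):
    """Illustrative key-value event memory for a streaming input.

    Sketch only: linear maps for key/value/query plus a softmax read,
    mirroring the key-value idea in the abstract, not the paper's code.
    """

    def __init__(self, input_dim: int, key_dim: int, value_dim: int, num_slots: int):
        super().__init__()
        # Linear adaptive mappings: addressing (key/query) and content (value).
        self.to_key = nn.Linear(input_dim, key_dim)
        self.to_query = nn.Linear(input_dim, key_dim)
        self.to_value = nn.Linear(input_dim, value_dim)
        self.readout = nn.Linear(value_dim, input_dim)
        self.num_slots = num_slots

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (seq_len, input_dim), processed one event at a time.
        keys, values, outputs = [], [], []
        for x in x_seq:
            # Write: store a key-value pair for the current event.
            keys.append(self.to_key(x))
            values.append(self.to_value(x))
            if len(keys) > self.num_slots:          # fixed memory budget
                keys.pop(0); values.pop(0)
            K, V = torch.stack(keys), torch.stack(values)
            # Read: soft addressing by key-query similarity; the softmax
            # makes the overall input-output map nonlinear even though
            # every adaptive component above is linear.
            attn = F.softmax(K @ self.to_query(x), dim=0)
            outputs.append(self.readout(attn @ V))
        return torch.stack(outputs)

# Usage on a toy stream: 100 steps of 16-dimensional features.
mem = KeyValueEventMemory(input_dim=16, key_dim=32, value_dim=32, num_slots=64)
y = mem(torch.randn(100, 16))   # y has shape (100, 16)
```

Because addressing (keys) and content (values) are stored separately, the number of slots can grow without blurring the stored content, which is the depth-resolution trade-off the abstract refers to.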
Related papers
- Dr$^2$Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning [81.0108753452546]
We propose Dynamic Reversible Dual-Residual Networks, or Dr$^2$Net, to finetune a pretrained model with substantially reduced memory consumption.
Dr$^2$Net contains two types of residual connections, one maintaining the residual structure in the pretrained models, and the other making the network reversible.
We show that Dr$^2$Net can reach comparable performance to conventional finetuning but with significantly less memory usage.
arXiv Detail & Related papers (2024-01-08T18:59:31Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Recurrent Memory Transformer [0.3529736140137003]
We study a memory-augmented, segment-level recurrent Transformer (Recurrent Memory Transformer).
We implement a memory mechanism with no changes to the Transformer model by adding special memory tokens to the input or output sequence (a minimal sketch of this memory-token idea appears after this list).
Our model performs on par with Transformer-XL on language modeling for smaller memory sizes and outperforms it on tasks that require longer sequence processing.
arXiv Detail & Related papers (2022-07-14T13:00:22Z)
- Neural Storage: A New Paradigm of Elastic Memory [4.307341575886927]
Storage and retrieval of data in a computer memory play a major role in system performance.
We introduce Neural Storage (NS), a brain-inspired learning memory paradigm that organizes the memory as a flexible neural memory network.
NS achieves an order of magnitude improvement in memory access performance for two representative applications.
arXiv Detail & Related papers (2021-01-07T19:19:25Z)
- Memformer: A Memory-Augmented Transformer for Sequence Modeling [55.780849185884996]
We present Memformer, an efficient neural network for sequence modeling.
Our model achieves linear time complexity and constant memory space complexity when processing long sequences.
arXiv Detail & Related papers (2020-10-14T09:03:36Z)
- Robust High-dimensional Memory-augmented Neural Networks [13.82206983716435]
Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues.
Access to this explicit memory occurs via soft read and write operations involving every individual memory entry (see the soft-addressing sketch after this list).
We propose a robust architecture that employs a computational memory unit as the explicit memory, performing analog in-memory computation on high-dimensional (HD) vectors.
arXiv Detail & Related papers (2020-10-05T12:01:56Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed network path, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Distributed Associative Memory Network with Memory Refreshing Loss [5.5792083698526405]
We introduce a novel Distributed Associative Memory architecture (DAM) with Memory Refreshing Loss (MRL).
Inspired by how the human brain works, our framework encodes data with distributed representation across multiple memory blocks.
MRL enables a memory-augmented neural network (MANN) to reinforce the association between input data and the task objective by reproducing the input data from stored memory contents.
arXiv Detail & Related papers (2020-07-21T07:34:33Z)
- Video Object Segmentation with Episodic Graph Memory Networks [198.74780033475724]
A graph memory network is developed to address the novel idea of "learning to update the segmentation model".
We exploit an episodic memory network, organized as a fully connected graph, to store frames as nodes and capture cross-frame correlations by edges.
The proposed graph memory network yields a neat yet principled framework, which generalizes well to both one-shot and zero-shot video object segmentation tasks.
arXiv Detail & Related papers (2020-07-14T13:19:19Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture that explicitly targets multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- In-memory Implementation of On-chip Trainable and Scalable ANN for AI/ML Applications [0.0]
This paper presents an in-memory computing architecture for artificial neural networks (ANNs), enabling artificial intelligence (AI) and machine learning (ML) applications.
Our novel on-chip training and inference in-memory architecture reduces energy cost and enhances throughput by simultaneously accessing multiple rows of the array per precharge cycle.
The proposed architecture was trained and tested on the IRIS dataset and is $46\times$ more energy efficient per MAC (multiply-and-accumulate) operation than earlier classifiers.
arXiv Detail & Related papers (2020-05-19T15:36:39Z)
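As referenced in the Recurrent Memory Transformer entry above, here is a minimal, hypothetical sketch of the memory-token idea: learned memory embeddings are prepended to each segment, the segment is processed by an unmodified Transformer encoder, and the updated memory states are carried to the next segment. Names, sizes, and the carry-over rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MemoryTokenSegmentModel(nn.Module):
    """Segment-level recurrence via memory tokens (illustrative sketch)."""

    def __init__(self, d_model: int = 64, num_mem_tokens: int = 4,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.mem_tokens = nn.Parameter(torch.randn(num_mem_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # unmodified Transformer
        self.num_mem = num_mem_tokens

    def forward(self, segments):
        # segments: list of (batch, seg_len, d_model) tensors.
        batch = segments[0].size(0)
        mem = self.mem_tokens.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        for seg in segments:
            x = torch.cat([mem, seg], dim=1)        # prepend memory tokens
            h = self.encoder(x)
            mem = h[:, :self.num_mem, :]            # carry memory to next segment
            outputs.append(h[:, self.num_mem:, :])  # per-segment outputs
        return torch.cat(outputs, dim=1), mem

# Usage: three segments of length 16 from a batch of 2 sequences.
model = MemoryTokenSegmentModel()
out, final_mem = model([torch.randn(2, 16, 64) for _ in range(3)])  # out: (2, 48, 64)
```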
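Similarly, for the Robust High-dimensional Memory-augmented Neural Networks entry, the following is a hypothetical sketch of soft read and write operations that touch every memory entry; the cosine-similarity addressing and blending update are common MANN choices used here for illustration, not necessarily the paper's exact rules.

```python
import torch
import torch.nn.functional as F

def soft_read(memory: torch.Tensor, query: torch.Tensor, beta: float = 10.0):
    # memory: (slots, dim), query: (dim,). Every slot receives a nonzero weight.
    scores = F.cosine_similarity(memory, query.unsqueeze(0), dim=1)
    weights = F.softmax(beta * scores, dim=0)
    return weights @ memory, weights            # weighted sum over all entries

def soft_write(memory: torch.Tensor, weights: torch.Tensor,
               new_value: torch.Tensor, lr: float = 0.5) -> torch.Tensor:
    # Blend the new value into every slot, in proportion to its read weight.
    return memory + lr * weights.unsqueeze(1) * (new_value.unsqueeze(0) - memory)

# Usage with 128 high-dimensional (512-d) slots.
memory = torch.randn(128, 512)
value, w = soft_read(memory, torch.randn(512))
memory = soft_write(memory, w, torch.randn(512))
```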
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.