Memory Augmented Neural Model for Incremental Session-based
Recommendation
- URL: http://arxiv.org/abs/2005.01573v1
- Date: Tue, 28 Apr 2020 19:07:20 GMT
- Title: Memory Augmented Neural Model for Incremental Session-based
Recommendation
- Authors: Fei Mi, Boi Faltings
- Abstract summary: We show that existing neural recommenders can be used in incremental Session-based Recommendation scenarios.
We propose a general framework called Memory Augmented Neural model (MAN)
MAN augments a base neural recommender with a continuously queried and updated nonparametric memory.
- Score: 36.33193124174747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing concerns with privacy have stimulated interest in Session-based
Recommendation (SR) using no personal data other than what is observed in the
current browser session. Existing methods are evaluated in static settings
which rarely occur in real-world applications. To better address the dynamic
nature of SR tasks, we study an incremental SR scenario, where new items and
preferences appear continuously. We show that existing neural recommenders can
be used in incremental SR scenarios with small incremental updates to alleviate
computation overhead and catastrophic forgetting. More importantly, we propose
a general framework called Memory Augmented Neural model (MAN). MAN augments a
base neural recommender with a continuously queried and updated nonparametric
memory, and the predictions from the neural and the memory components are
combined through another lightweight gating network. We empirically show that
MAN is well-suited for the incremental SR task, and it consistently outperforms
state-of-the-art neural and nonparametric methods. We analyze the results and
demonstrate that it is particularly good at incrementally learning preferences
on new and infrequent items.
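The abstract describes MAN as a base neural recommender whose scores are blended with those of a continuously queried and updated nonparametric memory through a lightweight gating network. Below is a minimal sketch of that idea, assuming a similarity-based session memory and a scalar gate; the names (SessionMemory, man_predict) and all implementation details are illustrative assumptions, not the authors' code (the paper learns the gate with a small network rather than fixing a scalar).
```python
import numpy as np

class SessionMemory:
    """Nonparametric memory: stores (session embedding, observed next item) pairs."""

    def __init__(self, capacity=10000):
        self.keys, self.values = [], []
        self.capacity = capacity

    def update(self, session_emb, next_item):
        # Continuously add new observations; evict the oldest when full.
        self.keys.append(session_emb)
        self.values.append(next_item)
        if len(self.keys) > self.capacity:
            self.keys.pop(0)
            self.values.pop(0)

    def query(self, session_emb, n_items, k=50):
        # Score items by similarity-weighted votes of the k most similar stored sessions.
        scores = np.zeros(n_items)
        if not self.keys:
            return scores
        keys = np.stack(self.keys)
        sims = keys @ session_emb / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(session_emb) + 1e-8
        )
        for idx in np.argsort(sims)[-k:]:
            scores[self.values[idx]] += max(sims[idx], 0.0)
        return scores

def man_predict(neural_scores, memory_scores, gate_weight):
    # Blend the two components' distributions; the paper uses a learned gating
    # network, while a fixed scalar gate_weight is used here only for illustration.
    p_neural = np.exp(neural_scores - neural_scores.max())
    p_neural /= p_neural.sum()
    total = memory_scores.sum()
    p_memory = memory_scores / total if total > 0 else p_neural
    return gate_weight * p_neural + (1.0 - gate_weight) * p_memory

# Hypothetical usage with toy data.
n_items, emb_dim = 1000, 32
memory = SessionMemory()
memory.update(np.random.rand(emb_dim), next_item=42)
session_emb = np.random.rand(emb_dim)
probs = man_predict(np.random.randn(n_items), memory.query(session_emb, n_items), gate_weight=0.7)
```
In such a setup the nonparametric memory can absorb new items immediately, while the neural component only needs small incremental updates, which matches the incremental SR setting described in the abstract.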
Related papers
- Memory-augmented conformer for improved end-to-end long-form ASR [9.876354589883002]
We propose a memory-augmented neural network between the encoder and decoder of a conformer.
This external memory can enrich the generalization for longer utterances.
We show that the proposed system outperforms the baseline conformer without memory for long utterances.
arXiv Detail & Related papers (2023-09-22T17:44:58Z) - Selective Memory Recursive Least Squares: Recast Forgetting into Memory
in RBF Neural Network Based Real-Time Learning [2.31120983784623]
In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used.
This paper proposes a real-time training method named selective memory recursive least squares (SMRLS) in which the classical forgetting mechanisms are recast into a memory mechanism.
With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions and a synthesized objective function is developed using synthesized samples from each partition.
arXiv Detail & Related papers (2022-11-15T05:29:58Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - The impact of memory on learning sequence-to-sequence tasks [6.603326895384289]
Recent success of neural networks in natural language processing has drawn renewed attention to learning sequence-to-sequence (seq2seq) tasks.
We propose a model for a seq2seq task that has the advantage of providing explicit control over the degree of memory, or non-Markovianity, in the sequences.
arXiv Detail & Related papers (2022-05-29T14:57:33Z) - Deep Impulse Responses: Estimating and Parameterizing Filters with Deep
Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z) - Reinforcement Learning based Path Exploration for Sequential Explainable
Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL)
TMER-RL uses reinforcement learning to model item-item paths between consecutive items with attention mechanisms, sequentially capturing dynamic user-item evolutions on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Factorized Neural Processes for Neural Processes: $K$-Shot Prediction of
Neural Responses [9.792408261365043]
We develop a Factorized Neural Process to infer a neuron's tuning function from a small set of stimulus-response pairs.
We show on simulated responses that the predictions and reconstructed receptive fields from the Neural Process approach the ground truth as the number of trials increases.
We believe this novel deep learning systems identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
arXiv Detail & Related papers (2020-10-22T15:43:59Z) - On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis [30.75240284934018]
We consider the simple but representative setting of using continuous-time linear RNNs to learn from data generated by linear relationships.
We prove a universal approximation theorem of such linear functionals, and characterize the approximation rate and its relation with memory.
A unifying theme uncovered is the non-trivial effect of memory, a notion that can be made precise in our framework.
arXiv Detail & Related papers (2020-09-16T16:48:28Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies (a rough sketch of this idea follows below).
arXiv Detail & Related papers (2020-06-29T08:35:49Z)