On the Relationship Between Variational Inference and Auto-Associative
Memory
- URL: http://arxiv.org/abs/2210.08013v1
- Date: Fri, 14 Oct 2022 14:18:47 GMT
- Title: On the Relationship Between Variational Inference and Auto-Associative
Memory
- Authors: Louis Annabi, Alexandre Pitti and Mathias Quoy
- Abstract summary: We study how different neural network approaches to variational inference can be applied in this framework.
We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this article, we propose a variational inference formulation of
auto-associative memories, allowing us to combine perceptual inference and
memory retrieval into the same mathematical framework. In this formulation, the
prior probability distribution over latent representations is made memory
dependent, thus pulling the inference process towards previously stored
representations. We then study how different neural network approaches to
variational inference can be applied in this framework. We compare methods
relying on amortized inference such as Variational Auto Encoders and methods
relying on iterative inference such as Predictive Coding and suggest combining
both approaches to design new auto-associative memory models. We evaluate the
obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them
with other associative memory models such as Hopfield Networks, End-to-End
Memory Networks and Neural Turing Machines.
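The abstract already pins down the computational recipe: a generative model with a memory-dependent prior over latents, and inference that can be amortized, iterative, or a combination of both. Below is a minimal sketch of that recipe; the Gaussian-mixture prior centred on stored latent codes, the Gaussian likelihood, and the gradient-based refinement loop are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): memory retrieval as variational
# inference with a memory-dependent prior. The mixture-of-Gaussians prior
# centred on stored latent codes and the gradient-refinement loop are
# illustrative assumptions.
import torch

torch.manual_seed(0)
D_x, D_z, K = 64, 16, 10                     # observation dim, latent dim, stored patterns

decoder = torch.nn.Linear(D_z, D_x)          # p(x | z), stands in for a deeper generative net
encoder = torch.nn.Linear(D_x, D_z)          # amortized q(z | x), VAE-style
memory = torch.randn(K, D_z)                 # latent codes of stored patterns

def log_prior(z, beta=4.0):
    """Memory-dependent prior: mixture of Gaussians centred on stored codes."""
    sq_dist = ((z.unsqueeze(0) - memory) ** 2).sum(-1)       # (K,)
    return torch.logsumexp(-0.5 * beta * sq_dist, dim=0)

def elbo(x, z):
    recon = decoder(z)
    log_lik = -0.5 * ((x - recon) ** 2).sum()                # Gaussian likelihood
    return log_lik + log_prior(z)                            # prior pulls z toward memory

def retrieve(x, steps=50, lr=0.1):
    """Amortized initialization followed by iterative (predictive-coding-like)
    refinement of the latent toward stored representations."""
    z = encoder(x).detach().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-elbo(x, z)).backward()
        opt.step()
    return decoder(z).detach()                               # completed pattern

x_corrupted = torch.randn(D_x)                               # a noisy cue
x_hat = retrieve(x_corrupted)
```

The memory-dependent prior is what turns ordinary perceptual inference into retrieval: the same ELBO is optimized, but its prior term attracts the latent toward previously stored codes.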
Related papers
- Demolition and Reinforcement of Memories in Spin-Glass-like Neural
Networks [0.0]
The aim of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models.
The selection of structured data enables an associative memory model to retrieve concepts as attractors of a neural dynamics with considerable basins of attraction.
A novel regularization technique for Boltzmann Machines is presented, which outperforms previously developed methods in learning hidden probability distributions from datasets.
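For readers unfamiliar with unlearning in Hopfield-style networks, the toy sketch below shows the classical anti-Hebbian "dreaming" procedure (Hopfield, Feinstein and Palmer); it illustrates the general mechanism only and is not the specific algorithm developed in this thesis.

```python
# Illustrative sketch of classical Hopfield unlearning ("dreaming"), not the
# thesis' algorithm: spurious attractors reached from random states are
# weakened with an anti-Hebbian update, enlarging the basins of the stored
# patterns.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 10
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N                 # Hebbian storage
np.fill_diagonal(W, 0.0)

def relax(state, steps=50):
    """Asynchronous updates until the state settles into an attractor."""
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

eps = 0.01
for _ in range(200):                            # unlearning ("dreaming") phase
    attractor = relax(rng.choice([-1, 1], size=N))
    W -= eps * np.outer(attractor, attractor) / N
    np.fill_diagonal(W, 0.0)
```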
arXiv Detail & Related papers (2024-03-04T23:12:42Z)
- Bridging Associative Memory and Probabilistic Modeling [29.605203018237457]
Associative memory and probabilistic modeling are two fundamental topics in artificial intelligence.
We build a bridge between the two that enables useful flow of ideas in both directions.
arXiv Detail & Related papers (2024-02-15T18:56:46Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
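A minimal sketch of what "learnable memory tokens with an attention mechanism" can look like in practice follows; the single-head cross-attention form, the residual read, and all sizes are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of the general idea of learnable memory tokens read through
# attention (sizes and the single-head formulation are assumptions).
import torch

D, M = 128, 32                                            # feature dim, number of memory tokens
memory_tokens = torch.nn.Parameter(torch.randn(M, D))     # learnable, trained with the task loss
to_q = torch.nn.Linear(D, D)
to_k = torch.nn.Linear(D, D)
to_v = torch.nn.Linear(D, D)

def memory_augment(features):
    """Augment per-item features (B, T, D) with information read from memory."""
    q = to_q(features)                                    # (B, T, D)
    k, v = to_k(memory_tokens), to_v(memory_tokens)       # (M, D)
    attn = torch.softmax(q @ k.T / D ** 0.5, dim=-1)      # (B, T, M)
    return features + attn @ v                            # residual read from memory

out = memory_augment(torch.randn(4, 10, D))
```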
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Classification and Generation of real-world data with an Associative Memory Model [0.0]
We extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework.
By storing both the images and labels as modalities, a single Memory can be used to retrieve and complete patterns.
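A toy sketch of the multiple-modality idea follows: images and one-hot labels are stored as a single concatenated pattern, so one memory can both classify (complete the label from an image) and generate (complete an image from a label). The softmax-attention retrieval rule below is an illustrative choice, not necessarily the model used in the paper.

```python
# Hedged sketch: a single associative memory over concatenated (image, label)
# patterns, completed from whichever modality is available.
import numpy as np

rng = np.random.default_rng(0)
D_img, C, P = 64, 10, 100
images = rng.normal(size=(P, D_img))
labels = np.eye(C)[rng.integers(0, C, size=P)]
memory = np.concatenate([images, labels], axis=1)      # (P, D_img + C)

def complete(query, known_mask, beta=8.0):
    """Softly retrieve the stored pattern whose known entries best match the query."""
    scores = (memory[:, known_mask] @ query[known_mask]) * beta
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory                             # completed (image, label) pattern

img_mask = np.arange(D_img + C) < D_img                 # only the image part is known
completed = complete(np.concatenate([images[3], np.zeros(C)]), img_mask)
predicted_label = completed[D_img:].argmax()            # recovers the label stored with pattern 3
```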
arXiv Detail & Related papers (2022-07-11T12:51:27Z)
- Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimum computational expense.
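The hybrid scheme can be pictured with a short sketch: an amortized network proposes a belief in one shot, and iterative inference refines it only while the prediction error stays high, so easy inputs remain fast and hard inputs receive extra computation. The stopping rule, learning rate, and network sizes below are assumptions, not the paper's model.

```python
# Schematic sketch of the hybrid idea (not the paper's implementation):
# amortized proposal followed by error-driven iterative refinement with an
# assumed early-stopping threshold.
import torch

D_x, D_z = 64, 16
amortized = torch.nn.Linear(D_x, D_z)      # fast, bottom-up proposal
generative = torch.nn.Linear(D_z, D_x)     # top-down prediction

def infer(x, max_steps=100, lr=0.05, tol=1e-2):
    z = amortized(x).detach().requires_grad_(True)
    for _ in range(max_steps):
        error = ((x - generative(z)) ** 2).mean()
        if error.item() < tol:             # prediction good enough: stop early
            break
        error.backward()
        with torch.no_grad():
            z -= lr * z.grad               # iterative, predictive-coding-like step
        z.grad = None
    return z.detach()

z_hat = infer(torch.randn(D_x))
```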
arXiv Detail & Related papers (2022-04-05T12:52:45Z)
- Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models [41.58529335439799]
We propose a general framework for understanding the operation of memory networks as a sequence of three operations.
We derive all these memory models as instances of our general framework with differing similarity and separation functions.
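The three operations referred to here are similarity, separation, and projection; a compact sketch of this decomposition is given below. With a dot-product similarity and a softmax separation it behaves like a modern Hopfield network, while an argmax separation gives nearest-neighbour retrieval. The concrete function choices are examples, not an exhaustive catalogue of the framework.

```python
# Sketch of the three-operation view of single-shot associative memories:
# retrieval = projection(separation(similarity(memory, query))).
import numpy as np

rng = np.random.default_rng(0)
P, D = 50, 128
memory = rng.normal(size=(P, D))

def retrieve(query, similarity, separation, beta=8.0):
    scores = similarity(memory, query)          # (P,) raw match scores
    weights = separation(scores * beta)         # (P,) sharpened, normalized weights
    return weights @ memory                     # projection back to pattern space

dot = lambda M, q: M @ q
softmax = lambda s: np.exp(s - s.max()) / np.exp(s - s.max()).sum()
one_hot_max = lambda s: np.eye(len(s))[np.argmax(s)]

query = memory[7] + 0.3 * rng.normal(size=D)    # corrupted cue
hopfield_like = retrieve(query, dot, softmax)   # modern-Hopfield-style retrieval
nearest_nb = retrieve(query, dot, one_hot_max)  # recovers stored pattern 7 exactly
```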
arXiv Detail & Related papers (2022-02-09T16:48:06Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
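A standard way to obtain such an amortized likelihood-to-evidence ratio estimator is to train a classifier to distinguish jointly simulated (parameter, data) pairs from shuffled ones; its logit then approximates log p(x|theta)/p(x). The sketch below shows this classifier setup on a toy simulator; the paper's mutual-information objective is a reinterpretation of this kind of setup and is not reproduced here.

```python
# Hedged sketch of classifier-based likelihood-to-evidence ratio estimation
# on a toy simulator (the simulator, network, and training schedule are
# illustrative assumptions).
import torch

def simulator(theta):                           # toy simulator, stands in for the real one
    return theta + 0.1 * torch.randn_like(theta)

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for _ in range(1000):
    theta = torch.rand(256, 1) * 2 - 1                         # prior samples
    x = simulator(theta)
    joint = torch.cat([theta, x], dim=1)                       # label 1: dependent pairs
    marginal = torch.cat([theta, x[torch.randperm(256)]], 1)   # label 0: shuffled pairs
    inputs = torch.cat([joint, marginal])
    targets = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(net(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

log_ratio = net(torch.tensor([[0.3, 0.28]]))    # approx. log p(x=0.28 | theta=0.3) / p(x=0.28)
```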
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
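One way to picture "recall as variational inference of a latent memory variable" is sketched below: a query attends over the stored contents, and the addressed content parameterizes a Gaussian from which the latent memory variable is sampled via the reparameterization trick. The attention addressing, the Gaussian form, and all sizes are assumptions made for illustration, not the paper's model.

```python
# Hedged sketch: memory recall as sampling a latent memory variable whose
# distribution is parameterized by attention-addressed contents.
import torch

D, S = 64, 100
memory = torch.randn(S, D)                     # consolidated semantic memory slots
to_mu = torch.nn.Linear(D, D)
to_logvar = torch.nn.Linear(D, D)

def recall(query):
    weights = torch.softmax(memory @ query / D ** 0.5, dim=0)   # addressing
    content = weights @ memory                                   # addressed contents
    mu, logvar = to_mu(content), to_logvar(content)
    return mu + torch.exp(0.5 * logvar) * torch.randn(D)         # reparameterized sample

m = recall(torch.randn(D))
```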
arXiv Detail & Related papers (2020-10-20T15:05:26Z)