Memory semantization through perturbed and adversarial dreaming
- URL: http://arxiv.org/abs/2109.04261v1
- Date: Thu, 9 Sep 2021 13:31:13 GMT
- Title: Memory semantization through perturbed and adversarial dreaming
- Authors: Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, and Jakob Jordan
- Abstract summary: We propose that rapid-eye-movement (REM) dreaming is essential for efficient memory semantization.
We implement a cortical architecture with hierarchically organized feedforward and feedback pathways, inspired by generative adversarial networks (GANs).
Our results suggest that adversarial dreaming during REM sleep is essential for extracting memory contents, while perturbed dreaming during NREM sleep improves the robustness of the latent representation to noisy sensory inputs.
- Score: 0.7874708385247353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical theories of memory consolidation emphasize the importance of replay
in extracting semantic information from episodic memories. However, the
characteristic creative nature of dreams suggests that memory semantization may
go beyond merely replaying previous experiences. We propose that
rapid-eye-movement (REM) dreaming is essential for efficient memory
semantization by randomly combining episodic memories to create new, virtual
sensory experiences. We support this hypothesis by implementing a cortical
architecture with hierarchically organized feedforward and feedback pathways,
inspired by generative adversarial networks (GANs). Learning in our model is
organized across three different global brain states mimicking wakefulness,
non-REM (NREM) and REM sleep, optimizing different, but complementary objective
functions. We train the model in an unsupervised fashion on standard datasets
of natural images and evaluate the quality of the learned representations. Our
results suggest that adversarial dreaming during REM sleep is essential for
extracting memory contents, while perturbed dreaming during NREM sleep improves
robustness of the latent representation to noisy sensory inputs. The model
provides a new computational perspective on sleep states, memory replay and
dreams and suggests a cortical implementation of GANs.
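To make the three-state training scheme concrete, below is a minimal PyTorch sketch of how such a model could be organized: a feedforward encoder and feedback generator trained by reconstruction during wakefulness, the encoder trained on occluded replays during NREM, and adversarial training on recombined episodic memories during REM. The layer sizes, occlusion rule, 50/50 latent mixing, and optimizers are illustrative assumptions, not the authors' published implementation.
```python
# Minimal sketch of the wake/NREM/REM scheme described in the abstract,
# written as a conventional GAN in PyTorch. All hyperparameters here are
# illustrative assumptions.
import torch
import torch.nn as nn

IMG, LATENT = 32 * 32 * 3, 64  # flattened natural-image input, latent size

encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, LATENT))    # feedforward pathway
generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG))  # feedback pathway
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 1))   # real vs. dreamt

mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()
opt_eg = torch.optim.Adam([*encoder.parameters(), *generator.parameters()], lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
memory = []  # episodic store of latent codes laid down during wakefulness


def wake(x):
    """Encode a sensory input, store its latent code, reconstruct it, and
    teach the discriminator that x is an externally caused ('real') input."""
    z = encoder(x)
    memory.append(z.detach())
    recon_loss = mse(generator(z), x)
    opt_eg.zero_grad(); recon_loss.backward(); opt_eg.step()
    d_real = discriminator(x)
    d_loss = bce(d_real, torch.ones_like(d_real))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()


def nrem():
    """Perturbed dreaming: replay one episode through the feedback pathway,
    occlude part of it, and train the encoder to recover the stored code,
    making the latent representation robust to corrupted inputs."""
    z = memory[-1]
    dream = generator(z).detach()
    occluded = dream * (torch.rand_like(dream) > 0.3).float()  # random occlusion
    loss = mse(encoder(occluded), z)
    opt_eg.zero_grad(); loss.backward(); opt_eg.step()


def rem():
    """Adversarial dreaming: combine two episodic memories into a new virtual
    experience; the discriminator learns to flag it as dreamt while the
    generator learns to make it pass for a waking input."""
    i, j = torch.randint(len(memory), (2,)).tolist()
    dream = generator(0.5 * memory[i] + 0.5 * memory[j])
    d_fake = discriminator(dream.detach())
    d_loss = bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    d_gen = discriminator(dream)
    g_loss = bce(d_gen, torch.ones_like(d_gen))
    opt_eg.zero_grad(); g_loss.backward(); opt_eg.step()


for step in range(100):        # one "day" per step: wake on a batch, then sleep
    wake(torch.rand(16, IMG))  # stand-in for a batch of natural images
    nrem()
    rem()
```
For simplicity the sketch uses a stand-alone image-level discriminator; a cortical implementation as suggested by the paper would presumably realize the discriminator within the feedforward pathway itself.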
Related papers
- MuDreamer: Learning Predictive World Models without Reconstruction [58.0159270859475]
We present MuDreamer, a robust reinforcement learning agent that builds upon the DreamerV3 algorithm by learning a predictive world model without the need for reconstructing input signals.
Our method achieves comparable performance on the Atari100k benchmark while benefiting from faster training.
arXiv Detail & Related papers (2024-05-23T22:09:01Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Wake-Sleep Consolidated Learning [9.596781985154927]
We propose Wake-Sleep Consolidated Learning to improve the performance of deep neural networks for visual classification tasks.
Our method learns continually via the synchronization between distinct wake and sleep phases.
We evaluate the effectiveness of our approach on three benchmark datasets.
arXiv Detail & Related papers (2023-12-06T18:15:08Z) - Saliency-Guided Hidden Associative Replay for Continual Learning [13.551181595881326]
Continual Learning is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning.
This paper presents Saliency-Guided Hidden Associative Replay for Continual Learning (SHARC), a framework that combines associative memory with replay-based strategies.
SHARC archives salient data segments via sparse memory encoding.
arXiv Detail & Related papers (2023-10-06T15:54:12Z)
- Learning beyond sensations: how dreams organize neuronal representations [1.749248408967819]
We discuss two complementary learning principles that organize representations through the generation of virtual experiences.
These principles are compatible with known cortical structure and dynamics and the phenomenology of sleep.
arXiv Detail & Related papers (2023-08-03T15:45:12Z)
- Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms for encoding, and hierarchical attention for selectively retrieving, information about others.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z)
- Saliency-Augmented Memory Completion for Continual Learning [8.243137410556495]
How to forget is a problem continual learning must address.
Our paper proposes a new saliency-augmented memory completion framework for continual learning.
arXiv Detail & Related papers (2022-12-26T18:06:39Z)
- Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
arXiv Detail & Related papers (2022-09-09T13:45:27Z)
- The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding [16.37225919719441]
We present a unified computational theory of perception and memory.
In our model, perception, episodic memory, and semantic memory are realized by different functional and operational modes.
arXiv Detail & Related papers (2021-09-27T23:32:44Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it provides the language-ready brain with the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from that of the relationships between them (relational memory).
Our proposed two-memory model achieves competitive results across a diverse range of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)