Intrinsic Motivation and Episodic Memories for Robot Exploration of
High-Dimensional Sensory Spaces
- URL: http://arxiv.org/abs/2001.01982v1
- Date: Tue, 7 Jan 2020 11:39:20 GMT
- Title: Intrinsic Motivation and Episodic Memories for Robot Exploration of
High-Dimensional Sensory Spaces
- Authors: Guido Schillaci, Antonio Pico Villalpando, Verena Vanessa Hafner,
Peter Hanappe, David Colliaux, Timothée Wintz
- Abstract summary: This work presents an architecture that generates curiosity-driven goal-directed exploration behaviours for an image sensor of a microfarming robot.
The architecture combines deep neural networks, trained offline and without supervision to learn low-dimensional features from images, with shallow neural networks, trained online, that represent the inverse and forward kinematics of the system.
The artificial curiosity system assigns interest values to a set of pre-defined goals, and drives the exploration towards those that are expected to maximise the learning progress.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents an architecture that generates curiosity-driven
goal-directed exploration behaviours for an image sensor of a microfarming
robot. The architecture combines deep neural networks, trained offline and
without supervision to learn low-dimensional features from images, with shallow
neural networks, trained online, that represent the inverse and forward
kinematics of the system. The artificial curiosity system assigns interest
values to a set of pre-defined goals and drives exploration towards those that
are expected to maximise the learning progress. We propose integrating an
episodic memory into intrinsic motivation systems to address the catastrophic
forgetting issues typically experienced when performing online updates of artificial
neural networks. Our results show that adopting an episodic memory system not
only prevents the computational models from quickly forgetting knowledge that
has been previously acquired, but also provides new avenues for modulating the
balance between plasticity and stability of the models.
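To make the two mechanisms concrete, the following is a minimal Python sketch, not the authors' implementation: interest values are derived from recent learning progress (the decrease of prediction error over a sliding window), and a fixed-capacity episodic memory supplies rehearsal samples that are mixed into each online update. The class names, the goal set, the window size, and the buffer capacity are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of curiosity-driven goal
# selection by learning progress plus episodic-memory rehearsal.
import numpy as np

rng = np.random.default_rng(0)


class EpisodicMemory:
    """Fixed-capacity buffer of (input, target) pairs used for rehearsal."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.samples = []

    def store(self, x, y):
        if len(self.samples) >= self.capacity:
            # Overwrite a randomly chosen old sample; the choice of forgetting
            # strategy is one way to modulate plasticity vs. stability.
            self.samples[rng.integers(len(self.samples))] = (x, y)
        else:
            self.samples.append((x, y))

    def sample(self, n):
        n = min(n, len(self.samples))
        idx = rng.choice(len(self.samples), size=n, replace=False)
        return [self.samples[i] for i in idx]


class CuriositySystem:
    """Assigns interest to pre-defined goals from their recent learning progress."""

    def __init__(self, goals, window=10):
        self.goals = goals                      # e.g. target points in the latent feature space
        self.errors = {g: [] for g in range(len(goals))}
        self.window = window

    def update_error(self, goal_id, prediction_error):
        self.errors[goal_id].append(prediction_error)

    def interest(self, goal_id):
        e = self.errors[goal_id]
        if len(e) < 2 * self.window:
            return np.inf                       # unexplored goals are maximally interesting
        older = np.mean(e[-2 * self.window:-self.window])
        recent = np.mean(e[-self.window:])
        return older - recent                   # positive = error decreasing = learning progress

    def select_goal(self):
        return max(range(len(self.goals)), key=self.interest)


# Toy usage: each online update would train the forward/inverse models on the
# new sample mixed with replayed samples from the episodic memory.
curiosity = CuriositySystem(goals=[np.zeros(2), np.ones(2)])
memory = EpisodicMemory(capacity=100)
g = curiosity.select_goal()
x, y = rng.normal(size=4), rng.normal(size=2)   # placeholder motor command / observed outcome
memory.store(x, y)
rehearsal_batch = memory.sample(8) + [(x, y)]   # batch for one online gradient step
curiosity.update_error(g, prediction_error=0.1)  # placeholder prediction error
```

Measuring learning progress as the error decrease over a sliding window and rehearsing from a bounded replay buffer are common choices in intrinsic motivation systems; how the memory is filled and overwritten is exactly the plasticity-stability trade-off the abstract refers to.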
Related papers
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Spiking representation learning for associative memories [0.0]
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z) - Visual Episodic Memory-based Exploration [0.6374763930914523]
In humans, intrinsic motivation is an important mechanism for open-ended cognitive development; in robots, it has been shown to be valuable for exploration.
This paper explores the use of visual episodic memory as a source of motivation for robotic exploration problems.
arXiv Detail & Related papers (2024-05-18T13:58:47Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Memory-enriched computation and learning in spiking neural networks
through Hebbian plasticity [9.453554184019108]
Hebbian plasticity is believed to play a pivotal role in biological memory.
We introduce a novel spiking neural network architecture that is enriched by Hebbian synaptic plasticity.
We show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities.
arXiv Detail & Related papers (2022-05-23T12:48:37Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Lifelong 3D Object Recognition and Grasp Synthesis Using Dual Memory
Recurrent Self-Organization Networks [0.0]
Humans learn to recognize and manipulate new objects in lifelong settings without forgetting the previously gained knowledge.
In most conventional deep neural networks, this is not possible due to the problem of catastrophic forgetting.
We propose a hybrid model architecture consisting of a dual-memory recurrent neural network and an autoencoder to tackle object recognition and grasping simultaneously.
arXiv Detail & Related papers (2021-09-23T11:14:13Z) - Backprop-Free Reinforcement Learning with Active Neural Generative
Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)