Tuning Synaptic Connections instead of Weights by Genetic Algorithm in
Spiking Policy Network
- URL: http://arxiv.org/abs/2301.10292v1
- Date: Thu, 29 Dec 2022 12:36:36 GMT
- Title: Tuning Synaptic Connections instead of Weights by Genetic Algorithm in
Spiking Policy Network
- Authors: Duzhen Zhang, Tielin Zhang, Shuncheng Jia, Qingyu Wang, Bo Xu
- Abstract summary: We study the integration of spiking communication between neurons and biologically-plausible synaptic plasticity.
We optimize a spiking policy network (SPN) by a genetic algorithm as an energy-efficient alternative to DRL.
Our method can achieve the performance level of mainstream DRL methods and exhibit significantly higher energy efficiency.
- Score: 5.371345045382104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from interaction is the primary way biological agents come to
know their environment and themselves. Modern deep reinforcement learning (DRL)
explores a computational approach to learning from interaction and has made
significant progress on a variety of tasks. However, powerful as it is, DRL
remains far less energy-efficient than biological agents. Although the underlying
mechanisms are not fully understood, we believe that the integration of spiking
communication between neurons and biologically-plausible synaptic plasticity
plays a prominent role. Following this biological intuition, we optimize a
spiking policy network (SPN) with a genetic algorithm as an energy-efficient
alternative to DRL. Our SPN mimics the sensorimotor neuron pathway of insects
and communicates through event-based spikes. Inspired by biological findings
that the brain forms memories by creating new synaptic connections and rewiring
them in light of new experiences, we tune the synaptic connections
instead of the weights in the SPN to solve given tasks. Experimental results on
several robotic control tasks show that our method matches the performance of
mainstream DRL methods while exhibiting significantly higher energy efficiency.
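The core idea above can be sketched in code: hold the synaptic weights of a small spiking network fixed and let a genetic algorithm evolve only the binary connection masks. This is an illustrative toy, not the authors' implementation; the layer sizes, LIF parameters, toy fitness task, and all GA hyperparameters here are assumptions made for the example.

```python
# Minimal sketch (assumptions throughout): a GA evolves binary synaptic
# connection masks of a tiny spiking policy network; the weights stay fixed.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 4, 16, 2   # assumed layer sizes
T = 20                          # simulation steps per episode
V_TH, DECAY = 1.0, 0.5          # assumed LIF threshold and leak factor

# Fixed random weights: the GA tunes only which connections exist.
W1 = rng.normal(0, 1.0, (N_IN, N_HID))
W2 = rng.normal(0, 1.0, (N_HID, N_OUT))
GENOME_LEN = N_IN * N_HID + N_HID * N_OUT

def run_spn(mask1, mask2, x):
    """Simulate the spiking policy network; return output spike counts."""
    v_h, v_o = np.zeros(N_HID), np.zeros(N_OUT)
    out_spikes = np.zeros(N_OUT)
    for _ in range(T):
        s_in = (rng.random(N_IN) < x).astype(float)   # rate-coded input spikes
        v_h = DECAY * v_h + s_in @ (W1 * mask1)
        s_h = (v_h >= V_TH).astype(float)
        v_h = np.where(s_h > 0, 0.0, v_h)             # reset after spike
        v_o = DECAY * v_o + s_h @ (W2 * mask2)
        s_o = (v_o >= V_TH).astype(float)
        v_o = np.where(s_o > 0, 0.0, v_o)
        out_spikes += s_o
    return out_spikes

def fitness(genome):
    """Toy task (assumption): the target neuron should out-fire the other."""
    mask1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
    mask2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
    score = 0.0
    for x, target in [(np.array([0.9, 0.1, 0.1, 0.1]), 0),
                      (np.array([0.1, 0.9, 0.9, 0.9]), 1)]:
        counts = run_spn(mask1, mask2, x)
        score += counts[target] - counts[1 - target]
    return score

# Genetic algorithm over binary genomes: truncation selection,
# one-point crossover, and bit-flip mutation.
pop = rng.integers(0, 2, (20, GENOME_LEN)).astype(float)
for gen in range(30):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the top half
    children = []
    for _ in range(10):
        p1, p2 = elite[rng.integers(0, 10, 2)]
        cut = rng.integers(1, GENOME_LEN)
        child = np.concatenate([p1[:cut], p2[cut:]])  # one-point crossover
        flip = rng.random(GENOME_LEN) < 0.02          # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

Because the genome is binary, mutation is a bit flip rather than a Gaussian perturbation of weights; this mirrors the paper's framing of rewiring connections rather than retuning weights.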
Related papers
- Biologically-Plausible Topology Improved Spiking Actor Network for Efficient Deep Reinforcement Learning [15.143466733327566]
Recent advances in neuroscience have unveiled that the human brain achieves efficient reward-based learning.
The success of Deep Reinforcement Learning (DRL) is largely attributed to utilizing Artificial Neural Networks (ANNs) as function approximators.
We propose a novel alternative for function approximator, the Biologically-Plausible Topology improved Spiking Actor Network (BPT-SAN)
arXiv Detail & Related papers (2024-03-29T13:25:19Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- A Spiking Binary Neuron -- Detector of Causal Links [0.0]
Causal relationship recognition is a fundamental operation in neural networks aimed at learning behavior, action planning, and inferring external world dynamics.
This research paper presents a novel approach to realize causal relationship recognition using a simple spiking binary neuron.
arXiv Detail & Related papers (2023-09-15T15:34:17Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP)
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks [4.736525128377909]
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that enable energy-efficient implementation on neuromorphic hardware.
We propose a robust and effective learning mechanism by modeling the associative plasticity between synapses.
Our approaches achieve superior performance on static and state-of-the-art neuromorphic datasets.
arXiv Detail & Related papers (2022-07-24T06:12:23Z)
- Deep Reinforcement Learning with Spiking Q-learning [51.386945803485084]
Spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption.
Combining SNNs with deep reinforcement learning (RL) provides a promising energy-efficient approach to realistic control tasks.
arXiv Detail & Related papers (2022-01-21T16:42:11Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Embodied Synaptic Plasticity with Online Reinforcement learning [5.6006805285925445]
This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields.
We demonstrate this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks.
arXiv Detail & Related papers (2020-03-03T10:29:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.