A Novel Feature Learning-based Bio-inspired Neural Network for Real-time
Collision-free Rescue of Multi-Robot Systems
- URL: http://arxiv.org/abs/2403.08238v1
- Date: Wed, 13 Mar 2024 04:43:10 GMT
- Title: A Novel Feature Learning-based Bio-inspired Neural Network for Real-time
Collision-free Rescue of Multi-Robot Systems
- Authors: Junfei Li, Simon X. Yang
- Abstract summary: A bio-inspired neural network is proposed to generate a rescue path in complex and dynamic environments.
The proposed FLBBINN aims to reduce the computational complexity of the neural network-based approach.
The results show that the proposed FLBBINN significantly improves the speed, efficiency, and optimality of rescue operations.
- Score: 5.478000072204037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural disasters and urban accidents drive the demand for rescue robots to
provide safer, faster, and more efficient rescue trajectories. In this paper, a
feature learning-based bio-inspired neural network (FLBBINN) is proposed to
quickly generate a heuristic rescue path in complex and dynamic environments,
as traditional approaches usually cannot respond satisfactorily in real time to
sudden environmental changes. The neurodynamic model is incorporated into a
feature learning method that uses environmental information to improve path
planning strategies. Task assignments and collision-free rescue trajectories
are generated from robot poses and the dynamic landscape of neural activity. A
dual-channel scale filter, a neural
activity channel, and a secondary distance fusion are employed to extract and
filter feature neurons. After completion of the feature learning process, a
neurodynamics-based feature matrix is established to quickly generate new
heuristic rescue paths with parameter-driven topological adaptability. The
proposed FLBBINN aims to reduce the computational complexity of the neural
network-based approach and enable the feature learning method to achieve
real-time responses to environmental changes. Several simulations and
experiments have been conducted to evaluate the performance of the proposed
FLBBINN. The results show that the proposed FLBBINN significantly improves the
speed, efficiency, and optimality of rescue operations.
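
The neurodynamics behind this family of bio-inspired planners can be illustrated with a short sketch. The snippet below is a minimal, illustrative Python/NumPy implementation of a classic shunting-equation activity landscape on a 2-D grid, followed by a greedy climb along the activity gradient to extract a collision-free path. It is not the paper's FLBBINN feature-learning pipeline; the grid, the gains (A, B, D, E, MU), the Euler step, and the helper names (activity_landscape, extract_path) are assumptions chosen only for illustration.

```python
import numpy as np

# Minimal sketch of a shunting-equation bio-inspired neural network for grid
# path planning. All constants below are illustrative assumptions, not values
# taken from the FLBBINN paper.
A, B, D, E, MU = 10.0, 1.0, 1.0, 100.0, 0.7   # decay, upper/lower bounds, I/O strength, lateral gain
DT, STEPS = 0.01, 500                          # Euler step size and number of iterations

def neighbors(r, c, shape):
    """8-connected neighbours of a grid cell and their Euclidean distances."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < shape[0] and 0 <= nc < shape[1]:
                yield nr, nc, np.hypot(dr, dc)

def activity_landscape(grid, target):
    """Evolve the shunting equation until the neural activity landscape settles.

    grid:   2-D array, 1 marks an obstacle cell, 0 marks free space.
    target: (row, col) of the rescue target (large excitatory external input).
    """
    x = np.zeros(grid.shape)
    I = np.where(grid == 1, -E, 0.0)           # obstacles: strong inhibitory input
    I[target] = E                              # target: strong excitatory input
    for _ in range(STEPS):
        lateral = np.zeros_like(x)
        for r in range(grid.shape[0]):
            for c in range(grid.shape[1]):
                lateral[r, c] = sum((MU / d) * max(x[nr, nc], 0.0)
                                    for nr, nc, d in neighbors(r, c, grid.shape))
        excite = np.maximum(I, 0.0) + lateral
        inhibit = np.maximum(-I, 0.0)
        # Shunting equation: dx/dt = -A*x + (B - x)*excitation - (D + x)*inhibition
        x += DT * (-A * x + (B - x) * excite - (D + x) * inhibit)
        x = np.clip(x, -D, B)                  # activity stays bounded in [-D, B]
    return x

def extract_path(x, start, max_len=200):
    """Follow the activity gradient from the robot pose toward the target."""
    path, pos = [start], start
    for _ in range(max_len):
        cands = [(x[nr, nc], (nr, nc)) for nr, nc, _ in neighbors(*pos, x.shape)]
        best_val, best_pos = max(cands)
        if best_val <= x[pos]:                 # no higher activity nearby: target reached
            break
        pos = best_pos
        path.append(pos)
    return path

if __name__ == "__main__":
    grid = np.zeros((10, 10))
    grid[3:7, 5] = 1                           # a wall the path must steer around
    x = activity_landscape(grid, target=(8, 8))
    print(extract_path(x, start=(1, 1)))
```

In this reading, the feature-learning stage described in the abstract (dual-channel scale filter, neural activity channel, secondary distance fusion, and the neurodynamics-based feature matrix) would serve to avoid re-evolving such a landscape from scratch whenever the environment changes, which is where the claimed real-time speed-up comes from.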
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Generative flow induced neural architecture search: Towards discovering optimal architecture in wavelet neural operator [0.8192907805418583]
We propose a generative flow-induced neural architecture search algorithm.
The proposed framework generates the most probable sequence based on the positive reward from the terminal state.
arXiv Detail & Related papers (2024-05-11T04:38:07Z)
- Back-stepping Experience Replay with Application to Model-free Reinforcement Learning for a Soft Snake Robot [15.005962159112002]
Back-stepping Experience Replay (BER) is compatible with arbitrary off-policy reinforcement learning algorithms.
We present an application of BER in a model-free RL approach for the locomotion and navigation of a soft snake robot.
arXiv Detail & Related papers (2024-01-21T02:17:16Z)
- Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks [0.0]
This research pioneers the integration of gradient sampling optimization techniques, particularly StochGradAdam, into the pruning of neural networks.
Our main objective is to address the significant challenge of maintaining accuracy in pruned neural models, critical in resource-constrained scenarios.
arXiv Detail & Related papers (2023-12-26T12:19:22Z)
- Evolutionary algorithms as an alternative to backpropagation for supervised training of Biophysical Neural Networks and Neural ODEs [12.357635939839696]
We investigate the use of "gradient-estimating" evolutionary algorithms for training biophysically based neural networks.
We find that EAs have several advantages that make them preferable to direct BP.
Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP methods.
arXiv Detail & Related papers (2023-11-17T20:59:57Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Learning to Modulate Random Weights: Neuromodulation-inspired Neural Networks For Efficient Continual Learning [1.9580473532948401]
We introduce a novel neural network architecture inspired by neuromodulation in biological nervous systems.
We show that this approach has strong learning performance per task despite the very small number of learnable parameters.
arXiv Detail & Related papers (2022-04-08T21:12:13Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.