Enhancing Energy Efficiency and Reliability in Autonomous Systems
Estimation using Neuromorphic Approach
- URL: http://arxiv.org/abs/2307.07963v1
- Date: Sun, 16 Jul 2023 06:47:54 GMT
- Title: Enhancing Energy Efficiency and Reliability in Autonomous Systems
Estimation using Neuromorphic Approach
- Authors: Reza Ahmadvand, Sarah Safura Sharif, Yaser Mike Banad
- Abstract summary: This study introduces an estimation framework based on spike coding theories and spiking neural networks (SNN).
We propose an SNN-based Kalman filter (KF), a fundamental and widely adopted optimal estimator for well-defined linear systems.
Based on the modified sliding innovation filter (MSIF), we present a robust strategy called SNN-MSIF.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Energy efficiency and reliability have long been crucial factors for ensuring
cost-effective and safe missions in the computers of autonomous systems. With the
rapid evolution of industries such as space robotics and advanced air mobility,
the demand for these low size, weight, and power (SWaP) computers has grown
significantly. This study introduces an estimation framework based
on spike coding theories and spiking neural networks (SNN), leveraging the
efficiency and scalability of neuromorphic computers. To this end, we propose an
SNN-based Kalman filter (KF), a fundamental and widely adopted optimal estimator
for well-defined linear systems. Furthermore, building on the modified sliding
innovation filter (MSIF), we present a robust strategy called SNN-MSIF. Notably,
the weight matrices of the networks are designed according to the system model,
eliminating the need for learning. To evaluate the effectiveness of the
proposed strategies, we compare them to their algorithmic counterparts, namely
the KF and the MSIF, using Monte Carlo simulations. Additionally, we assess the
robustness of SNN-MSIF by comparing it to SNN-KF in the presence of modeling
uncertainties and neuron loss. Our results demonstrate the applicability of the
proposed methods and highlight the superior performance of SNN-MSIF in terms of
accuracy and robustness. Furthermore, the spiking patterns observed in the
networks attest to the energy efficiency of the proposed methods: the networks
emitted approximately 97 percent fewer spikes than the maximum possible number
of spikes.
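As context for the claim that the network weights follow directly from the system model, the sketch below shows one common spike-coding construction (a Boerlin-Deneve-style balanced network) in which the feedforward and recurrent weight matrices of a spiking network tracking a linear system x_dot = A x are fixed analytic functions of A and a chosen decoding matrix D, with no training. Whether the paper uses exactly this mapping is an assumption made here for illustration; the matrix names, the leak rate lam, and the decoder D are placeholders.

import numpy as np

def spike_coding_weights(A, D, lam=0.1):
    """Analytic weight design for a spike-coding network tracking x_dot = A x.

    A   : (n, n) system matrix of the linear model
    D   : (n, N) decoding matrix, so the state readout is x_hat = D @ r
    lam : assumed leak rate of the neurons' filtered spike trains

    Returns feedforward, fast-recurrent, and slow-recurrent weight matrices.
    """
    n = A.shape[0]
    W_ff = D.T                                   # feedforward (input) weights
    W_fast = D.T @ D                             # fast recurrence: instantaneous error correction
    W_slow = D.T @ (A + lam * np.eye(n)) @ D     # slow recurrence: embeds the system dynamics
    return W_ff, W_fast, W_slow

# Example: a 2-state system decoded by N = 20 neurons with random decoding weights.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])           # toy double-integrator dynamics
D = 0.1 * rng.standard_normal((2, 20))
W_ff, W_fast, W_slow = spike_coding_weights(A, D)
print(W_fast.shape, W_slow.shape)                # (20, 20) (20, 20)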
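For reference, the algorithmic counterparts used in the comparison are sketched below in their classical, non-spiking form: a standard discrete-time Kalman filter and a sliding-innovation-style update with a saturated innovation gain, evaluated in a small Monte Carlo loop. The saturated-gain step is a generic stand-in for the MSIF rather than the authors' exact formulation, and the toy system matrices, boundary-layer width delta, and Monte Carlo settings are illustrative assumptions.

import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the standard discrete-time Kalman filter."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    e = z - C @ x_pred                           # innovation
    S = C @ P_pred @ C.T + R                     # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ e
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

def sif_style_step(x, P, z, A, C, Q, R, delta):
    """Sliding-innovation-style update: gain built from the saturated innovation.

    delta is the sliding boundary-layer width (an assumed tuning parameter).
    """
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    e = z - C @ x_pred
    sat = np.clip(np.abs(e) / delta, 0.0, 1.0)   # element-wise saturation
    K = np.linalg.pinv(C) @ np.diag(sat)
    x_new = x_pred + K @ e
    I_KC = np.eye(len(x)) - K @ C
    P_new = I_KC @ P_pred @ I_KC.T + K @ R @ K.T
    return x_new, P_new

# Small Monte Carlo comparison on a toy 2-state system with full-state measurements.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])           # constant-velocity model, dt = 0.1
C = np.eye(2)
Q = 1e-4 * np.eye(2)
R = 1e-2 * np.eye(2)
rmse_kf, rmse_sif = [], []
for _ in range(100):                             # Monte Carlo runs
    x_true = np.array([0.0, 1.0])
    x_kf, P_kf = np.zeros(2), np.eye(2)
    x_sif, P_sif = np.zeros(2), np.eye(2)
    se_kf, se_sif = [], []
    for _ in range(200):                         # time steps
        x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
        z = C @ x_true + rng.multivariate_normal(np.zeros(2), R)
        x_kf, P_kf = kalman_step(x_kf, P_kf, z, A, C, Q, R)
        x_sif, P_sif = sif_style_step(x_sif, P_sif, z, A, C, Q, R, delta=0.5)
        se_kf.append(np.sum((x_kf - x_true) ** 2))
        se_sif.append(np.sum((x_sif - x_true) ** 2))
    rmse_kf.append(np.sqrt(np.mean(se_kf)))
    rmse_sif.append(np.sqrt(np.mean(se_sif)))
print(f"KF        mean RMSE: {np.mean(rmse_kf):.4f}")
print(f"SIF-style mean RMSE: {np.mean(rmse_sif):.4f}")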
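The roughly 97 percent figure above compares the number of spikes the networks actually emitted with the number they could have emitted, taken here as at most one spike per neuron per time step. A minimal sketch of that bookkeeping, assuming a binary spike raster of shape (neurons, time steps), is shown below; the raster used in the example is synthetic.

import numpy as np

def spike_reduction(spike_raster):
    """Percentage reduction of emitted spikes relative to possible spikes.

    spike_raster: binary array of shape (num_neurons, num_steps), where entry
    (i, k) is 1 if neuron i fired at step k.
    """
    emitted = spike_raster.sum()
    possible = spike_raster.size
    return 100.0 * (1.0 - emitted / possible)

# A raster in which about 3% of the entries are spikes gives a ~97% reduction,
# consistent with the figure quoted in the abstract.
raster = (np.random.default_rng(0).random((100, 500)) < 0.03).astype(int)
print(f"Spike reduction: {spike_reduction(raster):.1f}%")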
Related papers
- Efficient and Trustworthy Block Propagation for Blockchain-enabled Mobile Embodied AI Networks: A Graph Resfusion Approach [60.80257080226662]
We propose a graph Resfusion model-based trustworthy block propagation optimization framework for consortium blockchain-enabled MEANETs.
Specifically, we propose an innovative trust calculation mechanism based on the trust cloud model.
By leveraging the strengths of graph neural networks and diffusion models, we develop a graph Resfusion model to effectively and adaptively generate the optimal block propagation trajectory.
arXiv Detail & Related papers (2025-01-26T07:47:05Z)
- Combining Aggregated Attention and Transformer Architecture for Accurate and Efficient Performance of Spiking Neural Networks [44.145870290310356]
Spiking Neural Networks have attracted significant attention in recent years due to their distinctive low-power characteristics.
Transformer models, known for their powerful self-attention mechanisms and parallel processing capabilities, have demonstrated exceptional performance across various domains.
Despite the significant advantages of both SNNs and Transformers, directly combining the low-power benefits of SNNs with the high performance of Transformers remains challenging.
arXiv Detail & Related papers (2024-12-18T07:07:38Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve an almost five-fold improvement in power efficiency with a moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing [53.77822620185878]
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs.
We develop "BayesMulti", a training strategy utilizing BO-guided noise injection to improve the resistance of analog DNNs to memristor imperfections.
Our integrated approach enables the use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
arXiv Detail & Related papers (2024-12-03T19:20:08Z)
- FL-QDSNNs: Federated Learning with Quantum Dynamic Spiking Neural Networks [4.635820333232683]
This paper introduces the Federated Learning-Quantum Dynamic Spiking Neural Networks (FL-QDSNNs) framework.
Central to our framework is a novel dynamic threshold mechanism for activating quantum gates in Quantum Spiking Neural Networks (QSNNs).
Our FL-QDSNNs framework demonstrates superior accuracy, reaching up to 94% on the Iris dataset, and markedly outperforms existing Quantum Federated Learning (QFL) approaches.
arXiv Detail & Related papers (2024-12-03T09:08:33Z)
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner.
We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials.
We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
arXiv Detail & Related papers (2024-06-19T16:23:26Z)
- Enhancing Adversarial Robustness in SNNs with Sparse Gradients [46.15229142258264]
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures.
Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks.
We propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization.
arXiv Detail & Related papers (2024-05-30T05:39:27Z)
- Neuromorphic Robust Estimation of Nonlinear Dynamical Systems Applied to Satellite Rendezvous [0.0]
This study introduces a neuromorphic approach for robust filtering of nonlinear dynamical systems: SNN-EMSIF.
SNN-EMSIF combines the computational efficiency and scalability of SNNs with the robustness of EMSIF.
Results demonstrate the superior accuracy and robustness of SNN-EMSIF.
arXiv Detail & Related papers (2024-05-14T07:43:10Z)
- Understanding the Functional Roles of Modelling Components in Spiking Neural Networks [9.448298335007465]
Spiking neural networks (SNNs) are promising in achieving high computational efficiency with biological fidelity.
We investigate the functional roles of key modelling components, namely leakage, reset, and recurrence, in leaky integrate-and-fire (LIF)-based SNNs.
Specifically, we find that the leakage plays a crucial role in balancing memory retention and robustness, the reset mechanism is essential for uninterrupted temporal processing and computational efficiency, and the recurrence enriches the capability to model complex dynamics at a cost of robustness degradation.
arXiv Detail & Related papers (2024-03-25T12:13:20Z)
- Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Networks [1.632439547798896]
Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to traditional Artificial Neural Networks (ANNs).
This paper focuses on addressing the dual objectives of enhancing the performance and efficiency of SNNs through the established SNN conversion framework.
arXiv Detail & Related papers (2023-11-24T03:43:59Z)
- Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks [0.0]
This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs).
The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks.
Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency.
arXiv Detail & Related papers (2023-09-15T08:02:29Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100× as compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- Understanding Self-attention Mechanism via Dynamical System Perspective [58.024376086269015]
Self-attention mechanism (SAM) is widely used in various fields of artificial intelligence.
We show that the intrinsic stiffness phenomenon (SP) in the high-precision solution of ordinary differential equations (ODEs) also widely exists in high-performance neural networks (NNs).
We show that the SAM is also a stiffness-aware step size adaptor that can enhance the model's representational ability to measure intrinsic SP.
arXiv Detail & Related papers (2023-08-19T08:17:41Z)
- SymNMF-Net for The Symmetric NMF Problem [62.44067422984995]
We propose a neural network called SymNMF-Net for the Symmetric NMF problem.
We show that the inference of each block corresponds to a single iteration of the optimization.
Empirical results on real-world datasets demonstrate the superiority of our SymNMF-Net.
arXiv Detail & Related papers (2022-05-26T08:17:39Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.