Exploring the Potential of Spiking Neural Networks in UWB Channel Estimation
- URL: http://arxiv.org/abs/2512.23975v1
- Date: Tue, 30 Dec 2025 04:10:18 GMT
- Title: Exploring the Potential of Spiking Neural Networks in UWB Channel Estimation
- Authors: Youdong Zhang, Xu He, Xiaolin Meng
- Abstract summary: This letter explores the potential of Spiking Neural Networks (SNNs) for channel estimation. We develop a fully unsupervised SNN solution that attains 80% test accuracy. Compared with complex deep learning methods, our SNN implementation is inherently suited to neuromorphic deployment.
- Score: 7.52520480528178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although existing deep learning-based Ultra-Wide Band (UWB) channel estimation methods achieve high accuracy, their computational intensity clashes sharply with the resource constraints of low-cost edge devices. Motivated by this gap, this letter explores the potential of Spiking Neural Networks (SNNs) for this task and develops a fully unsupervised SNN solution. To enable a comprehensive performance analysis, we devise an extensive set of comparative strategies and evaluate them on a compelling public benchmark. Experimental results show that our unsupervised approach still attains 80% test accuracy, on par with several supervised deep learning-based strategies. Moreover, compared with complex deep learning methods, our SNN implementation is inherently suited to neuromorphic deployment and offers a drastic reduction in model complexity, bringing significant advantages for future neuromorphic practice.
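For context on why SNNs suit low-power edge deployment: a spiking layer communicates with sparse binary events rather than dense floating-point activations. Below is a minimal leaky integrate-and-fire (LIF) neuron sketch in plain NumPy; it is a generic illustration of the spiking mechanism, not the paper's architecture, and all names and constants (`beta`, `threshold`) are illustrative assumptions.

```python
import numpy as np

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over T time steps.

    inputs: array of shape (T,) holding the weighted input current
    per step. Returns the binary spike train of shape (T,).
    """
    membrane = 0.0
    spikes = np.zeros_like(inputs)
    for t, current in enumerate(inputs):
        # Leaky integration: decay the membrane, then add input current.
        membrane = beta * membrane + current
        if membrane >= threshold:
            spikes[t] = 1.0        # emit a binary spike ...
            membrane -= threshold  # ... and apply a soft reset
    return spikes

# Example: a constant input current yields a regular spike train whose
# rate grows with the input magnitude (rate coding).
print(lif_forward(np.full(20, 0.3)))
```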
Related papers
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We explore intersections between sparse coding and deep learning to enhance our understanding of feature extraction capabilities. We derive convergence rates for convolutional neural networks (CNNs) in their ability to extract sparse features. Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- A simple algorithm for output range analysis for deep neural networks [0.0]
This paper presents a novel approach to the output range estimation problem in Deep Neural Networks (DNNs) by integrating a Simulated Annealing (SA) algorithm. The method effectively addresses the challenges posed by the lack of geometric information and the non-linearity inherent in ResNets; a rough sketch of the idea appears below.
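As an illustration of the idea (not the authors' exact algorithm), simulated annealing can search the input domain for the maximum of a network's output while treating the DNN as a black box. The toy network, cooling schedule, and all hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained DNN: a fixed random 2-layer ReLU network.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def net(x):
    return float(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)

def sa_max(lo, hi, steps=5000, temp0=1.0):
    """Estimate the max of net(x) over the box [lo, hi]^2 via simulated annealing."""
    x = rng.uniform(lo, hi, size=2)
    v = best_v = net(x)
    best_x = x
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-9          # linear cooling schedule
        cand = np.clip(x + rng.normal(scale=0.1, size=2), lo, hi)
        cv = net(cand)
        # Accept uphill moves always, downhill moves with Boltzmann probability.
        if cv > v or rng.random() < np.exp((cv - v) / temp):
            x, v = cand, cv
            if v > best_v:
                best_x, best_v = x, v
    return best_x, best_v

print(sa_max(-1.0, 1.0))  # estimated upper end of the output range
```

Running the same search on -net(x) would estimate the lower end, giving an (approximate, not certified) output range.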
arXiv Detail & Related papers (2024-07-02T22:47:40Z)
- Towards Chip-in-the-loop Spiking Neural Network Training via Metropolis-Hastings Sampling [0.9025833922570009]
This paper studies the use of Metropolis-Hastings sampling for training Spiking Neural Network (SNN) hardware subject to strong unknown non-idealities. The results show that the proposed approach outperforms backpropagation by up to 27% in accuracy under strong hardware non-idealities; a schematic sketch of the sampling loop follows.
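A schematic of how Metropolis-Hastings can train hardware whose forward pass is a black box (so backprop is unavailable): propose a weight perturbation, measure the loss on-chip, and accept or reject stochastically. Everything below (the surrogate `measure_loss`, step sizes, temperature) is a hypothetical sketch, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_loss(w, X, y):
    """Stand-in for an on-chip loss measurement; here a noisy
    logistic-regression loss plays the role of the SNN hardware."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss + rng.normal(scale=0.01)  # hardware read-out noise

X = rng.normal(size=(200, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)

w = np.zeros(5)
loss = measure_loss(w, X, y)
temp = 0.05
for _ in range(3000):
    w_prop = w + rng.normal(scale=0.05, size=5)   # random-walk proposal
    loss_prop = measure_loss(w_prop, X, y)
    # Metropolis-Hastings acceptance targeting exp(-loss / temp):
    # lower-loss proposals are always kept, worse ones occasionally.
    if rng.random() < np.exp((loss - loss_prop) / temp):
        w, loss = w_prop, loss_prop
print("final measured loss:", loss)
```

Because acceptance depends only on measured losses, the loop needs no gradients and absorbs device non-idealities into the measurement itself.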
arXiv Detail & Related papers (2024-02-09T09:49:05Z)
- Direct Learning-Based Deep Spiking Neural Networks: A Review [17.255056657521195]
The spiking neural network (SNN) is a promising brain-inspired computational model with a binary spike-based information transmission mechanism. This paper presents a survey of direct learning-based deep SNN works, categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods.
arXiv Detail & Related papers (2023-05-31T10:32:16Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks. These SNNs are built on homogeneous neurons that use a uniform neural coding for information representation. This study argues that SNN architectures should instead be holistically designed to incorporate heterogeneous coding schemes; the two classic schemes such a design would mix are sketched below.
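To make the "heterogeneous coding" idea concrete, here is a hedged sketch of the two textbook schemes: rate coding (value maps to spike count) and latency coding (value maps to time of first spike). The encoders below are standard illustrations, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

def rate_encode(value, T=50):
    """Rate coding: emit a Bernoulli spike each step with probability = value."""
    return (rng.random(T) < value).astype(np.int8)

def latency_encode(value, T=50):
    """Latency coding: stronger inputs spike earlier (single spike)."""
    train = np.zeros(T, dtype=np.int8)
    t = int(round((1.0 - value) * (T - 1)))  # value=1 -> step 0, value=0 -> step T-1
    train[t] = 1
    return train

x = 0.8
print("rate-coded:", rate_encode(x).sum(), "spikes in 50 steps")
print("latency-coded: first spike at step", latency_encode(x).argmax())
```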
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Recent work has shown that floating-point neural networks verified to be robust can become vulnerable to adversarial attacks after quantization. We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; the interval arithmetic at the core of IBP is sketched below.
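The interval arithmetic behind plain IBP (which QA-IBP builds on) is compact: for an affine layer, split the weights by sign to propagate a lower/upper box through the network. This is a generic float IBP sketch assuming no quantization, not the QA-IBP algorithm itself.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W x + b.

    Positive weights map lower bounds to lower bounds; negative
    weights swap them, which the sign split accounts for."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Bound a toy 2-layer network's outputs over an eps-ball around x.
rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x, eps = rng.normal(size=3), 0.1
lo, hi = ibp_affine(x - eps, x + eps, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_affine(lo, hi, W2, b2)
print("certified output box:", lo, hi)
```

If the box for the true class stays above the boxes of all other classes, no perturbation in the eps-ball can flip the prediction.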
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- On Feature Learning in Neural Networks with Global Convergence Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF). We show that when the input dimension is at least the size of the training set, the training loss converges to zero at a linear rate under GF. We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization than its NTK counterpart. A toy discretization of GF follows.
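Gradient flow is the continuous-time limit of gradient descent, d(theta)/dt = -grad L(theta); a small-step Euler discretization makes the linear (geometric) convergence easy to observe on a toy over-parameterized least-squares problem. The setup below mirrors the paper's input-dimension-at-least-sample-size regime but is otherwise an illustrative assumption, not its wide-network model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Over-parameterized least squares: input dim (5) >= #samples (3),
# so the loss can be driven to zero, echoing the GF regime above.
X = rng.normal(size=(3, 5))
y = rng.normal(size=3)
theta = np.zeros(5)

dt = 0.01  # Euler step approximating d(theta)/dt = -grad L(theta)
for step in range(2001):
    resid = X @ theta - y
    grad = X.T @ resid / len(y)
    theta -= dt * grad
    if step % 500 == 0:
        print(f"step {step}: loss = {0.5 * np.mean(resid**2):.2e}")
```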
arXiv Detail & Related papers (2022-04-22T15:56:43Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs). INNs are a class of implicit learning models that use implicit equations as layers. We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency. We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition; the standard rate-coding intuition behind such conversion is sketched below.
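The standard intuition behind ANN-to-SNN conversion: an integrate-and-fire neuron's firing rate over T steps approximates a clipped ReLU of its input, so ReLU activations can be mapped onto spike rates. Below is a generic rate-conversion check, not the progressive tandem learning scheme itself.

```python
import numpy as np

def if_rate(current, T=200, threshold=1.0):
    """Spike rate of an integrate-and-fire neuron under a constant input
    current; approaches clip(current, 0, threshold) / threshold."""
    membrane, spikes = 0.0, 0
    for _ in range(T):
        membrane += current
        if membrane >= threshold:
            spikes += 1
            membrane -= threshold  # reset-by-subtraction preserves rate info
    return spikes / T

for a in [-0.2, 0.1, 0.4, 0.8]:
    print(f"ReLU({a}) = {max(a, 0.0):.2f}  vs  IF rate = {if_rate(a):.2f}")
```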
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation. This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems. One reading of the titular PSP function is sketched below.
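One way to read the title's "rectified linear postsynaptic potential": each input spike at time t_i contributes a PSP that is zero before the spike and grows linearly afterward, K(t - t_i) = max(t - t_i, 0), making the membrane potential piecewise linear in spike times and hence amenable to backpropagation. The sketch below is our hedged reading, with illustrative weights and spike times, not the paper's exact formulation.

```python
import numpy as np

def rel_psp(t, t_spike):
    """Rectified-linear PSP kernel: zero before the input spike,
    then growing linearly with elapsed time."""
    return np.maximum(t - t_spike, 0.0)

def membrane(t, spike_times, weights):
    """Membrane potential as a weighted sum of ReL-PSP kernels.
    Piecewise linear in t and in the spike times, so gradients with
    respect to spike timing are well defined almost everywhere."""
    return sum(w * rel_psp(t, ts) for w, ts in zip(weights, spike_times))

ts = np.linspace(0.0, 5.0, 6)
print([round(float(membrane(t, [1.0, 2.0], [0.7, 0.5])), 2) for t in ts])
```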
arXiv Detail & Related papers (2020-03-26T11:13:07Z)