Spiking Neural Networks for Low-Power Vibration-Based Predictive Maintenance
- URL: http://arxiv.org/abs/2506.13416v1
- Date: Mon, 16 Jun 2025 12:33:09 GMT
- Title: Spiking Neural Networks for Low-Power Vibration-Based Predictive Maintenance
- Authors: Alexandru Vasilache, Sven Nitzsche, Christian Kneidl, Mikael Tekneyan, Moritz Neher, Juergen Becker
- Abstract summary: Spiking Neural Networks (SNNs) offer a promising pathway toward energy-efficient on-device processing. This paper investigates a recurrent SNN for simultaneous regression (flow, pressure, pump speed) and multi-label classification (normal, overpressure, cavitation) for an industrial progressing cavity pump. Results demonstrate high classification accuracy (>97%) with zero False Negative Rates for critical Overpressure and Cavitation faults.
- Score: 38.09509006669997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advancements in Industrial Internet of Things (IIoT) sensors enable sophisticated Predictive Maintenance (PM) with high temporal resolution. For cost-efficient solutions, vibration-based condition monitoring is especially of interest. However, analyzing high-resolution vibration data via traditional cloud approaches incurs significant energy and communication costs, hindering battery-powered edge deployments. This necessitates shifting intelligence to the sensor edge. Due to their event-driven nature, Spiking Neural Networks (SNNs) offer a promising pathway toward energy-efficient on-device processing. This paper investigates a recurrent SNN for simultaneous regression (flow, pressure, pump speed) and multi-label classification (normal, overpressure, cavitation) for an industrial progressing cavity pump (PCP) using 3-axis vibration data. Furthermore, we provide energy consumption estimates comparing the SNN approach on conventional (x86, ARM) and neuromorphic (Loihi) hardware platforms. Results demonstrate high classification accuracy (>97%) with zero False Negative Rates for critical Overpressure and Cavitation faults. Smoothed regression outputs achieve Mean Relative Percentage Errors below 1% for flow and pump speed, approaching industrial sensor standards, although pressure prediction requires further refinement. Energy estimates indicate significant power savings, with the Loihi consumption (0.0032 J/inf) being up to 3 orders of magnitude less compared to the estimated x86 CPU (11.3 J/inf) and ARM CPU (1.18 J/inf) execution. Our findings underscore the potential of SNNs for multi-task PM directly on resource-constrained edge devices, enabling scalable and energy-efficient industrial monitoring solutions.
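As a rough illustration of the kind of model the abstract describes, the sketch below implements a minimal recurrent leaky integrate-and-fire (LIF) network in plain PyTorch with a shared recurrent spiking body and two heads: one for the three regression targets (flow, pressure, pump speed) and one for the three multi-label classes (normal, overpressure, cavitation). Layer sizes, the membrane decay, the surrogate gradient, and the spike-rate readout are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): a minimal recurrent LIF network in plain
# PyTorch that maps windows of 3-axis vibration data to three regression targets
# and three multi-label fault outputs. All hyperparameters are assumptions.
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2


class RecurrentLIF(nn.Module):
    """Leaky integrate-and-fire layer with recurrent spike feedback."""
    def __init__(self, n_in, n_hidden, beta=0.9):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_hidden)
        self.fc_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.beta = beta                              # membrane decay per time step
        self.n_hidden = n_hidden

    def forward(self, x):                             # x: (batch, time, n_in)
        batch, steps, _ = x.shape
        v = x.new_zeros(batch, self.n_hidden)         # membrane potential
        s = x.new_zeros(batch, self.n_hidden)         # spikes from previous step
        spikes = []
        for t in range(steps):
            v = self.beta * v + self.fc_in(x[:, t]) + self.fc_rec(s)
            s = SpikeFn.apply(v - 1.0)                # threshold at 1.0
            v = v - s                                 # soft reset after a spike
            spikes.append(s)
        return torch.stack(spikes, dim=1)             # (batch, time, n_hidden)


class MultiTaskSNN(nn.Module):
    def __init__(self, n_in=3, n_hidden=128):
        super().__init__()
        self.body = RecurrentLIF(n_in, n_hidden)
        self.reg_head = nn.Linear(n_hidden, 3)        # flow, pressure, pump speed
        self.cls_head = nn.Linear(n_hidden, 3)        # normal, overpressure, cavitation

    def forward(self, x):
        rate = self.body(x).mean(dim=1)               # spike-rate readout over the window
        return self.reg_head(rate), torch.sigmoid(self.cls_head(rate))


model = MultiTaskSNN()
window = torch.randn(8, 200, 3)                       # 8 windows, 200 steps, 3 axes
reg, cls = model(window)
print(reg.shape, cls.shape)                           # torch.Size([8, 3]) torch.Size([8, 3])
```

For reference, the reported per-inference energies imply ratios of roughly 11.3 / 0.0032 ≈ 3.5×10³ between the estimated x86 execution and Loihi, and 1.18 / 0.0032 ≈ 3.7×10² between the estimated ARM execution and Loihi.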
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep learning enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain privacy-sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Low-Power Vibration-Based Predictive Maintenance for Industry 4.0 using Neural Networks: A Survey [33.08038317407649]
This paper investigates the potential of neural networks for low-power on-device computation of vibration sensor data for predictive maintenance.
No satisfactory standard benchmark dataset exists for evaluating neural networks in predictive maintenance tasks.
We highlight the need for future research on hardware implementations of neural networks for low-power predictive maintenance applications.
arXiv Detail & Related papers (2024-08-01T12:46:37Z) - sVAD: A Robust, Low-Power, and Light-Weight Voice Activity Detection with Spiking Neural Networks [51.516451451719654]
Spiking Neural Networks (SNNs) are known to be biologically plausible and power-efficient.
This paper introduces a novel SNN-based Voice Activity Detection model, referred to as sVAD.
It provides effective auditory feature representation through SincNet and 1D convolution, and improves noise robustness with attention mechanisms.
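As a small, hedged illustration of the front end this summary mentions (not the sVAD model itself), the snippet below builds a fixed sinc band-pass filter bank and applies it as a 1D convolution, in the spirit of SincNet; cutoff frequencies, kernel length, and the Hamming window are assumptions, and SincNet itself learns the cutoffs.

```python
# Hedged sketch: fixed sinc band-pass filters applied as a 1D convolution.
import torch
import torch.nn.functional as F

def sinc_bandpass_bank(cutoffs_hz, fs=16000, kernel_size=101):
    """Build (n_filters, 1, kernel_size) kernels h[n] = 2*f2*sinc(2*f2*n) - 2*f1*sinc(2*f1*n)."""
    t = torch.arange(kernel_size) - (kernel_size - 1) / 2          # centered time axis (samples)
    window = torch.hamming_window(kernel_size, periodic=False)
    kernels = []
    for f1, f2 in cutoffs_hz:
        f1, f2 = f1 / fs, f2 / fs                                  # normalized cutoffs (cycles/sample)
        h = 2 * f2 * torch.sinc(2 * f2 * t) - 2 * f1 * torch.sinc(2 * f1 * t)
        kernels.append(h * window)
    return torch.stack(kernels).unsqueeze(1)

filters = sinc_bandpass_bank([(100, 400), (400, 1200), (1200, 4000)])
audio = torch.randn(1, 1, 16000)                                   # one second of dummy audio
features = F.conv1d(audio, filters, padding=50)                    # (1, 3, 16000)
print(features.shape)
```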
arXiv Detail & Related papers (2024-03-09T02:55:44Z) - Fast Cell Library Characterization for Design Technology Co-Optimization Based on Graph Neural Networks [0.1752969190744922]
Design technology co-optimization (DTCO) plays a critical role in achieving optimal power, performance, and area.
We propose a graph neural network (GNN)-based machine learning model for rapid and accurate cell library characterization.
arXiv Detail & Related papers (2023-12-20T06:10:27Z) - Evaluating Spiking Neural Network On Neuromorphic Platform For Human Activity Recognition [2.710807780228189]
Energy efficiency and low latency are crucial requirements for wearable AI-empowered human activity recognition systems.
A spike-based workout recognition system can achieve accuracy comparable to a traditional neural network running on GAP8, a popular milliwatt-scale RISC-V-based multi-core processor.
arXiv Detail & Related papers (2023-08-01T18:59:06Z) - NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes [50.00272243518593]
Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains problematically high. We have developed NeuralFuse, a novel add-on module that handles the energy-accuracy tradeoff in low-voltage regimes. At a 1% bit-error rate, NeuralFuse can reduce access energy by up to 24% while recovering accuracy by up to 57%.
arXiv Detail & Related papers (2023-06-29T11:38:22Z) - Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
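As a toy, hedged illustration of exit-aware preemption (not the paper's scheduler), the simulation below stops a running inference at its next early exit as soon as another request is waiting; the exit positions, latencies, and preemption rule are illustrative assumptions.

```python
# Hedged sketch: a running job hands over the accelerator at its next early exit
# when another request is queued. All numbers are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    exit_latencies_ms: List[float]   # cumulative latency to reach each early exit
    reached_exit: int = -1

def run_with_preemption(jobs: List[Job], arrivals_ms: List[float]) -> None:
    """Serve jobs in arrival order; preempt the current job at its next exit
    as soon as another request is waiting."""
    clock = 0.0
    for i, job in enumerate(jobs):
        start = max(clock, arrivals_ms[i])
        for k, latency in enumerate(job.exit_latencies_ms):
            clock = start + latency
            job.reached_exit = k
            next_waiting = i + 1 < len(jobs) and arrivals_ms[i + 1] <= clock
            if next_waiting:
                break                # hand the NPU over at this early exit
        print(f"{job.name}: stopped at exit {job.reached_exit}, t = {clock:.1f} ms")

jobs = [Job("job-A", [5.0, 12.0, 30.0]), Job("job-B", [5.0, 12.0, 30.0])]
run_with_preemption(jobs, arrivals_ms=[0.0, 8.0])
```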
arXiv Detail & Related papers (2022-09-27T15:04:01Z) - FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z) - QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks [3.2242513084255036]
QUANOS is a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS).
Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness.
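As a hedged illustration of layer-specific hybrid quantization (not the QUANOS implementation), the sketch below derives a per-layer sensitivity score from weight gradients under a perturbed input and assigns bit-widths accordingly; the sensitivity proxy and bit-width mapping are illustrative assumptions, not the paper's ANS metric.

```python
# Hedged sketch: per-layer bit-width assignment from a simple sensitivity proxy,
# followed by uniform symmetric weight quantization.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))

# 1. Sensitivity proxy: mean weight-gradient magnitude under a small input perturbation.
loss = nn.functional.cross_entropy(model(x + 0.01 * torch.randn_like(x)), y)
loss.backward()
sens = {i: m.weight.grad.abs().mean().item()
        for i, m in enumerate(model) if isinstance(m, nn.Linear)}

# 2. Map sensitivity to bit-widths: the most sensitive layer keeps 8 bits, others get 4.
most_sensitive = max(sens, key=sens.get)
bits = {i: (8 if i == most_sensitive else 4) for i in sens}

# 3. Uniform symmetric quantization of each layer's weights to its assigned bit-width.
def quantize_(w: torch.Tensor, n_bits: int) -> None:
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax
    w.copy_(torch.round(w / scale).clamp(-qmax, qmax) * scale)

with torch.no_grad():
    for i, n_bits in bits.items():
        quantize_(model[i].weight, n_bits)
print(bits)
```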
arXiv Detail & Related papers (2020-04-22T15:56:31Z) - Adaptive Anomaly Detection for IoT Data in Hierarchical Edge Computing [71.86955275376604]
We propose an adaptive anomaly detection approach for hierarchical edge computing (HEC) systems.
We design an adaptive scheme that selects one of several models for anomaly detection based on contextual information extracted from the input data.
We evaluate our proposed approach using a real IoT dataset, and demonstrate that it reduces detection delay by 84% while maintaining almost the same accuracy as compared to offloading detection tasks to the cloud.
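As a hedged illustration of context-based model selection (not the paper's method), the sketch below routes "easy" windows to a cheap local detector and escalates harder ones to larger models; the context feature (window variance), thresholds, and detector stubs are illustrative assumptions.

```python
# Hedged sketch: adaptive selection among detectors of increasing capacity.
import numpy as np

def detect_small(window):   # e.g., runs on the IoT device
    return float(np.abs(window - window.mean()).max() > 3.0 * window.std())

def detect_medium(window):  # e.g., runs on an edge server
    return float(np.percentile(np.abs(np.diff(window)), 99) > 1.5)

def detect_large(window):   # e.g., runs in the cloud
    return float(window.var() > 2.0)

def adaptive_detect(window: np.ndarray) -> float:
    """Route low-variance windows to the cheap local model; escalate the rest."""
    context = window.var()
    if context < 0.5:
        return detect_small(window)
    elif context < 2.0:
        return detect_medium(window)
    return detect_large(window)

print(adaptive_detect(np.random.randn(256)))
```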
arXiv Detail & Related papers (2020-01-10T05:29:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.