Optimizing Low-Energy Carbon IIoT Systems with Quantum Algorithms: Performance Evaluation and Noise Robustness
- URL: http://arxiv.org/abs/2503.00888v1
- Date: Sun, 02 Mar 2025 13:13:11 GMT
- Title: Optimizing Low-Energy Carbon IIoT Systems with Quantum Algorithms: Performance Evaluation and Noise Robustness
- Authors: Kshitij Dave, Nouhaila Innan, Bikash K. Behera, Shahid Mumtaz, Saif Al-Kuwari, Ahmed Farouk
- Abstract summary: Low-energy carbon Internet of Things (IoT) systems are essential for sustainable development. We introduce three quantum algorithms: quantum neural networks utilizing PennyLane (QNN-P), Qiskit (QNN-Q), and hybrid quantum neural networks (QNN-H). For the RODD dataset, QNN-P achieved the highest accuracy at 0.95, followed by QNN-H at 0.91 and QNN-Q at 0.80. Similarly, for the GPSD dataset, QNN-P attained an accuracy of 0.94, QNN-H 0.87, and QNN-Q 0.74.
- Score: 22.867189884561768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-energy carbon Internet of Things (IoT) systems are essential for sustainable development, as they reduce carbon emissions while ensuring efficient device performance. Although classical algorithms manage energy efficiency and data processing within these systems, they often face scalability and real-time processing limitations. Quantum algorithms offer a solution to these challenges by delivering faster computations and improved optimization, thereby enhancing both the performance and sustainability of low-energy carbon IoT systems. We therefore introduce three quantum algorithms: quantum neural networks utilizing PennyLane (QNN-P), Qiskit (QNN-Q), and hybrid quantum neural networks (QNN-H). These algorithms are applied to two low-energy carbon IoT datasets: room occupancy detection (RODD) and GPS tracker (GPSD). For the RODD dataset, QNN-P achieved the highest accuracy at 0.95, followed by QNN-H at 0.91 and QNN-Q at 0.80. Similarly, for the GPSD dataset, QNN-P attained an accuracy of 0.94, QNN-H 0.87, and QNN-Q 0.74. Furthermore, the robustness of these models is verified against six noise models. The proposed quantum algorithms demonstrate superior computational efficiency and scalability in noisy environments, making them highly suitable for future low-energy carbon IoT systems. These advancements pave the way for more sustainable and efficient IoT infrastructures, significantly minimizing energy consumption while maintaining optimal device performance.
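The listing does not include the paper's circuit definitions, so the following is a minimal sketch of a PennyLane variational classifier in the spirit of QNN-P. The qubit count, angle encoding, StronglyEntanglingLayers ansatz, square loss, and the random stand-in for RODD/GPSD features are all illustrative assumptions, not the authors' architecture.

    # Minimal QNN-P-style sketch: a PennyLane variational quantum classifier.
    # Hyperparameters and data here are placeholders, not the paper's setup.
    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 4  # assumed: one qubit per (reduced) input feature
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(weights, x):
        # Angle-encode classical features, then apply trainable entangling layers.
        qml.AngleEmbedding(x, wires=range(n_qubits))
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))  # score in [-1, 1]

    def cost(weights, X, y):
        # Square loss between circuit outputs and +/-1 class targets.
        preds = np.stack([circuit(weights, x) for x in X])
        return np.mean((preds - (2 * y - 1)) ** 2)

    # Toy training loop on random data standing in for RODD/GPSD features.
    shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
    weights = np.random.uniform(0, np.pi, size=shape, requires_grad=True)
    X = np.random.uniform(0, np.pi, size=(16, n_qubits))
    y = np.random.randint(0, 2, size=16)
    opt = qml.GradientDescentOptimizer(stepsize=0.1)
    for _ in range(20):
        weights = opt.step(lambda w: cost(w, X, y), weights)
    labels = [int(circuit(weights, x) > 0) for x in X]  # thresholded predictions

A hybrid variant in the spirit of QNN-H would wrap such a circuit as a layer inside a classical network (e.g., via qml.qnn.TorchLayer), and the noise-robustness experiments would swap the ideal device for a noisy simulator.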
Related papers
- Efficient optimization of neural network backflow for ab-initio quantum chemistry [0.0]
The ground state of second-quantized quantum chemistry Hamiltonians is key to determining molecular properties. We develop improvements for optimizing these wavefunctions, including compact subspace construction, truncated local energy evaluations, improved sampling, and physics-informed modifications. An ablation study highlights the contribution of each enhancement, showing significant gains in energy accuracy and computational efficiency.
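The summary names truncated local energy evaluations; the sketch below shows the generic idea for a neural-network wavefunction, where only the largest Hamiltonian matrix elements connected to a configuration are kept. H_row and log_psi are hypothetical stand-ins (assumed to return connected configurations with their matrix elements, and batched log-amplitudes, respectively); the paper's actual truncation rule is not given here.

    # Generic truncated local-energy estimator for a neural wavefunction.
    # E_loc(x) = sum_x' H_{x,x'} * psi(x') / psi(x), with small |H_{x,x'}|
    # terms dropped to cut the number of amplitude evaluations.
    import numpy as np

    def local_energy(x, H_row, log_psi, keep=64):
        xs, h = H_row(x)                     # connected configs x' and H_{x,x'}
        idx = np.argsort(-np.abs(h))[:keep]  # keep the largest-magnitude terms
        xs, h = xs[idx], h[idx]
        ratios = np.exp(log_psi(xs) - log_psi(x))  # psi(x') / psi(x)
        return np.sum(h * ratios)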
arXiv Detail & Related papers (2025-02-26T05:31:08Z)
- Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing [53.77822620185878]
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs. We develop "BayesMulti", a training strategy that uses Bayesian-optimization-guided (BO-guided) noise injection to improve the resistance of analog DNNs to memristor imperfections. Our integrated approach enables the use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
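As a rough illustration of the noise-injection idea summarized above, here is a PyTorch sketch; the multiplicative Gaussian noise model and the scale sigma, which Bayesian optimization would tune, are assumptions rather than the paper's exact recipe.

    # Train-time weight noise injection for analog-hardware robustness.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NoisyLinear(nn.Linear):
        """Linear layer whose weights see multiplicative Gaussian noise on
        every training forward pass, mimicking memristor variation."""
        def __init__(self, in_features, out_features, sigma=0.05):
            super().__init__(in_features, out_features)
            self.sigma = sigma  # noise scale; BO-tuned in BayesMulti's setting

        def forward(self, x):
            if self.training:
                noise = 1 + self.sigma * torch.randn_like(self.weight)
                return F.linear(x, self.weight * noise, self.bias)
            return super().forward(x)

    # Training through the injected noise pushes the network toward weights
    # that stay accurate on imperfect analog devices.
    model = nn.Sequential(NoisyLinear(16, 32), nn.ReLU(), NoisyLinear(32, 2))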
arXiv Detail & Related papers (2024-12-03T19:20:08Z)
- Resource-Efficient Sensor Fusion via System-Wide Dynamic Gated Neural Networks [16.0018681576301]
We propose a novel algorithmic strategy called Quantile-constrained Inference (QIC).
QIC makes joint, high-quality, swift decisions on all aspects of the sensor fusion system.
Our results confirm that QIC matches the optimum and outperforms its alternatives by over 80%.
arXiv Detail & Related papers (2024-10-22T06:12:04Z)
- Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA [20.629635991749808]
This paper proposes an algorithm and hardware co-design framework that can generate field-programmable gate array (FPGA)-based accelerators for efficient BayesNNs.
At the algorithm level, we propose novel multi-exit dropout-based BayesNNs with reduced computational and memory overheads.
At the hardware level, this paper introduces a transformation framework that can generate FPGA-based accelerators for the proposed efficient BayesNNs.
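The FPGA transformation itself is hardware tooling, but the underlying algorithm is easy to sketch: a dropout-based BayesNN with an early exit, run several times with dropout left on so the spread across samples estimates uncertainty. Layer sizes and the exit placement below are illustrative assumptions.

    # Multi-exit dropout-based BayesNN with Monte Carlo (MC) dropout inference.
    import torch
    import torch.nn as nn

    class MultiExitMCDropoutNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.block1 = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.2))
            self.exit1 = nn.Linear(32, 2)   # early exit: cheaper prediction
            self.block2 = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Dropout(0.2))
            self.exit2 = nn.Linear(32, 2)   # final exit

        def forward(self, x):
            h = self.block1(x)
            return self.exit1(h), self.exit2(self.block2(h))

    def mc_predict(net, x, samples=8):
        # Keep dropout active at inference and average stochastic passes;
        # the standard deviation across samples is the uncertainty estimate.
        net.train()
        outs = torch.stack([torch.stack(net(x), 0) for _ in range(samples)])
        return outs.mean(0), outs.std(0)  # per-exit mean and uncertainty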
arXiv Detail & Related papers (2024-06-20T17:08:42Z)
- Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner. We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials. We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
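A minimal sketch of the quantization idea, with uniform quantizers on both weights and membrane potentials in a leaky integrate-and-fire (LIF) update; the bit widths, value ranges, and LIF constants are assumptions, and WS-DR itself is not modeled.

    # Uniform quantization of weights and membrane potentials in a LIF step.
    import numpy as np

    def quantize(x, bits, x_max):
        # Symmetric uniform quantizer with 2^bits - 1 usable levels.
        q = 2 ** (bits - 1) - 1
        scale = q / x_max
        return np.clip(np.round(x * scale), -q, q) / scale

    def lif_step(v, spikes_in, w, v_th=1.0, decay=0.9):
        w_q = quantize(w, bits=4, x_max=1.0)       # 4-bit synaptic weights
        v = quantize(decay * v + spikes_in @ w_q,  # 8-bit membrane potential
                     bits=8, x_max=4.0)
        spikes_out = (v >= v_th).astype(float)
        return v * (1 - spikes_out), spikes_out    # reset fired neurons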
arXiv Detail & Related papers (2024-06-19T16:23:26Z)
- LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy while reducing power consumption by more than 100$\times$ compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can both compress the model size, reduce memory footprints, and facilitate low-latency execution.
We study various combinations of pruning and quantization, applied in isolation, cumulatively, and simultaneously, to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, suffering no loss in accuracy down to ternary weights.
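Ternary quantization of the kind referenced above maps each weight to {-s, 0, +s}, and the zeros double as pruned synapses. A sketch, using the common 0.7 * mean(|w|) threshold heuristic as an assumption:

    # Ternary weight quantization; zeroed weights act as pruned connections.
    import numpy as np

    def ternarize(w):
        delta = 0.7 * np.mean(np.abs(w))       # zeroing threshold (heuristic)
        mask = np.abs(w) > delta               # surviving (unpruned) weights
        scale = np.abs(w[mask]).mean() if mask.any() else 0.0
        return scale * np.sign(w) * mask       # values in {-scale, 0, +scale}

    w = np.random.randn(128, 128) * 0.1
    w_t = ternarize(w)
    sparsity = 1.0 - np.count_nonzero(w_t) / w_t.size  # fraction pruned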
arXiv Detail & Related papers (2023-02-08T16:25:20Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have large numbers of parameters and incur heavy computational costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
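FastFlowNet's specific modules are not described in this summary, but the coarse-to-fine scheme it follows is standard: estimate flow at the coarsest pyramid level, then upsample and refine with a residual at each finer level. The pyramid inputs and the per-level estimator below are placeholders.

    # Generic coarse-to-fine optical flow refinement loop.
    import torch
    import torch.nn.functional as F

    def coarse_to_fine_flow(pyr1, pyr2, estimate_residual):
        # pyr1/pyr2: feature pyramids (coarsest first), tensors of (B, C, H, W).
        b, _, h, w = pyr1[0].shape
        flow = torch.zeros(b, 2, h, w)
        for f1, f2 in zip(pyr1, pyr2):
            # Upsample coarse flow to this level and rescale its magnitude.
            flow = 2.0 * F.interpolate(flow, size=f1.shape[2:],
                                       mode="bilinear", align_corners=False)
            # Each level predicts only a residual correction.
            flow = flow + estimate_residual(f1, f2, flow)
        return flow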
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
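STDP, the learning rule FSpiNN builds on, strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise. A textbook pair-based update follows; time constants and learning rates are assumed values, and FSpiNN's memory and energy optimizations are not modeled.

    # Pair-based spike-timing-dependent plasticity (STDP) weight update.
    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_max=1.0):
        dt = t_post - t_pre  # spike-time difference in ms
        if dt >= 0:
            w += a_plus * np.exp(-dt / tau_plus)   # causal pair: potentiate
        else:
            w -= a_minus * np.exp(dt / tau_minus)  # anti-causal pair: depress
        return float(np.clip(w, 0.0, w_max))

    w = stdp_update(0.5, t_pre=10.0, t_post=15.0)  # pre before post -> w grows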
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting the features of the original full-precision network into high-dimensional quantized features.
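One plausible reading of this projection, sketched in PyTorch: widen the feature vector with a learned linear map before binarizing, so the low-precision representation retains more information. The expansion factor and the straight-through estimator (STE) backward pass are assumptions, not the paper's stated construction.

    # Widen features before binarization; STE passes gradients through sign().
    import torch
    import torch.nn as nn

    class BinarizeSTE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.sign(x)  # binary features

        @staticmethod
        def backward(ctx, g):
            (x,) = ctx.saved_tensors
            return g * (x.abs() <= 1).float()  # pass gradient where |x| <= 1

    class WideQuantFeature(nn.Module):
        def __init__(self, dim, expand=4):
            super().__init__()
            self.proj = nn.Linear(dim, dim * expand)  # widen before quantizing

        def forward(self, x):
            return BinarizeSTE.apply(self.proj(x))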
arXiv Detail & Related papers (2020-02-03T04:11:13Z)