ATP: Adaptive Threshold Pruning for Efficient Data Encoding in Quantum Neural Networks
- URL: http://arxiv.org/abs/2503.21815v1
- Date: Wed, 26 Mar 2025 01:14:26 GMT
- Title: ATP: Adaptive Threshold Pruning for Efficient Data Encoding in Quantum Neural Networks
- Authors: Mohamed Afane, Gabrielle Ebbrecht, Ying Wang, Juntao Chen, Junaid Farooq
- Abstract summary: We introduce Adaptive Threshold Pruning (ATP), an encoding method that reduces entanglement and optimizes data complexity for efficient computation in Quantum Neural Networks (QNNs). ATP dynamically prunes non-essential features in the data based on adaptive thresholds, effectively reducing quantum circuit requirements while preserving high performance. Our results highlight ATP's ability to balance computational efficiency and model resilience, achieving significant performance improvements with fewer resources.
- Score: 6.80372007036868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Neural Networks (QNNs) offer promising capabilities for complex data tasks, but are often constrained by limited qubit resources and high entanglement, which can hinder scalability and efficiency. In this paper, we introduce Adaptive Threshold Pruning (ATP), an encoding method that reduces entanglement and optimizes data complexity for efficient computations in QNNs. ATP dynamically prunes non-essential features in the data based on adaptive thresholds, effectively reducing quantum circuit requirements while preserving high performance. Extensive experiments across multiple datasets demonstrate that ATP reduces entanglement entropy and improves adversarial robustness when combined with adversarial training methods like FGSM. Our results highlight ATP's ability to balance computational efficiency and model resilience, achieving significant performance improvements with fewer resources, which will help make QNNs more feasible in practical, resource-constrained settings.
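The listing itself carries no code, but the mechanism the abstract describes — prune features whose magnitude falls below an adaptive threshold, then encode only the survivors — can be sketched in a few lines. This is a minimal illustration under stated assumptions (a per-sample percentile threshold and a one-qubit-per-feature angle encoding), not the authors' implementation:

```python
import numpy as np

def adaptive_threshold_prune(x, percentile=50.0):
    """Drop features whose magnitude falls below an adaptive,
    per-sample threshold (here: a percentile of |x|)."""
    threshold = np.percentile(np.abs(x), percentile)
    keep = np.abs(x) >= threshold
    return x[keep], keep

def angle_encode(features):
    """Map each surviving feature to a single-qubit RY rotation angle
    in [0, pi]; fewer survivors means fewer rotations (and qubits,
    under a one-qubit-per-feature encoding)."""
    f = np.abs(features)
    return np.pi * f / (f.max() + 1e-12)

# Example: an 8-feature sample keeps its 4 strongest features,
# halving the circuit's encoding requirements.
x = np.array([0.02, 0.9, -0.4, 0.05, 0.7, -0.01, 0.3, 0.08])
pruned, mask = adaptive_threshold_prune(x)
print(mask)                  # which features survived
print(angle_encode(pruned))  # rotation angles for the survivors
```

Under this reading, pruning half the features halves the encoding rotations a circuit needs, which is the resource saving the abstract claims.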
Related papers
- Federated Quantum-Train Long Short-Term Memory for Gravitational Wave Signal [3.360429911727189]
We present Federated QT-LSTM, a novel framework that combines the Quantum-Train (QT) methodology with Long Short-Term Memory (LSTM) networks in a federated learning setup. By leveraging quantum neural networks (QNNs) to generate classical LSTM model parameters during training, the framework effectively addresses challenges in model compression, scalability, and computational efficiency.
arXiv Detail & Related papers (2025-03-20T11:34:13Z) - Memory-Free and Parallel Computation for Quantized Spiking Neural Networks [12.227968342252026]
Quantized Spiking Neural Networks (QSNNs) offer superior energy efficiency and are well-suited for deployment on resource-limited edge devices. However, limited bit-width weights and membrane potentials result in a notable performance decline. We introduce a memory-free quantization method that captures all historical information without directly storing membrane potentials.
arXiv Detail & Related papers (2025-02-25T10:34:25Z) - Regression and Classification with Single-Qubit Quantum Neural Networks [0.0]
We use a resource-efficient and scalable Single-Qubit Quantum Neural Network (SQQNN) for both regression and classification tasks. For classification, we introduce a novel training method inspired by the Taylor series, which can efficiently find a global minimum in a single step. The SQQNN exhibits virtually error-free and strong performance in regression and classification tasks, including on the MNIST dataset. (A minimal single-qubit regression sketch appears after this list.)
arXiv Detail & Related papers (2024-12-12T17:35:36Z) - Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision mechanism that accelerates inference by dynamically assigning resources to each data instance.
Our method reduces inference cost while maintaining the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z) - LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z) - Generative AI-enabled Quantum Computing Networks and Intelligent Resource Allocation [80.78352800340032]
Quantum computing networks execute large-scale generative AI computation tasks and advanced quantum algorithms.
However, efficient resource allocation in quantum computing networks is a critical challenge due to qubit variability and network complexity.
We introduce state-of-the-art reinforcement learning (RL) algorithms, from generative learning to quantum machine learning, for optimal quantum resource allocation.
arXiv Detail & Related papers (2024-01-13T17:16:38Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs). (A minimal shared-backbone/multi-head sketch appears after this list.)
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
Adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks. (A generic bottleneck-adapter sketch appears after this list.)
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z) - Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z) - Entanglement Rate Optimization in Heterogeneous Quantum Communication Networks [79.8886946157912]
Quantum communication networks are emerging as a promising technology that could constitute a key building block in future communication networks in the 6G era and beyond.
Recent advances led to the deployment of small- and large-scale quantum communication networks with real quantum hardware.
In quantum networks, entanglement is a key resource that allows for data transmission between different nodes.
arXiv Detail & Related papers (2021-05-30T11:34:23Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a deep neural network (DNN) to learn solutions of the AC optimal power flow (AC-OPF) problem.
The proposed sensitivity-informed DNN (SI-DNN) is compatible with a broad range of OPF schemes and can be seamlessly integrated into other learning-to-OPF pipelines.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing. (A textbook pair-based STDP update is sketched after this list.)
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
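For the SQQNN entry above, a minimal regression sketch follows. The summary does not give the exact ansatz, so this assumes a common single-qubit form: the input is folded into an RY rotation on |0>, and the prediction is the Pauli-Z expectation of the rotated state, cos(w*x + b). Plain gradient descent stands in for the paper's Taylor-series trainer, which is not reconstructed here.

```python
import numpy as np

def predict(x, w, b):
    """Single-qubit model: encode w*x + b as an RY rotation on |0>;
    the Pauli-Z expectation of the rotated state is cos(w*x + b)."""
    return np.cos(w * x + b)

def train(x, y, lr=0.1, epochs=500):
    """Fit the two rotation parameters by gradient descent on MSE."""
    w, b = 0.5, 0.0
    for _ in range(epochs):
        err = predict(x, w, b) - y              # residuals
        grad = -2.0 * np.sin(w * x + b) * err   # d(err^2)/d(theta)
        w -= lr * np.mean(grad * x)             # chain rule: theta = w*x + b
        b -= lr * np.mean(grad)
    return w, b

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.cos(1.7 * x + 0.3)  # toy target expressible by the model
w, b = train(x, y)
print(w, b)                # should approach (1.7, 0.3) or an equivalent optimum
```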
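For the MEMTL entry, the shared-backbone/multi-head structure can be sketched as below. The layer sizes, tanh backbone, and plain averaging over heads are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

class MultiHeadEnsemble:
    """Shared backbone feeding several lightweight prediction heads;
    the ensemble output averages the heads."""
    def __init__(self, d_in, d_hidden, d_out, n_heads=3):
        self.backbone = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.heads = [rng.normal(0.0, 0.1, (d_hidden, d_out))
                      for _ in range(n_heads)]

    def forward(self, x):
        h = np.tanh(x @ self.backbone)      # shared representation
        preds = [h @ head for head in self.heads]
        return np.mean(preds, axis=0)       # ensemble over the heads

model = MultiHeadEnsemble(d_in=8, d_hidden=16, d_out=2)
batch = rng.normal(size=(4, 8))
print(model.forward(batch).shape)           # (4, 2)
```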
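For the adapter-ALBERT entry, the summary names adapters but gives no architecture; the sketch below is the generic bottleneck adapter (down-project, nonlinearity, up-project, residual) that such methods typically build on, with assumed dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)

class BottleneckAdapter:
    """Generic bottleneck adapter: down-project, ReLU, up-project,
    residual connection. Only the two small matrices are trained per
    task; the frozen backbone activations are reused across tasks."""
    def __init__(self, d_model=64, d_bottleneck=8):
        self.down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
        self.up = rng.normal(0.0, 0.02, (d_bottleneck, d_model))

    def forward(self, h):
        return h + np.maximum(h @ self.down, 0.0) @ self.up

h = rng.normal(size=(4, 64))  # activations from a frozen backbone layer
print(BottleneckAdapter().forward(h).shape)  # (4, 64)
```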
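For the FSpiNN entry, the STDP rule it builds on has a standard pair-based form, sketched below; the constants are textbook values, not FSpiNN's tuned parameters.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        w += a_plus * np.exp(-dt / tau_plus)
    else:        # post before (or with) pre -> depression
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # causal pair: w increases
w = stdp_update(w, t_pre=30.0, t_post=22.0)  # anti-causal pair: w decreases
print(round(w, 4))
```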