Dual-Qubit Hierarchical Fuzzy Neural Network for Image Classification: Enabling Relational Learning via Quantum Entanglement
- URL: http://arxiv.org/abs/2512.13274v1
- Date: Mon, 15 Dec 2025 12:35:53 GMT
- Title: Dual-Qubit Hierarchical Fuzzy Neural Network for Image Classification: Enabling Relational Learning via Quantum Entanglement
- Authors: Wenwei Zhang, Jintao Wang, Tianyu Ye, Changgeng Liao
- Abstract summary: This paper proposes a dual-qubit hierarchical fuzzy neural network (DQ-HFNN). It encodes feature pairs onto a pair of entangled qubits, which extends the single-feature fuzzy model to a joint fuzzy representation. Experiments under noisy conditions suggest that it is robust against noise and has the potential to be implemented on noisy intermediate-scale quantum devices.
- Score: 27.317371900372127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classical deep neural network models struggle to represent data uncertainty and to capture dependencies between features simultaneously, especially under fuzzy or noisy conditions. Although a quantum-assisted hierarchical fuzzy neural network (QA-HFNN) was previously proposed to learn a fuzzy membership for each feature, it cannot model dependencies between features because of its single-qubit encoding. To address this limitation, this paper proposes a dual-qubit hierarchical fuzzy neural network (DQ-HFNN), which encodes feature pairs onto a pair of entangled qubits and thereby extends the single-feature fuzzy model to a joint fuzzy representation. By introducing quantum entanglement, the dual-qubit circuit can encode non-classical correlations, enabling the model to directly learn relationship patterns between feature pairs. Experiments on benchmarks show that DQ-HFNN achieves higher classification accuracy than both QA-HFNN and classical deep learning baselines. Furthermore, ablation studies that control for circuit depth and parameter count show that the performance gain stems mainly from the relational modeling capability enabled by entanglement rather than from enhanced expressivity. The proposed DQ-HFNN model exhibits high parameter efficiency and fast inference speed. Experiments under noisy conditions suggest that it is robust against noise and has the potential to run on noisy intermediate-scale quantum devices.
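The circuit-level idea in the abstract, pairwise angle encoding followed by entangling gates and a joint measurement, can be sketched in a few lines. The PennyLane snippet below is a minimal illustration under stated assumptions: the gate layout, the use of RY angle encoding, and the reading of the four basis-state probabilities as joint membership degrees are illustrative choices, not the authors' exact DQ-HFNN circuit.

```python
# Hypothetical dual-qubit fuzzy membership block (illustrative only;
# not the paper's exact circuit). A feature pair is angle-encoded on
# two qubits, entangled, processed by trainable rotations, and the
# joint basis-state probabilities are read as membership degrees.
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def joint_membership(feature_pair, weights):
    # Angle-encode each (normalized) feature of the pair on its own qubit.
    qml.RY(np.pi * feature_pair[0], wires=0)
    qml.RY(np.pi * feature_pair[1], wires=1)
    # Entangle the qubits so the state can carry non-classical
    # correlations between the two features.
    qml.CNOT(wires=[0, 1])
    # Trainable rotations acting on the entangled state.
    qml.Rot(*weights[0], wires=0)
    qml.Rot(*weights[1], wires=1)
    qml.CNOT(wires=[1, 0])
    # Joint distribution over |00>, |01>, |10>, |11>, interpreted
    # here as four joint fuzzy membership degrees.
    return qml.probs(wires=[0, 1])

weights = np.random.uniform(0, 2 * np.pi, size=(2, 3))
print(joint_membership(np.array([0.3, 0.8]), weights))
```

Presumably the hierarchical network applies one such block per feature pair and aggregates the resulting membership vectors classically; swapping `default.qubit` for a noisy simulator such as `default.mixed` with explicit noise channels would mirror the kind of noisy-condition experiments the abstract mentions.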
Related papers
- Q-RUN: Quantum-Inspired Data Re-uploading Networks [9.564540024568245]
Data re-uploading quantum circuits (DRQC) are a key approach to implementing quantum neural networks. We introduce the mathematical paradigm of DRQC into classical models by proposing a quantum-inspired data re-uploading network (Q-RUN). Q-RUN retains the Fourier-expressive advantages of quantum models without any quantum hardware. (A minimal sketch of the re-uploading idea appears after this list.)
arXiv Detail & Related papers (2025-12-18T04:12:09Z)
- Hybrid Quantum Neural Networks for Efficient Protein-Ligand Binding Affinity Prediction [0.8957579200590984]
High-performance requirements and vast datasets involved in affinity prediction demand increasingly large AI models. Quantum machine learning has emerged as a promising solution to these challenges. This study proposes a hybrid quantum neural network (HQNN) that empirically demonstrates the capability to approximate non-linear functions.
arXiv Detail & Related papers (2025-09-14T02:20:21Z)
- HQFNN: A Compact Quantum-Fuzzy Neural Network for Accurate Image Classification [0.3595507621009123]
The Highly Quantized Fuzzy Neural Network (HQFNN) couples a quantum signal to a lightweight CNN feature extractor. HQFNN consistently surpasses classical, fuzzy-enhanced, and quantum-only baselines. Gate-count analysis shows that circuit depth grows sublinearly with input dimension.
arXiv Detail & Related papers (2025-06-11T09:12:20Z)
- Lean classical-quantum hybrid neural network model for image classification [12.353900068459446]
We introduce a Lean Classical-Quantum Hybrid Neural Network (LCQHNN), which achieves efficient classification performance with only four layers of variational circuits. Our experiments demonstrate that LCQHNN achieves 100%, 99.02%, and 85.55% classification accuracy on the MNIST, FashionMNIST, and CIFAR-10 datasets, respectively.
arXiv Detail & Related papers (2024-12-03T00:37:11Z)
- Noise-Resilient Unsupervised Graph Representation Learning via Multi-Hop Feature Quality Estimation [53.91958614666386]
For unsupervised graph representation learning (UGRL) based on graph neural networks (GNNs), we propose a novel method based on Multi-hop feature Quality Estimation (MQE).
arXiv Detail & Related papers (2024-07-29T12:24:28Z)
- Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model that seamlessly aligns with the versatility of a traditional feed-forward neural network (FFNN) in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization offers neural image compression (NIC) superior lossy compression performance. However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
arXiv Detail & Related papers (2020-12-18T04:19:30Z)
- Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
An input speech signal is first up-streamed to a quantum computing server to extract Mel-spectrogram features.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z)
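For the Q-RUN entry above: the data re-uploading paradigm it builds on interleaves data-encoding gates with trainable layers, so that each repetition enriches the Fourier spectrum of the functions the circuit can represent. Below is a minimal single-qubit sketch of that general DRQC idea, assuming a PennyLane backend; it illustrates the paradigm only and is not Q-RUN itself, which is a classical, quantum-inspired network.

```python
# Hypothetical single-qubit data re-uploading circuit (the DRQC
# paradigm, not the Q-RUN model itself). Each layer re-encodes the
# input before a trainable rotation; more layers add higher Fourier
# harmonics of x to the functions the circuit can express.
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def reupload_model(x, weights):
    for layer in weights:
        qml.RY(x, wires=0)        # re-upload the data
        qml.Rot(*layer, wires=0)  # trainable processing layer
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, 2 * np.pi, size=(3, 3))  # 3 layers
print(reupload_model(0.5, weights))
```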
This list is automatically generated from the titles and abstracts of the papers on this site.