LCQNN: Linear Combination of Quantum Neural Networks
- URL: http://arxiv.org/abs/2507.02832v2
- Date: Mon, 04 Aug 2025 13:29:29 GMT
- Title: LCQNN: Linear Combination of Quantum Neural Networks
- Authors: Hongshun Yao, Xia Liu, Mingrui Jing, Guangxi Li, Xin Wang
- Abstract summary: We introduce the Linear Combination of Quantum Neural Networks (LCQNN) framework, which uses the linear combination of unitaries concept to create a tunable design. We show how specific structural choices, such as adopting $k$-local control unitaries or restricting the model to certain group-theoretic subspaces, prevent gradients from collapsing. In group action scenarios, we show that by exploiting symmetry and excluding exponentially large irreducible subspaces, the model circumvents barren plateaus.
- Score: 7.010027035873597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum neural networks (QNNs) combine quantum computing with advanced data-driven methods, offering promising applications in quantum machine learning. However, the optimal paradigm for balancing trainability and expressivity in QNNs remains an open question. To address this, we introduce the Linear Combination of Quantum Neural Networks (LCQNN) framework, which uses the linear combination of unitaries concept to create a tunable design that mitigates vanishing gradients without incurring excessive classical simulability. We show how specific structural choices, such as adopting $k$-local control unitaries or restricting the model to certain group-theoretic subspaces, prevent gradients from collapsing while maintaining sufficient expressivity for complex tasks. We further employ the LCQNN model to handle supervised learning tasks, demonstrating its effectiveness on real datasets. In group action scenarios, we show that by exploiting symmetry and excluding exponentially large irreducible subspaces, the model circumvents barren plateaus. Overall, LCQNN provides a novel framework for focusing quantum resources into architectures that are practically trainable yet expressive enough to tackle challenging machine learning applications.
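To make the linear-combination-of-unitaries (LCU) idea behind LCQNN concrete, the following is a minimal, classically simulated sketch: a model state is formed as a normalized weighted sum of parameterized unitaries applied to an input state, and an observable is read out. The two-qubit circuit blocks, coefficients, and observable are illustrative assumptions for this sketch, not the construction used in the paper.

```python
# Illustrative sketch only: an "LCU-style" model simulated classically with
# NumPy. The circuit structure, coefficients, and observable are toy
# assumptions, not the LCQNN construction from the paper.
import numpy as np

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def unitary_block(params):
    """Toy 2-qubit parameterized unitary: RY on each qubit followed by CNOT."""
    return CNOT @ np.kron(ry(params[0]), ry(params[1]))

def lcqnn_expectation(coeffs, param_sets, state, observable):
    """Apply a weighted linear combination of unitaries to `state` and return
    the expectation value of `observable` after renormalization (an idealized
    stand-in for the post-selection step of an LCU circuit)."""
    combined = sum(c * unitary_block(p) @ state
                   for c, p in zip(coeffs, param_sets))
    combined = combined / np.linalg.norm(combined)
    return np.real(np.vdot(combined, observable @ combined))

# Example usage: two parameterized unitaries mixed with tunable weights.
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                               # |00>
obs = np.kron(Z, I2)                        # Z on the first qubit
coeffs = np.array([0.7, 0.3])
params = [np.array([0.4, 1.1]), np.array([2.0, 0.3])]
print(lcqnn_expectation(coeffs, params, psi0, obs))
```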
Related papers
- Inductive Graph Representation Learning with Quantum Graph Neural Networks [0.40964539027092917]
Quantum Graph Neural Networks (QGNNs) present a promising approach for combining quantum computing with graph-structured data processing. We propose a versatile QGNN framework inspired by the classical GraphSAGE approach, utilizing quantum models as aggregators. We show that our quantum approach exhibits robust generalization across molecules with varying numbers of atoms without requiring circuit modifications.
arXiv Detail & Related papers (2025-03-31T14:04:08Z)
- Single-Qudit Quantum Neural Networks for Multiclass Classification [0.0]
This paper proposes a single-qudit quantum neural network for multiclass classification. Our design employs a $d$-dimensional unitary operator, where $d$ corresponds to the number of classes. We evaluate our model on the MNIST and EMNIST datasets, demonstrating competitive accuracy.
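A hedged illustration of the single-qudit idea summarized above (not the authors' implementation): a $d \times d$ unitary is parameterized via a Hermitian generator, applied to a feature-encoded qudit state, and the $d$ outcome probabilities are read as class scores. The encoding and parameterization below are assumptions made for the sketch.

```python
# Illustrative sketch of a single-qudit multiclass classifier: a d-dimensional
# parameterized unitary acting on a feature-encoded qudit state, with the d
# outcome probabilities read as class scores. Names and encoding are assumptions.
import numpy as np

def parameterized_unitary(params, d):
    """Build a d x d unitary U = exp(iH) from a real parameter vector that
    fills a Hermitian generator H (d diagonal + d(d-1) off-diagonal reals)."""
    H = np.zeros((d, d), dtype=complex)
    idx = 0
    for i in range(d):
        H[i, i] = params[idx]; idx += 1
        for j in range(i + 1, d):
            H[i, j] = params[idx] + 1j * params[idx + 1]
            H[j, i] = np.conj(H[i, j])
            idx += 2
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(1j * evals)) @ evecs.conj().T

def encode(features, d):
    """Toy amplitude encoding of a feature vector into a d-dimensional state."""
    v = np.zeros(d, dtype=complex)
    v[:len(features)] = features
    return v / np.linalg.norm(v)

def class_probabilities(features, params, d):
    state = parameterized_unitary(params, d) @ encode(features, d)
    return np.abs(state) ** 2          # one probability per class

d = 4                                   # e.g., four classes
rng = np.random.default_rng(0)
params = rng.normal(size=d * d)         # d^2 real parameters in total
print(class_probabilities([0.2, 0.9, 0.1], params, d))
```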
arXiv Detail & Related papers (2025-03-12T11:12:05Z)
- Quantum Simplicial Neural Networks [11.758402121933996]
We present the first Quantum Topological Deep Learning Model: Quantum Simplicial Networks (QSNs). QSNs are a stack of Quantum Simplicial Layers, which are inspired by the Ising model to encode higher-order structures into quantum states. Experiments on synthetic classification tasks show that QSNs can outperform classical simplicial TDL models in accuracy and efficiency.
arXiv Detail & Related papers (2025-01-09T20:07:25Z)
- Quantum Pointwise Convolution: A Flexible and Scalable Approach for Neural Network Enhancement [0.0]
We propose a novel architecture that incorporates pointwise convolution within a quantum neural network framework. By using quantum circuits, we map data to a higher-dimensional space, capturing more complex feature relationships. In experiments, we applied the quantum pointwise convolution layer to classification tasks on the FashionMNIST and CIFAR10 datasets.
arXiv Detail & Related papers (2024-12-02T08:03:59Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing $d$ tunable RZ gates and $G - d$ Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that a sample complexity scaling linearly in $d$ is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in $d$.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- Coreset selection can accelerate quantum machine learning models with provable generalization [6.733416056422756]
Quantum neural networks (QNNs) and quantum kernels stand as prominent figures in the realm of quantum machine learning.
We present a unified approach: coreset selection, aimed at expediting the training of QNNs and quantum kernels.
arXiv Detail & Related papers (2023-09-19T08:59:46Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]
This paper investigates how the training of quantum neural networks (QNNs) can be done using quantum optimization algorithms.
A QNN structure is constructed in which a variational parameterized circuit is incorporated as an input layer, named Variational Quantum Neural Networks (VQNNs).
VQNNs are evaluated on MNIST digit recognition (less complex) and crack image classification datasets, converging in less time than a plain QNN while achieving decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NNs) for representing quantum states and the Variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
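As a rough sketch of the data re-uploading technique mentioned above (the cited work implements its formulations with the Qiskit SDK; this NumPy toy model, its layer structure, and its readout are assumptions made for illustration):

```python
# Minimal sketch of single-qubit data re-uploading (illustrative only).
# Each layer re-encodes the input x, interleaved with trainable rotations.
import numpy as np

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def reupload_circuit(x, thetas):
    """Apply L layers of (data rotation) then (trainable rotation) to |0>.
    `thetas` has shape (L, 2): an RY and an RZ angle per layer."""
    state = np.array([1.0, 0.0], dtype=complex)
    for t_ry, t_rz in thetas:
        state = ry(x) @ state                 # re-upload the data point x
        state = rz(t_rz) @ ry(t_ry) @ state   # trainable layer
    return state

def predict(x, thetas):
    """Probability of measuring |1>, used as a binary class score."""
    return np.abs(reupload_circuit(x, thetas)[1]) ** 2

thetas = np.array([[0.3, 1.2], [2.1, 0.5], [0.9, 1.7]])  # 3 layers
print(predict(0.8, thetas))
```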
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
arXiv Detail & Related papers (2022-05-16T17:58:27Z)
- Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
arXiv Detail & Related papers (2022-04-18T21:45:13Z)
- Branching Quantum Convolutional Neural Networks [0.0]
Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets.
We present a generalization of QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility.
arXiv Detail & Related papers (2020-12-28T19:00:03Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a noiseless QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We use a parameterisation of quantum systems based on artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NNS).
arXiv Detail & Related papers (2019-12-31T07:40:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.