Edge-Local and Qubit-Efficient Quantum Graph Learning for the NISQ Era
- URL: http://arxiv.org/abs/2602.16018v1
- Date: Tue, 17 Feb 2026 21:17:42 GMT
- Title: Edge-Local and Qubit-Efficient Quantum Graph Learning for the NISQ Era
- Authors: Armin Ahmadkhaniha, Jake Doliskani
- Abstract summary: We introduce a fully quantum graph convolutional architecture designed explicitly for unsupervised learning in the noisy intermediate-scale quantum regime. Our model decomposes message passing into pairwise interactions along graph edges using only hardware-native single- and two-qubit gates.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are a powerful framework for learning representations from graph-structured data, but their direct implementation on near-term quantum hardware remains challenging due to circuit depth, multi-qubit interactions, and qubit scalability constraints. In this work, we introduce a fully quantum graph convolutional architecture designed explicitly for unsupervised learning in the noisy intermediate-scale quantum (NISQ) regime. Our approach combines a variational quantum feature extraction layer with an edge-local and qubit-efficient quantum message-passing mechanism inspired by the Quantum Alternating Operator Ansatz (QAOA) framework. Unlike prior models that rely on global operations or multi-controlled unitaries, our model decomposes message passing into pairwise interactions along graph edges using only hardware-native single- and two-qubit gates. This design reduces the qubit requirement from $O(Nn)$ to $O(n)$ for a graph with $N$ nodes and $n$-qubit feature registers, enabling implementation on current quantum devices regardless of graph size. We train the model using the Deep Graph Infomax objective to perform unsupervised node representation learning. Experiments on the Cora citation network and a large-scale genomic SNP dataset demonstrate that our model remains competitive with prior quantum and hybrid approaches.
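The edge-local message passing described in the abstract can be illustrated with a small classical simulation. The sketch below is an assumption-laden toy, not the paper's implementation: each node carries a single-qubit feature register (n = 1), and each edge is processed with a parameterized ZZ interaction followed by single-qubit mixers, so only a 2-qubit state is ever simulated at once, mirroring the O(n) qubit claim. All function names and the angle-encoding convention are hypothetical.

```python
import numpy as np

# Pauli-Z and identity for single-qubit node registers (n = 1).
I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rzz(theta):
    """Two-qubit ZZ interaction exp(-i * theta/2 * Z (x) Z)."""
    zz = np.kron(Z, Z)
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * zz

def edge_local_message_pass(features, edges, theta_zz, theta_mix):
    """One round of edge-local message passing.

    features: (N,) array of angles; node u's state is RY(features[u])|0>.
    Edges are processed sequentially, each with only a 4-dim (2-qubit)
    simulation -- the register cost is independent of graph size N.
    """
    new = features.copy()
    ket0 = np.array([1, 0], dtype=complex)
    for u, v in edges:
        # Prepare the two-node product state.
        psi = np.kron(ry(features[u]) @ ket0, ry(features[v]) @ ket0)
        # Entangling edge interaction, then hardware-native mixers.
        psi = rzz(theta_zz) @ psi
        psi = np.kron(ry(theta_mix), ry(theta_mix)) @ psi
        # Read out <Z> on each register and fold it back into the angles.
        zu = np.real(psi.conj() @ (np.kron(Z, I2) @ psi))
        zv = np.real(psi.conj() @ (np.kron(I2, Z) @ psi))
        new[u] = np.arccos(np.clip(zu, -1.0, 1.0))
        new[v] = np.arccos(np.clip(zv, -1.0, 1.0))
    return new
```

With all angles zero the pass is the identity, which is a convenient sanity check; in a real QAOA-style circuit the two angles would be the trainable variational parameters.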
Related papers
- Graph Signal Generative Diffusion Models [74.75869068073577]
We introduce U-shaped encoder-decoder graph neural networks (U-GNNs) for graph signal generation using denoising diffusion processes. The architecture learns node features at different resolutions with skip connections between the encoder and decoder paths. We demonstrate the effectiveness of the diffusion model in probabilistic forecasting of stock prices.
arXiv Detail & Related papers (2025-09-21T21:57:27Z)
- Inductive Graph Representation Learning with Quantum Graph Neural Networks [0.40964539027092917]
Quantum Graph Neural Networks (QGNNs) present a promising approach for combining quantum computing with graph-structured data processing. We propose a versatile QGNN framework inspired by the classical GraphSAGE approach, utilizing quantum models as aggregators. We show that our quantum approach exhibits robust generalization across molecules with varying numbers of atoms without requiring circuit modifications.
arXiv Detail & Related papers (2025-03-31T14:04:08Z)
- A quantum annealing approach to graph node embedding [1.0878040851638]
Node embedding is a key technique for representing graph nodes as vectors while preserving structural and relational properties. Classical methods such as DeepWalk, node2vec, and graph convolutional networks learn node embeddings by capturing structural and relational patterns in graphs. Quantum computing provides a promising alternative for graph-based learning by leveraging quantum effects and introducing novel optimization approaches.
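The classical baselines this summary mentions (DeepWalk-style methods) learn embeddings from random-walk co-occurrences. As a minimal sketch of the corpus-generation step only, with all names hypothetical and a skip-gram model assumed downstream:

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Sample truncated random walks (DeepWalk-style corpus generation).

    adj: dict mapping node -> list of neighbours.
    Returns a list of walks; each walk is a list of node ids that a
    skip-gram model would then treat as a 'sentence' for embedding.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:  # dead end: stop this walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

The quantum-annealing variant in the paper replaces the optimization step, not this sampling step, so the sketch covers only the shared classical preprocessing.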
arXiv Detail & Related papers (2025-03-08T20:11:55Z)
- QGHNN: A quantum graph Hamiltonian neural network [30.632260870411177]
Graph Neural Networks (GNNs) strive to address the challenges posed by complex, high-dimensional graph data. Quantum Neural Networks (QNNs) present a compelling alternative due to their potential for quantum parallelism. This paper introduces a quantum graph Hamiltonian neural network (QGHNN) to enhance graph representation and learning on noisy intermediate-scale quantum computers.
arXiv Detail & Related papers (2025-01-14T10:15:17Z)
- Projected Stochastic Gradient Descent with Quantum Annealed Binary Gradients [51.82488018573326]
We present QP-SBGD, a novel layer-wise optimiser tailored towards training neural networks with binary weights.
BNNs reduce the computational requirements and energy consumption of deep learning models with minimal loss in accuracy.
Our algorithm is implemented layer-wise, making it suitable to train larger networks on resource-limited quantum hardware.
arXiv Detail & Related papers (2023-10-23T17:32:38Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learns the local message passing among nodes with the sequence of crossing-gate quantum operations.
To mitigate the inherent noises from modern quantum devices, we apply sparse constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Towards Quantum Graph Neural Networks: An Ego-Graph Learning Approach [47.19265172105025]
We propose a novel hybrid quantum-classical algorithm for graph-structured data, which we refer to as the Ego-graph based Quantum Graph Neural Network (egoQGNN).
egoQGNN implements the GNN theoretical framework using the tensor product and unity matrix representation, which greatly reduces the number of model parameters required.
The architecture is based on a novel mapping from real-world data to Hilbert space.
arXiv Detail & Related papers (2022-01-13T16:35:45Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
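The quantized representations this summary refers to come from vector quantization: each node feature is replaced by its nearest entry in a small learned codebook, so a layer aggregates over K codes instead of an exploding neighbour set. A minimal assignment-step sketch (names hypothetical, not VQ-GNN's actual API):

```python
import numpy as np

def vq_assign(features, codebook):
    """Assign each node feature to its nearest codebook vector.

    features: (N, d) node representations; codebook: (K, d) learned codes.
    Returns (codes, quantized): per-node code indices and the compressed
    representations a downstream GNN layer would consume instead of the
    full neighbour set.
    """
    # Squared Euclidean distance from every feature to every code.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d2.argmin(axis=1)
    return codes, codebook[codes]
```

Learning the codebook itself (e.g. by k-means-style updates) and the low-rank convolution factorization mentioned above are separate components not shown here.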
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.