Quantum Convolutional Neural Network with Flexible Stride
- URL: http://arxiv.org/abs/2412.00645v1
- Date: Sun, 01 Dec 2024 02:37:06 GMT
- Title: Quantum Convolutional Neural Network with Flexible Stride
- Authors: Kai Yu, Song Lin, Bin-Bin Cai
- Abstract summary: We propose a novel quantum convolutional neural network algorithm.
It can flexibly adjust the stride to accommodate different tasks.
It achieves an exponential speedup in data scale while using less memory than its classical counterpart.
- Score: 7.362858964229726
- Abstract: The convolutional neural network is a crucial tool for machine learning, especially in the field of computer vision, where its structure provides significant advantages in feature extraction. However, with the exponential growth of data scale, classical computing architectures face serious challenges in time efficiency and memory requirements. In this paper, we propose a novel quantum convolutional neural network algorithm. It can flexibly adjust the stride to accommodate different tasks while ensuring that the number of required qubits does not grow proportionally with the size of the sliding window. First, a data loading method based on quantum superposition is presented, which reduces space requirements exponentially. Subsequently, quantum subroutines for convolutional layers, pooling layers, and fully connected layers are designed, fully replicating the core functions of classical convolutional neural networks. Among them, a quantum arithmetic technique is introduced to recover the position of the corresponding receptive field from the position of each output feature, which makes the choice of stride more flexible. Moreover, parallel quantum amplitude estimation and swap test techniques are employed, enabling parallel feature extraction. Analysis shows that the method achieves an exponential speedup in data scale while using less memory than its classical counterpart. Finally, the proposed method is numerically simulated in the Qiskit framework on handwritten digit images from the MNIST dataset. The experimental results provide evidence for the effectiveness of the model.
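The two primitives the abstract leans on, superposition-based (amplitude-encoded) data loading and swap-test inner-product estimation, can be sketched in a few lines of Qiskit. The snippet below is a minimal illustration of those primitives only, not the paper's circuits; the toy patch, kernel, and register sizes are assumptions.

```python
# Minimal sketch: amplitude encoding packs a 2^n-dimensional vector into n
# qubits, and a swap test estimates the (squared) inner product between two
# encoded states -- a normalized stand-in for one convolution response.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)          # amplitude encoding needs unit norm

# Toy stand-ins for a 4-value receptive field and a 4-value filter. Amplitude
# encoding stores 2^n values in n qubits, so a 28x28 MNIST image needs only
# ceil(log2(784)) = 10 qubits -- the exponential space reduction in question.
patch  = unit([0.1, 0.5, 0.3, 0.2])
kernel = unit([0.4, 0.4, 0.1, 0.1])
n = 2                                      # qubits per data register

qc = QuantumCircuit(1 + 2 * n)             # ancilla + two data registers
qc.prepare_state(patch.tolist(),  [1, 2])  # unitary state preparation
qc.prepare_state(kernel.tolist(), [3, 4])

# Swap test: P(ancilla = 0) = (1 + |<patch|kernel>|^2) / 2, so the squared
# inner product is recovered as 2 * P0 - 1.
qc.h(0)
for i in range(n):
    qc.cswap(0, 1 + i, 1 + n + i)
qc.h(0)

p0 = Statevector.from_instruction(qc).probabilities([0])[0]
print(f"|<patch|kernel>|^2 ~= {2 * p0 - 1:.4f}")   # exact value ~ 0.6342
```

On hardware, P(ancilla = 0) would be estimated by repeated measurement or, as the abstract suggests, by quantum amplitude estimation; the statevector read-out here is a simulator shortcut.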
Related papers
- Quantum Pointwise Convolution: A Flexible and Scalable Approach for Neural Network Enhancement [0.0]
We propose a novel architecture, which incorporates pointwise convolution within a quantum neural network framework.
By using quantum circuits, we map data to a higher-dimensional space, capturing more complex feature relationships.
In experiments, we applied the quantum pointwise convolution layer to classification tasks on the FashionMNIST and CIFAR10 datasets.
arXiv Detail & Related papers (2024-12-02T08:03:59Z)
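As a rough, assumption-laden sketch of the idea this entry describes (not the authors' architecture): a pointwise (1x1) convolution mixes the channels at each pixel independently, which a small variational circuit can mimic by angle-encoding the channel vector and reading out one expectation value per output channel.

```python
# Hypothetical quantum pointwise (1x1) convolution: every name and dimension
# below is an illustrative assumption, not taken from the paper.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, Pauli

def pointwise_quantum_conv(image, weights):
    """image: (H, W, C) floats; weights: (layers, C) rotation angles."""
    H, W, C = image.shape
    out = np.zeros_like(image)
    for i in range(H):
        for j in range(W):
            qc = QuantumCircuit(C)
            for c in range(C):                    # angle-encode channel values
                qc.ry(float(image[i, j, c]), c)
            for layer in weights:                 # "trainable" channel mixing
                for c in range(C - 1):
                    qc.cx(c, c + 1)
                for c in range(C):
                    qc.ry(float(layer[c]), c)
            sv = Statevector.from_instruction(qc)
            for c in range(C):                    # <Z_c> -> output channel c
                out[i, j, c] = sv.expectation_value(Pauli("Z"), [c]).real
    return out

rng = np.random.default_rng(0)
x = rng.uniform(0, np.pi, size=(4, 4, 3))   # tiny 3-channel "image"
w = rng.uniform(0, np.pi, size=(2, 3))      # two variational layers
print(pointwise_quantum_conv(x, w).shape)   # -> (4, 4, 3)
```

A trained version would tune the `weights` angles with a classical optimizer, as in other variational quantum circuits.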
- Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z)
- Enhancing the expressivity of quantum neural networks with residual connections [0.0]
We propose a quantum circuit-based algorithm to implement quantum residual neural networks (QResNets).
Our work lays the foundation for a complete quantum implementation of the classical residual neural networks.
arXiv Detail & Related papers (2024-01-29T04:00:51Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
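The data re-uploading technique mentioned in this entry admits a compact single-qubit sketch in Qiskit; the layer count, gate choice, and parameter shapes below are illustrative assumptions rather than the paper's exact formulation.

```python
# Single-qubit data re-uploading: the same input x is re-encoded between
# trainable rotations, layer after layer, so one qubit can realize
# nonlinear decision boundaries.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def reupload_circuit(x, thetas):
    """x: up to 3 features; thetas: (layers, 3) trainable angles."""
    x3 = np.zeros(3)
    x3[: len(x)] = x                      # pad the features into one u gate
    qc = QuantumCircuit(1)
    for t in thetas:                      # each layer re-encodes the data
        qc.u(x3[0], x3[1], x3[2], 0)      # data-encoding rotation
        qc.u(t[0], t[1], t[2], 0)         # trainable rotation
    return qc

def predict(x, thetas):
    """Class score: probability of measuring |0> on the single qubit."""
    sv = Statevector.from_instruction(reupload_circuit(x, thetas))
    return sv.probabilities([0])[0]

rng = np.random.default_rng(1)
thetas = rng.uniform(0, 2 * np.pi, size=(4, 3))   # four re-uploading layers
print(predict(np.array([0.3, 0.7]), thetas))
```

Training would adjust `thetas` against labeled data with a classical optimizer; the point of re-uploading is that repeating the encoding makes the model nonlinear in the input.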
- Comparing concepts of quantum and classical neural network models for image classification task [0.456877715768796]
This material presents the results of experiments on the training and performance of a hybrid quantum-classical neural network.
Although its simulation is time-consuming, the quantum network outperforms the classical network.
arXiv Detail & Related papers (2021-08-19T18:49:30Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Variational learning for quantum artificial neural networks [0.0]
We first review a series of recent works describing the implementation of artificial neurons and feed-forward neural networks on quantum processors.
We then present an original realization of efficient individual quantum nodes based on variational unsampling protocols.
While keeping full compatibility with the overall memory-efficient feed-forward architecture, our constructions effectively reduce the quantum circuit depth required to determine the activation probability of single neurons.
arXiv Detail & Related papers (2021-03-03T16:10:15Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features of the original full-precision network into high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
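A minimal NumPy sketch of the widening idea this entry describes, under assumed dimensions and a random projection that are not the paper's exact construction: binarizing features in a widened space can retain more information than binarizing them directly.

```python
# Hypothetical sketch: widen features with a random projection before 1-bit
# quantization, then map the binary codes back for comparison.
import numpy as np

rng = np.random.default_rng(42)
features = rng.normal(size=(8, 64))       # batch of full-precision features

proj = rng.normal(size=(64, 256)) / np.sqrt(64)   # widen 64 -> 256 dims
wide_codes = np.sign(features @ proj)             # quantize in the wide space

def rel_err(recon):
    return np.linalg.norm(features - recon) / np.linalg.norm(features)

# Compare reconstruction error against directly binarizing the features.
recon_wide = (wide_codes @ proj.T) * (64 / 256)   # crude linear decoder
print("direct 1-bit:", round(rel_err(np.sign(features)), 3),
      "widened 1-bit:", round(rel_err(recon_wide), 3))
```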
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.