Traffic Sign Classification Using Deep and Quantum Neural Networks
- URL: http://arxiv.org/abs/2209.15251v1
- Date: Fri, 30 Sep 2022 06:16:03 GMT
- Title: Traffic Sign Classification Using Deep and Quantum Neural Networks
- Authors: Sylwia Kuros, Tomasz Kryjak
- Abstract summary: Quantum Neural Networks (QNNs) are an emerging technology that can be used in many applications, including computer vision.
In this paper, we present a traffic sign classification system implemented using a hybrid quantum-classical convolutional neural network.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Neural Networks (QNNs) are an emerging technology that can be used in
many applications, including computer vision. In this paper, we present a
traffic sign classification system implemented using a hybrid quantum-classical
convolutional neural network. Experiments on the German Traffic Sign
Recognition Benchmark dataset indicate that QNNs do not currently outperform
classical DCNNs (Deep Convolutional Neural Networks), yet they still achieve an
accuracy of over 90% and are definitely a promising solution for advanced
computer vision.
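The hybrid quantum-classical approach described in the abstract typically replaces part of a classical convolutional stage with a small parameterized quantum circuit. The sketch below is a simplified, hypothetical illustration (not the authors' actual circuit): it simulates with plain NumPy a 4-qubit "quanvolution" filter that encodes a 2x2 pixel patch via RY rotations, entangles the qubits with a CNOT ring, and reads out per-qubit Z expectation values as classical feature channels.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    # Apply a 1-qubit gate to `qubit` in an n-qubit state vector
    # (qubit 0 is the most significant bit of the basis index)
    ops = [gate if q == qubit else np.eye(2) for q in range(n)]
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def apply_cnot(state, control, target, n):
    # Swap amplitude pairs whose control bit is 1 (target bit flipped)
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            j = i ^ (1 << (n - 1 - target))
            new[i] = state[j]
    return new

def quantum_patch_features(patch):
    """Encode a 2x2 pixel patch (values in [0, 1]) into 4 qubits via RY
    rotations, entangle with a CNOT ring, and return the per-qubit
    Z expectation values as classical feature channels."""
    n = 4
    state = np.zeros(2 ** n)
    state[0] = 1.0  # start in |0000>
    for q, pixel in enumerate(np.asarray(patch).ravel()):
        state = apply_1q(state, ry(np.pi * pixel), q, n)
    for q in range(n):
        state = apply_cnot(state, q, (q + 1) % n, n)
    probs = np.abs(state) ** 2
    feats = []
    for q in range(n):
        # <Z_q> = P(bit q = 0) - P(bit q = 1)
        mask = np.array([(i >> (n - 1 - q)) & 1 for i in range(2 ** n)])
        feats.append(probs[mask == 0].sum() - probs[mask == 1].sum())
    return np.array(feats)
```

Sliding this filter over an image yields 4 feature maps that a classical network head can consume, which is the general pattern such hybrid architectures follow; the circuit depth, entangling layout, and encoding here are illustrative choices only.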
Related papers
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
The Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet) are developed.
LQNet and CTRQNet achieve accuracy increases as high as 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z) - Quantum Recurrent Neural Networks for Sequential Learning [11.133759363113867]
We propose a new kind of quantum recurrent neural network (QRNN) to find quantum advantageous applications in the near term.
Our QRNN is built by stacking the QRBs in a staggered way, which greatly reduces the algorithm's requirements on the coherence time of quantum devices.
The numerical experiments show that our QRNN achieves much better prediction (classification) accuracy than the classical RNN and state-of-the-art QNN models for sequential learning.
arXiv Detail & Related papers (2023-02-07T04:04:39Z) - QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, the classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z) - Predict better with less training data using a QNN [1.7481852615249125]
We describe a quanvolutional neural network (QNN) algorithm that efficiently maps classical image data to quantum states.
We empirically observe a genuine quantum advantage for an industrial application where the advantage is due to superior data encoding.
arXiv Detail & Related papers (2022-06-08T15:25:58Z) - QDCNN: Quantum Dilated Convolutional Neural Network [1.52292571922932]
We propose a novel hybrid quantum-classical algorithm called quantum dilated convolutional neural networks (QDCNNs).
Our method extends the concept of dilated convolution, which has been widely applied in modern deep learning algorithms, to the context of hybrid neural networks.
The proposed QDCNNs are able to capture larger context during the quantum convolution process while reducing the computational cost.
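For context, the classical dilated convolution that QDCNN extends to hybrid networks can be written in a few lines. This is a plain NumPy illustration of the classical operation only (not the paper's quantum circuit): spacing the kernel taps `dilation` samples apart enlarges the receptive field without adding weights.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with a dilated kernel: taps are spaced
    `dilation` samples apart, so a k-tap kernel covers a receptive
    field of (k - 1) * dilation + 1 samples with no extra weights."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

For example, a 3-tap kernel with dilation 2 spans 5 input samples, so `dilated_conv1d(np.arange(8.0), [1.0, 0.0, -1.0], dilation=2)` computes `x[i] - x[i + 4]` at each position; stacking layers with growing dilation rates is the standard way to capture larger context cheaply.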
arXiv Detail & Related papers (2021-10-29T10:24:34Z) - On Circuit-based Hybrid Quantum Neural Networks for Remote Sensing Imagery Classification [88.31717434938338]
The hybrid QCNNs enrich the classical architecture of CNNs by introducing a quantum layer within a standard neural network.
The novel QCNN proposed in this work is applied to the Land Use and Land Cover (LULC) classification, chosen as an Earth Observation (EO) use case.
The results of the multiclass classification prove the effectiveness of the presented approach, demonstrating that the QCNN outperforms its classical counterparts.
arXiv Detail & Related papers (2021-09-20T12:41:50Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - A Quantum Convolutional Neural Network for Image Classification [7.745213180689952]
We propose a novel neural network model named Quantum Convolutional Neural Network (QCNN).
QCNN is based on implementable quantum circuits and has a structure similar to classical convolutional neural networks.
Numerical simulation results on the MNIST dataset demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2021-07-08T06:47:34Z) - QFCNN: Quantum Fourier Convolutional Neural Network [4.344289435743451]
We propose a new hybrid quantum-classical circuit, namely the Quantum Fourier Convolutional Network (QFCN).
Theoretically, our model achieves an exponential speed-up over classical CNNs, and it improves on the best existing quantum CNN results.
We demonstrate the potential of this architecture by applying it to different deep learning tasks, including traffic prediction and image classification.
arXiv Detail & Related papers (2021-06-19T04:37:39Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We study QNNs with tree tensor and step-controlled structures for binary classification. Simulations show faster convergence and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.