Image Classification by Throwing Quantum Kitchen Sinks at Tensor
Networks
- URL: http://arxiv.org/abs/2208.13895v1
- Date: Mon, 29 Aug 2022 21:38:22 GMT
- Title: Image Classification by Throwing Quantum Kitchen Sinks at Tensor
Networks
- Authors: Nathan X. Kodama (Case Western Reserve University), Alex Bocharov
(Microsoft Quantum), Marcus P. da Silva (Microsoft Quantum)
- Abstract summary: We propose a new circuit ansatz for quantum machine learning.
We run numerical experiments to empirically evaluate the performance of the new ansatz on image classification.
The addition of feature optimization greatly boosts performance, leading to state-of-the-art quantum circuits for image classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several variational quantum circuit approaches to machine learning have been
proposed in recent years, with one promising class of variational algorithms
involving tensor networks operating on states resulting from local feature
maps. In contrast, a random feature approach known as quantum kitchen sinks
provides comparable performance, but leverages non-local feature maps. Here we
combine these two approaches by proposing a new circuit ansatz where a tree
tensor network coherently processes the non-local feature maps of quantum
kitchen sinks, and we run numerical experiments to empirically evaluate the
performance of the new ansatz on image classification. From the perspective of
classification performance, we find that simply combining quantum kitchen sinks
with tensor networks yields no qualitative improvements. However, the addition
of feature optimization greatly boosts performance, leading to state-of-the-art
quantum circuits for image classification, requiring only shallow circuits and
a small number of qubits -- both well within reach of near-term quantum
devices.
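A minimal classical sketch of the quantum-kitchen-sinks feature map the abstract builds on: each "episode" draws a random linear projection of the input into rotation angles of a small random circuit, and the measured bits become features for a linear classifier. The single-qubit circuit, Gaussian projection scale, and episode count below are illustrative assumptions, not the paper's exact ansatz (which processes these features with a tree tensor network).

```python
import numpy as np

rng = np.random.default_rng(0)

def qks_features(X, n_episodes=100, scale=1.0):
    """Classical simulation of a quantum-kitchen-sinks feature map.
    Each episode: random projection -> rotation angle -> measured bit.
    For a single qubit rotated by theta, P(measure |1>) = sin^2(theta/2)."""
    n_samples, n_dims = X.shape
    feats = np.empty((n_samples, n_episodes))
    for e in range(n_episodes):
        omega = scale * rng.standard_normal(n_dims)  # random projection
        beta = rng.uniform(0, 2 * np.pi)             # random bias angle
        theta = X @ omega + beta                     # rotation angles
        p1 = np.sin(theta / 2.0) ** 2                # P(measure |1>)
        feats[:, e] = (rng.uniform(size=n_samples) < p1).astype(float)
    return feats

X = rng.standard_normal((8, 4))
Phi = qks_features(X, n_episodes=16)
print(Phi.shape)  # (8, 16)
```

The resulting binary feature matrix would ordinarily be fed to a simple linear model; in the paper, a tree tensor network coherently processes these non-local features instead.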
Related papers
- Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z)
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn local message passing among nodes via a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to prune the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
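The sparsity constraint mentioned above can be illustrated with a classical stand-in: soft-thresholding edge weights (an L1-style proximal step) prunes weak, noise-sensitive connections before message passing. The threshold value and the plain NumPy formulation are illustrative assumptions, not QuanGCN's actual training procedure.

```python
import numpy as np

def sparsify_adjacency(A, lam=0.3):
    """Soft-threshold edge weights: entries with |weight| <= lam are
    zeroed, stronger edges are shrunk toward zero by lam."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

A = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.4],
              [0.1, 0.4, 0.0]])
print(sparsify_adjacency(A))
```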
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Multiclass classification using quantum convolutional neural networks with hybrid quantum-classical learning [0.5999777817331318]
We propose a quantum machine learning approach based on quantum convolutional neural networks for solving multiclass classification problems.
We use the proposed approach to demonstrate 4-class classification on the MNIST dataset, using eight qubits for data encoding and four ancilla qubits.
Our results demonstrate accuracy comparable to classical convolutional neural networks with similar numbers of trainable parameters.
arXiv Detail & Related papers (2022-03-29T09:07:18Z)
- New quantum neural network designs [0.0]
We investigate the performance of new quantum neural network designs.
We develop a new technique, where we merge feature map and variational circuit into a single parameterized circuit.
We achieve lower loss, better accuracy, and faster convergence.
arXiv Detail & Related papers (2022-03-12T10:20:14Z)
- Multi-class quantum classifiers with tensor network circuits for quantum phase recognition [0.0]
Tensor-network-inspired circuits have been proposed as a natural choice for variational quantum eigensolver circuits.
We present numerical experiments on multi-class classifiers based on tree tensor network and multiscale entanglement renormalization ansatz circuits.
arXiv Detail & Related papers (2021-10-15T21:55:13Z)
- An unsupervised feature learning for quantum-classical convolutional network with applications to fault detection [5.609958919699706]
We present a simple unsupervised method for quantum-classical convolutional networks to learn a hierarchy of quantum feature extractors.
The main contribution of the proposed approach is to use $K$-means clustering to maximize the difference of quantum properties in the quantum circuit ansatz.
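A classical sketch of the underlying idea: cluster the data with K-means and use distances to the learned centroids as features, in the spirit of unsupervised feature-extractor learning. The Euclidean-distance objective here is an illustrative stand-in for the quantum-property criterion the paper actually optimizes.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's K-means; the centroids later serve as feature extractors."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    return C

def centroid_features(X, C):
    """Encode each sample by negative squared distance to every centroid."""
    return -((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(3, 0.1, (20, 3))])
C = kmeans(X, k=2)
F = centroid_features(X, C)
print(F.shape)  # (40, 2)
```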
arXiv Detail & Related papers (2021-07-17T03:16:59Z)
- Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [73.29587731448345]
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations.
First, to obtain low bit-width weights, most existing methods derive the quantized weights by quantizing the full-precision network weights.
Second, to obtain low bit-width activations, existing works consider all channels equally.
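The baseline that the summary refers to, quantizing full-precision weights onto a uniform low-bit grid, can be sketched in a few lines. The symmetric grid spanning [-max|w|, +max|w|] is one common convention, chosen here for illustration rather than taken from the paper.

```python
import numpy as np

def quantize_uniform(w, bits=2):
    """Snap full-precision weights onto a uniform grid with 2^bits - 1
    evenly spaced levels across [-max|w|, +max|w|]."""
    levels = 2 ** bits - 1
    scale = np.abs(w).max()
    if scale == 0:
        return w
    step = 2 * scale / levels
    return np.round((w + scale) / step) * step - scale

w = np.array([-0.9, -0.2, 0.05, 0.7])
print(quantize_uniform(w, bits=2))
```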
arXiv Detail & Related papers (2020-12-26T15:21:18Z)
- Experimental Quantum Generative Adversarial Networks for Image Generation [93.06926114985761]
We experimentally achieve the learning and generation of real-world hand-written digit images on a superconducting quantum processor.
Our work provides guidance for developing advanced quantum generative models on near-term quantum devices.
arXiv Detail & Related papers (2020-10-13T06:57:17Z)
- Searching for Low-Bit Weights in Quantized Neural Networks [129.8319019563356]
Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators.
We propose regarding the discrete weights in an arbitrary quantized neural network as searchable variables, and utilize a differential method to search for them accurately.
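One standard way to make discrete weights "searchable" by gradient methods, used here as an illustrative sketch rather than the paper's exact formulation, is to represent each weight as a softmax-weighted mixture over the candidate low-bit levels; the continuous logits are optimized by gradient descent, and a final argmax recovers the discrete weight.

```python
import numpy as np

def soft_weight(logits, levels, temp=1.0):
    """Relaxed discrete weight: softmax over candidate levels gives a
    differentiable expected weight value per entry."""
    z = logits / temp
    p = np.exp(z - z.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ levels                    # expected (relaxed) weight value

levels = np.array([-1.0, 0.0, 1.0])      # ternary candidate set
logits = np.array([[4.0, 0.0, 0.0],      # strongly prefers -1
                   [0.0, 0.0, 4.0]])     # strongly prefers +1
print(soft_weight(logits, levels))
```

As the temperature is lowered, the mixture concentrates on one level per weight, approaching a hard quantized network.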
arXiv Detail & Related papers (2020-09-18T09:13:26Z)
- Experimental realization of a quantum image classifier via tensor-network-based machine learning [4.030017427802459]
We demonstrate highly successful classifications of real-life images using photonic qubits.
We focus on binary classification for hand-written zeroes and ones, whose features are cast into the tensor-network representation.
Our scheme can be scaled to efficient multi-qubit encodings of features in the tensor-product representation.
arXiv Detail & Related papers (2020-03-19T03:26:27Z)
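The tensor-product feature encoding mentioned in the last entry can be sketched classically: each pixel is mapped through a local two-component feature map, and the full image state is the tensor product of the local vectors. The cosine/sine map below is one common convention in the tensor-network literature, not necessarily the exact map used in that experiment; materializing the full 2^n vector is exponential, which is precisely what the quantum/tensor-network approach avoids.

```python
import numpy as np

def local_feature_map(x):
    """Map a pixel x in [0, 1] to a unit vector on one qubit:
    [cos(pi x / 2), sin(pi x / 2)]."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def product_state(pixels):
    """Tensor-product encoding of all pixels into one 2^n state vector
    (exponential classically; shown only for tiny n)."""
    state = np.array([1.0])
    for x in pixels:
        state = np.kron(state, local_feature_map(x))
    return state

psi = product_state([0.0, 0.5, 1.0])
print(psi.shape)  # (8,)
print(np.round(np.linalg.norm(psi), 6))  # 1.0 (each local map is a unit vector)
```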
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.