Quantum-Classical Machine learning by Hybrid Tensor Networks
- URL: http://arxiv.org/abs/2005.09428v2
- Date: Wed, 14 Aug 2024 06:43:55 GMT
- Title: Quantum-Classical Machine learning by Hybrid Tensor Networks
- Authors: Ding Liu, Jiaqi Yao, Zekun Yao, Quan Zhang
- Abstract summary: We propose quantum-classical hybrid tensor networks (HTN), which combine tensor networks with classical neural networks in a uniform deep learning framework.
We show the potential applications of HTN, including quantum state classification and a quantum-classical autoencoder.
- Score: 14.851178989158976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor networks (TNs) have found wide use in machine learning, and in particular, TNs and deep learning bear striking similarities. In this work, we propose quantum-classical hybrid tensor networks (HTN), which combine tensor networks with classical neural networks in a uniform deep learning framework to overcome the limitations of regular tensor networks in machine learning. We first analyze the limitations of regular tensor networks in machine learning applications, namely their representation power and architecture scalability. We conclude that regular tensor networks are, in fact, not competent to serve as the basic building blocks of deep learning. We then discuss the performance of HTN, which overcomes these deficiencies of regular tensor networks for machine learning. In this sense, we are able to train HTN in the standard deep learning way, i.e., with the usual combination of algorithms such as backpropagation and stochastic gradient descent. We finally provide two applicable cases to show the potential applications of HTN: quantum state classification and a quantum-classical autoencoder. These cases also demonstrate the great potential of designing various HTNs in the deep learning way.
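To make the hybrid architecture concrete, here is a minimal PyTorch sketch (not the authors' implementation) of an HTN in the spirit of the abstract: a matrix-product-state (MPS) style tensor-network layer contracts locally embedded input features into a vector, a small classical dense head maps that vector to class scores, and the whole model is trained end to end with backpropagation and stochastic gradient descent. The local feature map, layer sizes, and initialization are illustrative assumptions.

```python
# Minimal hybrid tensor network (HTN) sketch: an MPS-style tensor-network
# layer feeding a classical dense head. Illustrative only; sizes are assumptions.
import torch
import torch.nn as nn

class MPSLayer(nn.Module):
    """Contracts N locally embedded inputs through a matrix product state."""
    def __init__(self, n_sites: int, bond_dim: int = 8):
        super().__init__()
        # One (bond_dim x 2 x bond_dim) core per site; near-identity
        # initialization keeps the product of N matrices numerically stable.
        eye = torch.eye(bond_dim).reshape(bond_dim, 1, bond_dim).repeat(1, 2, 1)
        self.cores = nn.Parameter(
            eye.expand(n_sites, -1, -1, -1).clone()
            + 0.01 * torch.randn(n_sites, bond_dim, 2, bond_dim))
        self.left = nn.Parameter(torch.randn(bond_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_sites), entries scaled to [0, 1].
        # Local feature map phi(x) = [cos(pi*x/2), sin(pi*x/2)].
        phi = torch.stack(
            [torch.cos(torch.pi * x / 2), torch.sin(torch.pi * x / 2)], dim=-1)
        v = self.left.expand(x.shape[0], -1)            # (batch, D) boundary
        for i in range(self.cores.shape[0]):
            # Contract the physical index with phi, then the bond index with v.
            mat = torch.einsum('bp,dpe->bde', phi[:, i], self.cores[i])
            v = torch.einsum('bd,bde->be', v, mat)
        return v                                        # handed to the NN head

class HTNClassifier(nn.Module):
    """Hybrid model: tensor-network feature extractor + classical dense head."""
    def __init__(self, n_sites: int, n_classes: int, bond_dim: int = 8):
        super().__init__()
        self.tn = MPSLayer(n_sites, bond_dim)
        self.head = nn.Sequential(
            nn.Linear(bond_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.tn(x))

# Standard deep learning recipe: backpropagation + SGD over the whole hybrid.
model = HTNClassifier(n_sites=16, n_classes=2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(32, 16), torch.randint(0, 2, (32,))   # toy data
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

The design point mirrors the abstract's claim: because the tensor-network contraction is just a differentiable sequence of einsum operations, it composes freely with ordinary neural network layers and trains under the standard deep learning toolchain.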
Related papers
- Arbitrary Polynomial Separations in Trainable Quantum Machine Learning [1.0080317855851213]
Recent theoretical results in quantum machine learning have demonstrated a general trade-off between the expressive power of quantum neural networks (QNNs) and their trainability.
We here circumvent these negative results by constructing a hierarchy of efficiently trainable QNNs that exhibit unconditionally provable memory separations.
We show that quantum contextuality is the source of the expressivity separation, suggesting that classical sequence learning problems with long-time correlations may be a regime where quantum machine learning offers practical advantages.
arXiv Detail & Related papers (2024-02-13T17:12:01Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - The cross-sectional stock return predictions via quantum neural network and tensor network [0.0]
We investigate the application of quantum and quantum-inspired machine learning algorithms to stock return predictions.
We evaluate the performance of quantum neural network, an algorithm suited for noisy intermediate-scale quantum computers, and tensor network, a quantum-inspired machine learning algorithm.
arXiv Detail & Related papers (2023-04-25T00:05:13Z) - Tensor Networks Meet Neural Networks: A Survey and Future Perspectives [27.878669143107885]
Tensor networks (TNs) and neural networks (NNs) are two fundamental data modeling approaches.
TNs solve the curse of dimensionality in large-scale tensors by converting an exponential number of dimensions to polynomial complexity; for example, an order-$N$ tensor with $d^N$ entries can be stored as a tensor train with only $O(Ndr^2)$ parameters for bond dimension $r$.
NNs have displayed exceptional performance in various applications, e.g., computer vision, natural language processing, and robotics research.
arXiv Detail & Related papers (2023-01-22T17:35:56Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - Quantum Phase Recognition using Quantum Tensor Networks [0.0]
This paper examines a quantum machine learning approach based on shallow variational ansätze inspired by tensor networks for supervised learning tasks.
We are able to reach $\geq 98\%$ test-set accuracies with both multi-scale entanglement renormalization ansatz (MERA) and tree tensor network (TTN) inspired parametrized quantum circuits (a minimal sketch of a TTN-style circuit appears after this list).
arXiv Detail & Related papers (2022-12-12T19:29:07Z) - Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation [0.3871780652193725]
We propose using this framework to understand the problem of generalisation in Deep Neural Networks.
A framework of this kind explains the overfitting behavior of Deep Neural Networks during the training step and the corresponding generalisation capabilities.
We apply a novel algorithm we developed, showing that it obtains similar results to standard neural networks, but without the need for training.
arXiv Detail & Related papers (2022-10-25T03:14:59Z) - Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We use a parameterisation of quantum systems via artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NNQS).
arXiv Detail & Related papers (2019-12-31T07:40:23Z)
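As an illustration of the TTN-style circuits mentioned in the "Quantum Phase Recognition using Quantum Tensor Networks" entry above, the following is a minimal PennyLane sketch on four qubits; the block structure, gate choices, and measurement are illustrative assumptions rather than that paper's exact ansatz.

```python
# Sketch of a tree tensor network (TTN) inspired parametrized quantum circuit.
# Illustrative ansatz on 4 qubits, not the exact circuit used in the paper.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

def block(weights, wires):
    # Two-qubit TTN building block: local rotations plus an entangler.
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])
    qml.CNOT(wires=wires)

@qml.qnode(dev)
def ttn_classifier(x, weights):
    # Angle-encode the 4 input features.
    for i in range(4):
        qml.RY(np.pi * x[i], wires=i)
    # Layer 1 pairs (0,1) and (2,3); layer 2 merges the surviving qubits,
    # so the circuit depth grows only logarithmically with the qubit count.
    block(weights[0], wires=[0, 1])
    block(weights[1], wires=[2, 3])
    block(weights[2], wires=[1, 3])
    # Binary class score read out from the root qubit of the tree.
    return qml.expval(qml.PauliZ(3))

weights = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]], requires_grad=True)
print(ttn_classifier(np.array([0.1, 0.4, 0.6, 0.9]), weights))
```

Training such a circuit typically runs a gradient-based optimizer on the measured expectation value, the same outer loop used for a classical network.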
This list is automatically generated from the titles and abstracts of the papers in this site.