Completely Quantum Neural Networks
- URL: http://arxiv.org/abs/2202.11727v1
- Date: Wed, 23 Feb 2022 19:00:03 GMT
- Title: Completely Quantum Neural Networks
- Authors: Steve Abel, Juan C. Criado, Michael Spannowsky
- Abstract summary: We describe how to embed and train a general neural network in a quantum annealer.
We develop three crucial ingredients: binary encoding of the free parameters of the network, polynomial approximation of the activation function, and reduction of binary higher-order polynomials into quadratic ones.
We implement this for an elementary network and illustrate the advantages of quantum training: its consistency in finding the global minimum of the loss function and the fact that the network training converges in a single annealing step.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial neural networks are at the heart of modern deep learning
algorithms. We describe how to embed and train a general neural network in a
quantum annealer without introducing any classical element in training. To
implement the network on a state-of-the-art quantum annealer, we develop three
crucial ingredients: binary encoding the free parameters of the network,
polynomial approximation of the activation function, and reduction of binary
higher-order polynomials into quadratic ones. Together, these ideas allow
encoding the loss function as an Ising model Hamiltonian. The quantum annealer
then trains the network by finding the ground state. We implement this for an
elementary network and illustrate the advantages of quantum training: its
consistency in finding the global minimum of the loss function and the fact
that the network training converges in a single annealing step, which leads to
short training times while maintaining a high classification performance. Our
approach opens a novel avenue for the quantum training of general machine
learning models.
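To make the three ingredients concrete, below is a minimal classical sketch (not the authors' code): a fixed-point binary encoding of a single weight, a low-order polynomial stand-in for a tanh activation, and the standard Rosenberg reduction of a cubic binary monomial to a quadratic one via an auxiliary variable. Bit width, penalty strength, and helper names are assumptions for illustration; in the paper's setting these steps are applied to the full loss function, and the resulting quadratic binary polynomial is handed to the annealer as an Ising/QUBO problem whose ground state encodes the trained weights.

```python
# Minimal, classical mock-up of the three ingredients named in the abstract.
# Bit widths, the toy activation, and all helper names are illustrative
# assumptions, not the paper's actual implementation.
import itertools
import numpy as np

# 1) Binary encoding: represent a continuous weight w in [lo, hi) by n fixed-point
#    binary variables b_k in {0, 1}.
def decode_weight(bits, lo=-1.0, hi=1.0):
    frac = sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))  # in [0, 1)
    return lo + (hi - lo) * frac

# 2) Polynomial approximation of the activation: replace tanh(x) by a low-order
#    polynomial so that the loss stays polynomial in the binary variables.
def tanh_poly(x):
    return x - x**3 / 3.0  # third-order expansion of tanh around 0

# 3) Quadratization: reduce the cubic binary monomial b1*b2*b3 to quadratic terms
#    by introducing an auxiliary binary a meant to equal b1*b2. The penalty
#    b1*b2 - 2*a*(b1 + b2) + 3*a is 0 when a == b1*b2 and >= 1 otherwise
#    (Rosenberg reduction), so minimizing over a reproduces the cubic term.
def cubic_as_quadratic(b1, b2, b3, a, penalty=10.0):
    return a * b3 + penalty * (b1 * b2 - 2 * a * (b1 + b2) + 3 * a)

# Exhaustive check that the reduction reproduces b1*b2*b3 at the optimum over a.
for b1, b2, b3 in itertools.product((0, 1), repeat=3):
    assert min(cubic_as_quadratic(b1, b2, b3, a) for a in (0, 1)) == b1 * b2 * b3

print(decode_weight((1, 0, 1)))        # 0.25 with a 3-bit encoding of [-1, 1)
print(tanh_poly(0.5), np.tanh(0.5))    # 0.458... vs 0.462...
```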
Related papers
- Dissipation-driven quantum generative adversarial networks [11.833077116494929]
We introduce a novel dissipation-driven quantum generative adversarial network (DQGAN) architecture specifically tailored for generating classical data.
The classical data is encoded into the input qubits of the input layer via strong tailored dissipation processes.
We extract both the generated data and the classification results by measuring the observables of the steady state of the output qubits.
arXiv Detail & Related papers (2024-08-28T07:41:58Z) - CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
We develop the Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet).
LQNet and CTRQNet achieve accuracy increases as high as 40% on CIFAR 10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z) - ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z) - Realization of a quantum neural network using repeat-until-success
circuits in a superconducting quantum processor [0.0]
In this paper, we use repeat-until-success circuits enabled by real-time control-flow feedback to realize quantum neurons with non-linear activation functions.
As an example, we construct a minimal feedforward quantum neural network capable of learning all 2-to-1-bit Boolean functions.
This model is shown to perform non-linear classification and to learn effectively from multiple copies of a single training state consisting of the maximal superposition of all inputs.
arXiv Detail & Related papers (2022-12-21T03:26:32Z) - Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the difficult credit assignment.
We show how a carefully implemented RL agent that uses a GNN as the basic policy construct can address these challenges (a toy sketch of the contraction-cost objective follows this list).
arXiv Detail & Related papers (2022-04-18T21:45:13Z) - A Hybrid Quantum-Classical Neural Network Architecture for Binary
Classification [0.0]
We propose a hybrid quantum-classical neural network architecture where each neuron is a variational quantum circuit.
On simulated hardware, we observe that the hybrid neural network achieves roughly 10% higher classification accuracy and 20% better minimization of cost than an individual variational quantum circuit.
arXiv Detail & Related papers (2022-01-05T21:06:30Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum Annealing Formulation for Binary Neural Networks [40.99969857118534]
In this work, we explore binary neural networks, which are lightweight yet powerful models typically intended for resource-constrained devices.
We devise a quadratic unconstrained binary optimization formulation for the training problem.
While the problem is intractable, i.e., the cost to estimate the binary weights scales exponentially with network size, we show how the problem can be optimized directly on a quantum annealer.
arXiv Detail & Related papers (2021-07-05T03:20:54Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep
Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - Quantum Deformed Neural Networks [83.71196337378022]
We develop a new quantum neural network layer designed to run efficiently on a quantum computer.
It can be simulated on a classical computer when restricted in the way it entangles input states.
arXiv Detail & Related papers (2020-10-21T09:46:12Z)
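As referenced in the tensor-network contraction entry above, the sketch below spells out the objective such an RL agent optimizes: the cost of contracting a small tensor network under a given pairwise order. The FLOP-style cost model, the toy ring network, and the helper names are assumptions for illustration, not that paper's setup; NumPy's greedy path optimizer is included only as a classical baseline.

```python
# Illustrative sketch of the contraction-ordering objective an RL agent would
# optimize. The cost model and the toy network below are assumptions.
import numpy as np

def contraction_cost(subscripts, dims, order):
    """Total cost of contracting a closed tensor network pairwise in `order`.

    subscripts: list of index strings, one per tensor (e.g. "ij").
    dims:       dict mapping index label -> dimension.
    order:      list of (i, j) pairs referring to current positions in the pool;
                the freshly contracted tensor is appended at the end.
    """
    pools = [set(s) for s in subscripts]
    total = 0
    for i, j in order:
        a, b = pools[i], pools[j]
        # Cost model: product of the dimensions of all indices touched in this step.
        total += int(np.prod([dims[k] for k in a | b]))
        remaining = [p for k, p in enumerate(pools) if k not in (i, j)]
        others = set().union(*remaining) if remaining else set()
        # Indices that appear in no other tensor are summed out in this step.
        pools = remaining + [{k for k in a | b if k in others}]
    return total

# Toy network: a ring of four matrices, Tr(A B C D).
subs = ["ij", "jk", "kl", "li"]
dims = {"i": 2, "j": 64, "k": 2, "l": 64}

print(contraction_cost(subs, dims, [(0, 1), (0, 1), (0, 1)]))  # left-to-right order
print(contraction_cost(subs, dims, [(1, 2), (0, 1), (0, 1)]))  # a different order

# Classical baseline: NumPy's greedy path optimizer on the same network.
arrays = [np.ones([dims[c] for c in s]) for s in subs]
path, info = np.einsum_path(",".join(subs) + "->", *arrays, optimize="greedy")
print(info)
```

On this toy ring the two hand-picked orders already differ in cost by more than an order of magnitude, which is the kind of gap an ordering policy is trained to exploit.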
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.