Accelerating the training of single-layer binary neural networks using
the HHL quantum algorithm
- URL: http://arxiv.org/abs/2210.12707v1
- Date: Sun, 23 Oct 2022 11:58:05 GMT
- Title: Accelerating the training of single-layer binary neural networks using
the HHL quantum algorithm
- Authors: Sonia Lopez Alarcon, Cory Merkel, Martin Hoffnagle, Sabrina Ly,
Alejandro Pozas-Kerstjens
- Abstract summary: This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Binary Neural Networks are a promising technique for implementing efficient
deep models with reduced storage and computational requirements. Training them,
however, is still a compute-intensive problem that grows drastically with the
layer size and the amount of input data. At the core of this calculation is the
linear regression problem. The Harrow-Hassidim-Lloyd (HHL) quantum algorithm
has gained relevance thanks to its promise of providing a quantum state
containing the solution of a linear system of equations. The solution is
encoded in superposition at the output of a quantum circuit. Although this
seems to provide the answer to the linear regression problem for training
neural networks, it also comes with multiple, difficult-to-avoid hurdles. This
paper shows, however, that useful information can be extracted from the
quantum-mechanical implementation of HHL and used to reduce the complexity of
finding the solution on the classical side.
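To make the training step the abstract refers to concrete, the following is a minimal NumPy sketch (an illustration under simplifying assumptions, not the paper's implementation): it forms the normal equations $Aw = b$ that define the linear regression at the core of training a single layer, solves them classically, and binarizes the weights. HHL would instead prepare a quantum state proportional to $A^{-1}b$, from which only partial information can be read out efficiently.

```python
# Minimal sketch (not the paper's code): the classical linear regression
# at the core of training a single-layer binary neural network.
# HHL targets the same system A w = b, but returns |w> encoded in
# superposition rather than the explicit weight vector.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: n samples of dimension m, with +/-1 targets.
n, m = 128, 8
X = rng.standard_normal((n, m))
y = np.sign(rng.standard_normal(n))

# Least-squares formulation: solve the normal equations
#   (X^T X) w = X^T y,  i.e.  A w = b  with  A = X^T X,  b = X^T y.
A = X.T @ X
b = X.T @ y
w = np.linalg.solve(A, b)

# Binarization step typical of BNNs: keep only the sign of each weight.
# Illustrative assumption: we binarize the exact classical solution; the
# paper's point is that coarse information extracted from the HHL output
# state can narrow down this classical search.
w_bin = np.sign(w)

accuracy = np.mean(np.sign(X @ w_bin) == y)
print(f"training accuracy with binarized weights: {accuracy:.2f}")
```

The final step is the relevant one here: for a binary network only the signs of the weights survive binarization, which is the kind of partial information the paper argues can be recovered from the HHL output to reduce the classical workload.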
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing $d$ tunable RZ gates and $G-d$ Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that a sample complexity scaling linearly in $d$ is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in $d$.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Training Multi-layer Neural Networks on Ising Machine [41.95720316032297]
This paper proposes an Ising learning algorithm to train quantized neural networks (QNNs).
As far as we know, this is the first algorithm to train multi-layer feedforward networks on Ising machines.
arXiv Detail & Related papers (2023-11-06T04:09:15Z)
- Quantum Annealing for Single Image Super-Resolution [86.69338893753886]
We propose a quantum computing-based algorithm to solve the single image super-resolution (SISR) problem.
The proposed AQC-based algorithm is demonstrated to achieve improved speed-up over a classical analog while maintaining comparable SISR accuracy.
arXiv Detail & Related papers (2023-04-18T11:57:15Z)
- Learning To Optimize Quantum Neural Network Without Gradients [3.9848482919377006]
We introduce a novel meta-optimization algorithm that trains a meta-optimizer network to output parameters for the quantum circuit.
We show that we reach better-quality minima in fewer circuit evaluations than existing gradient-based algorithms on different datasets.
arXiv Detail & Related papers (2023-04-15T01:09:12Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation [8.947825738917869]
The ridgelet transform has been a fundamental mathematical tool in the theoretical study of neural networks.
We develop a quantum ridgelet transform (QRT) which implements the ridgelet transform of a quantum state within a linear runtime $O(D)$ of quantum computation, in contrast to the $\exp(O(D))$ runtime of the classical counterpart.
As an application, we show that one can use QRT as a fundamental subroutine for QML to efficiently find a sparse trainable subnetwork of large shallow wide neural networks.
arXiv Detail & Related papers (2023-01-27T19:00:00Z)
- Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
arXiv Detail & Related papers (2022-04-18T21:45:13Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.