An Algorithm for Fast Supervised Learning in Variational Circuits
through Simultaneous Processing of Multiple Samples
- URL: http://arxiv.org/abs/2011.14297v1
- Date: Sun, 29 Nov 2020 06:14:41 GMT
- Title: An Algorithm for Fast Supervised Learning in Variational Circuits
through Simultaneous Processing of Multiple Samples
- Authors: Siddharth Dangwal, Ritvik Sharma, Debanjan Bhowmik
- Abstract summary: We propose a novel algorithm for fast training of variational classifiers by processing multiple samples in parallel.
The presented algorithm utilizes qRAM and other quantum circuits in the forward pass.
Although we discuss only binary classification in the paper, the algorithm can be easily generalized to multi-class classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel algorithm for fast training of variational classifiers by
processing multiple samples in parallel. The algorithm can be adapted for any
ansatz used in the variational circuit. The presented algorithm utilizes qRAM
and other quantum circuits in the forward pass. Further, instead of the usual
practice of computing the loss classically, we calculate the loss using a
Swap-test circuit. The algorithm thus brings down the training cost of a
variational classifier to O(log N) from the usual O(N) when training on a dataset
of N samples. Although we discuss only binary classification in the paper, the
algorithm can be easily generalized to multi-class classification.
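A minimal classical sketch of the swap-test statistic underlying the loss computation: for normalized states |a> and |b>, measuring the ancilla of a swap test gives P(0) = 1/2 + |<a|b>|^2 / 2, so the squared overlap is recoverable from ancilla counts alone. This illustrates the principle the paper relies on, not the authors' circuit; the function names are ours.

```python
import numpy as np

def swap_test_p0(a: np.ndarray, b: np.ndarray) -> float:
    """Probability of measuring |0> on the swap test's ancilla:
    P(0) = 1/2 + |<a|b>|^2 / 2 for normalized states a, b."""
    return 0.5 + 0.5 * np.abs(np.vdot(a, b)) ** 2

def overlap_from_p0(p0: float) -> float:
    """Invert the ancilla statistic to the squared overlap |<a|b>|^2."""
    return max(0.0, 2.0 * p0 - 1.0)

# Example: overlap between a hypothetical predicted state and a target.
pred = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>
target = np.array([1.0, 0.0])              # |0>
print(overlap_from_p0(swap_test_p0(pred, target)))   # 0.5 = |<+|0>|^2
```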
Related papers
- Gaussian Elimination versus Greedy Methods for the Synthesis of Linear
Reversible Circuits [0.0]
Linear reversible circuits represent a subclass of reversible circuits with many applications in quantum computing.
We propose a new synthesis algorithm for any linear reversible operator, using an optimized version of the Gaussian elimination algorithm and a tuned LU factorization.
Overall, our algorithms improve the state-of-the-art methods for specific ranges of problem sizes.
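As a point of reference for the synthesis task, plain Gaussian elimination over GF(2) already yields a CNOT circuit for any invertible linear reversible operator, since each row addition is one CNOT. The sketch below shows that baseline; it is our own illustration, and the paper's optimized elimination and tuned LU factorization improve on it.

```python
import numpy as np

def synthesize_cnots(A: np.ndarray):
    """Reduce an invertible GF(2) matrix to the identity by row
    additions; each addition A[t] ^= A[c] corresponds to CNOT(c, t).
    Returns a gate list (control, target) whose circuit realizes A."""
    A = A.copy() % 2
    n = A.shape[0]
    gates = []
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])  # find pivot row
        if pivot != col:                       # XOR-swap of rows = 3 CNOTs
            for c, t in [(pivot, col), (col, pivot), (pivot, col)]:
                A[t] ^= A[c]; gates.append((c, t))
        for r in range(n):                     # clear the rest of the column
            if r != col and A[r, col]:
                A[r] ^= A[col]; gates.append((col, r))
    return gates[::-1]  # elimination inverted: same CNOTs, reverse order

print(synthesize_cnots(np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])))
```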
arXiv Detail & Related papers (2022-01-17T16:31:42Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
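A hedged sketch of the censoring mechanism: a runtime-oriented bandit stops any run at a cutoff and observes only the capped value, so losses stay bounded. The UCB-style rule and the exponential runtime model below are our illustrative choices, not the paper's exact adaptation.

```python
import numpy as np

def select_and_run(mean_runtimes, cutoff, horizon, seed=0):
    """UCB-style online algorithm selection with right-censored
    feedback: runs longer than `cutoff` are stopped, and only the
    bound `cutoff` is recorded, keeping losses in [0, cutoff]."""
    rng = np.random.default_rng(seed)
    k = len(mean_runtimes)
    pulls, totals = np.zeros(k), np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                          # try each solver once
        else:
            lcb = totals / pulls - np.sqrt(2 * np.log(t) / pulls)
            arm = int(np.argmin(lcb))            # minimize runtime loss
        observed = min(rng.exponential(mean_runtimes[arm]), cutoff)
        pulls[arm] += 1
        totals[arm] += observed
        yield arm, observed

for arm, loss in select_and_run([2.0, 0.5, 1.2], cutoff=3.0, horizon=6):
    print(arm, round(loss, 2))
```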
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Quantum Algorithms for Prediction Based on Ridge Regression [0.7612218105739107]
We propose a quantum algorithm based on the ridge regression model, which obtains the optimal fitting parameters.
The proposed algorithm has a wide range of applications and can be used as a subroutine of other quantum algorithms.
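For orientation, the optimal fitting parameters the quantum algorithm targets are the classical ridge solution in closed form; a few lines of numpy state it exactly (this is the standard model, not the quantum routine itself).

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 2.9])
print(ridge_fit(X, y, lam=0.1))
```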
arXiv Detail & Related papers (2021-04-27T11:03:52Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance on classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Qubit-efficient entanglement spectroscopy using qubit resets [0.0]
We develop qubit-efficient quantum algorithms for entanglement spectroscopy on NISQ devices.
Our algorithms use fewer qubits than any previous efficient algorithm while achieving similar performance in the presence of noise.
We also introduce the notion of effective circuit depth as a generalization of standard circuit depth suitable for circuits with qubit resets.
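Entanglement spectroscopy targets quantities such as Tr(rho_A^n) for a subsystem A of a pure state; the second Renyi entropy is the n = 2 case. The numpy sketch below computes that target exactly for a small state, purely to fix the definition; the paper's contribution is estimating it qubit-efficiently on hardware.

```python
import numpy as np

def renyi2(psi: np.ndarray, dim_a: int) -> float:
    """Second Renyi entropy -log Tr(rho_A^2) of subsystem A for a
    pure state psi on A (x) B."""
    psi = psi.reshape(dim_a, -1)               # split the A|B tensor factors
    rho_a = psi @ psi.conj().T                 # partial trace over B
    return -np.log(np.real(np.trace(rho_a @ rho_a)))

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # maximally entangled
print(renyi2(bell, dim_a=2))                  # log 2 ~ 0.693
```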
arXiv Detail & Related papers (2020-10-06T23:22:57Z) - Activation Relaxation: A Local Dynamical Approximation to
Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
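A minimal sketch of that equilibrium view for one hidden layer: clamp the top error to the loss gradient and let the hidden error unit follow a purely local relaxation; its fixed point is the backpropagation gradient. Network shapes and step sizes here are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=2)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

a1 = W1 @ x                        # forward pass
h1 = np.tanh(a1)
a2 = W2 @ h1                       # linear output layer

e2 = a2 - y                        # top error clamped to dL/da2 (squared loss)
e1 = np.zeros_like(a1)             # hidden error unit starts at rest

# Local dynamics: de1/dt = -e1 + f'(a1) * W2^T e2, integrated by Euler steps.
for _ in range(200):
    e1 += 0.1 * (-e1 + (1 - np.tanh(a1) ** 2) * (W2.T @ e2))

# The equilibrium point is exactly the backprop gradient dL/da1.
assert np.allclose(e1, (1 - np.tanh(a1) ** 2) * (W2.T @ e2))
print(np.outer(e1, x))             # the resulting weight gradient dL/dW1
```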
arXiv Detail & Related papers (2020-09-11T11:56:34Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
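One way to see the role of the entropy regularizer: it replaces the hard max in the LP relaxation with a smooth soft-max, tau * logsumexp(scores / tau), which tends to max(scores) as tau -> 0 and is what makes accelerated-gradient techniques applicable. A toy illustration of that smoothing (ours, not the paper's message-passing scheme):

```python
import numpy as np
from scipy.special import logsumexp

def smooth_max(scores, tau):
    """Entropy-regularized max: a smooth upper bound on max(scores)
    that tightens as the temperature tau decreases."""
    return tau * logsumexp(np.asarray(scores) / tau)

scores = [1.0, 3.0, 2.5]
for tau in [1.0, 0.1, 0.01]:
    print(tau, smooth_max(scores, tau))   # approaches max(scores) = 3.0
```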
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - Nearly Linear Row Sampling Algorithm for Quantile Regression [54.75919082407094]
We give a row sampling algorithm for the quantile loss function with sample complexity nearly linear in the dimensionality of the data.
Based upon our row sampling algorithm, we give the fastest known algorithm for quantile regression and a graph sparsification algorithm for balanced directed graphs.
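The loss in question is the pinball (quantile) loss; row sampling keeps a small weighted subset of rows on which this objective is approximately preserved. The sketch below states the loss and uses naive uniform subsampling purely for illustration; the paper's sampler chooses row probabilities far more carefully.

```python
import numpy as np

def quantile_loss(X, y, w, tau):
    """Pinball loss sum_i rho_tau(y_i - x_i^T w) for quantile level tau."""
    r = y - X @ w
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
X, w = rng.normal(size=(1000, 3)), np.array([1.0, -2.0, 0.5])
y = X @ w + rng.normal(size=1000)

rows = rng.choice(1000, size=100, replace=False)  # naive uniform sample
full = quantile_loss(X, y, w, tau=0.9)
approx = 10 * quantile_loss(X[rows], y[rows], w, tau=0.9)  # reweighted
print(full, approx)
```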
arXiv Detail & Related papers (2020-06-15T13:40:07Z) - A Study of Performance of Optimal Transport [16.847501106437534]
We show that network simplex and augmenting path based algorithms can consistently outperform numerical matrix-scaling based methods.
We present a new algorithm that improves upon the classical Kuhn-Munkres algorithm.
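For scale, the classical baseline being improved is the assignment problem; SciPy's linear_sum_assignment solver is a convenient reference implementation for small instances, shown here on toy data of our own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost of matching supply i to demand j (a small assignment instance).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(cost)      # optimal matching
print(list(zip(rows, cols)), cost[rows, cols].sum())
```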
arXiv Detail & Related papers (2020-05-03T20:37:05Z) - Quantum algorithms for hedging and the learning of Ising models [6.346764987071066]
A paradigmatic algorithm for online learning is the Hedge algorithm by Freund and Schapire.
This work presents quantum algorithms for such online learning in an oracular setting.
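The classical Hedge algorithm that these quantum algorithms accelerate fits in a few lines: keep a weight per expert, play the normalized weights, and multiply each weight by exp(-eta * loss) after every round. A standard sketch on synthetic losses:

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Hedge (multiplicative weights): loss_matrix[t, i] is expert i's
    loss in round t; returns total expected loss and final weights."""
    T, n = loss_matrix.shape
    w = np.ones(n)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                       # play the normalized weights
        total += p @ loss_matrix[t]
        w *= np.exp(-eta * loss_matrix[t])    # downweight lossy experts
    return total, w / w.sum()

rng = np.random.default_rng(0)
losses = rng.uniform(size=(100, 4))           # 100 rounds, 4 experts
print(hedge(losses, eta=0.3))
```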
arXiv Detail & Related papers (2020-02-14T12:48:53Z) - Accelerating Feedforward Computation via Parallel Nonlinear Equation
Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
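The fixed-point view is easy to demonstrate: treat each layer's output as an unknown, update all layers in parallel from the previous sweep's values (Jacobi), and after at most L sweeps the iterate coincides with the sequential forward pass. A small sketch with an illustrative tanh chain of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
Ws = [0.5 * rng.normal(size=(4, 4)) for _ in range(6)]   # 6-layer chain

def layer(l, h_prev):
    return np.tanh(Ws[l] @ h_prev)

h = x                              # sequential forward pass: 6 dependent steps
for l in range(6):
    h = layer(l, h)

# Jacobi iteration: every layer updates in parallel from stale values;
# layer l is exact after l + 1 sweeps, so 6 sweeps recover the exact output.
hs = [np.zeros(4) for _ in range(6)]
for _ in range(6):
    hs = [layer(l, x if l == 0 else hs[l - 1]) for l in range(6)]

print(np.allclose(h, hs[-1]))      # True: identical values, fewer serial steps
```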
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.