Exponential Error Convergence in Data Classification with Optimized
Random Features: Acceleration by Quantum Machine Learning
- URL: http://arxiv.org/abs/2106.09028v1
- Date: Wed, 16 Jun 2021 18:00:00 GMT
- Title: Exponential Error Convergence in Data Classification with Optimized
Random Features: Acceleration by Quantum Machine Learning
- Authors: Hayata Yamasaki, Sho Sonoda
- Abstract summary: An algorithm for machine learning by quantum computer, quantum machine learning (QML), can exponentially speed up sampling of optimized random features.
We here construct a QML algorithm for a classification task accelerated by the optimized random features.
We prove that the QML algorithm for sampling optimized random features, combined with stochastic gradient descent (SGD), can achieve state-of-the-art exponential convergence speed.
- Score: 8.98526174345299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Random features are a central technique for scalable learning algorithms
based on kernel methods. A recent work has shown that an algorithm for machine
learning by quantum computer, quantum machine learning (QML), can exponentially
speed up sampling of optimized random features, even without imposing
restrictive assumptions on sparsity and low-rankness of matrices that had
limited applicability of conventional QML algorithms; this QML algorithm makes
it possible to significantly reduce and provably minimize the required number
of features for regression tasks. However, a major interest in the field of QML
is how widely the advantages of quantum computation can be exploited, not only
in the regression tasks. We here construct a QML algorithm for a classification
task accelerated by the optimized random features. We prove that the QML
algorithm for sampling optimized random features, combined with stochastic
gradient descent (SGD), can achieve state-of-the-art exponential convergence
speed of reducing classification error in a classification task under a
low-noise condition; at the same time, our algorithm with optimized random
features can take advantage of the significant reduction of the required number
of features so as to accelerate each SGD iteration and the evaluation of the
classifier obtained from our algorithm. These results reveal a promising
application of QML to significant acceleration of the leading classification
algorithm based on kernel methods, without compromising its applicability to a
practical class of data sets or its exponential error-convergence speed.
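For intuition, here is a minimal classical sketch of the pipeline the paper accelerates: random Fourier features approximate a Gaussian kernel, and a linear classifier over those features is trained by SGD on the hinge loss. Under a low-noise (hard-margin-type) condition, guarantees of the kind cited above typically bound the excess classification error by $C e^{-cT}$ after $T$ SGD iterations (our paraphrase of the literature, not the paper's exact statement). The toy data set, feature dimension M, bandwidth sigma, and step size below are illustrative assumptions; the paper's quantum algorithm replaces the standard feature draw with sampling from an optimized, data-dependent distribution.

```python
import numpy as np

# Minimal classical sketch of the random-features pipeline the paper
# accelerates: random Fourier features (Rahimi-Recht) approximating a
# Gaussian kernel, plus a hinge-loss linear classifier trained by SGD.
# The quantum contribution replaces the feature draw below with sampling
# from an optimized, data-dependent distribution (not shown here).

rng = np.random.default_rng(0)

def rff_features(X, W, b):
    """Map inputs to M random Fourier features: sqrt(2/M) * cos(X W^T + b)."""
    M = W.shape[0]
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)

# Toy data: two Gaussian blobs (hypothetical stand-in for a real data set).
n, d, M = 400, 2, 200
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

# Standard (non-optimized) RFF draw for a Gaussian kernel of bandwidth sigma.
sigma = 1.0
W = rng.normal(0.0, 1.0 / sigma, (M, d))
b = rng.uniform(0.0, 2.0 * np.pi, M)
Phi = rff_features(X, W, b)

# SGD on the L2-regularized hinge loss over the feature weights.
w, lam, lr = np.zeros(M), 1e-3, 0.1
for epoch in range(20):
    for i in rng.permutation(n):
        margin = y[i] * (Phi[i] @ w)
        grad = lam * w - (y[i] * Phi[i] if margin < 1.0 else 0.0)
        w -= lr * grad

print("training accuracy:", np.mean(np.sign(Phi @ w) == y))
```

Swapping the draw of W for optimized features is exactly where the quantum sampler enters; everything downstream (the SGD loop and the evaluation of the learned classifier) then benefits from the provably smaller number of features M.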
Related papers
- Multiscale Quantum Approximate Optimization Algorithm [0.0]
The quantum approximate optimization algorithm (QAOA) is one of the canonical algorithms designed to find approximate solutions to optimization problems.
We propose a new version of QAOA that combines the standard algorithm with real-space renormalization-group transformations.
arXiv Detail & Related papers (2023-12-11T07:47:16Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes federated algorithms for conditional stochastic optimization in distributed learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Sequential minimum optimization algorithm with small sample size estimators [0.06445605125467573]
Sequential minimum optimization is a machine-learning global-search training algorithm.
We apply it to photonic circuits, where an additional challenge appears: the low frequency of coincidence events lowers the speed of the algorithm.
arXiv Detail & Related papers (2023-03-02T06:02:46Z)
- Gradient-Free optimization algorithm for single-qubit quantum classifier [0.3314882635954752]
A gradient-free optimization algorithm is proposed to overcome the effects of barren plateaus caused by quantum devices.
The proposed algorithm is demonstrated on a classification task and compared with optimization using Adam.
The proposed gradient-free algorithm reaches high accuracy faster than Adam (a toy sketch of this setting follows the entry).
arXiv Detail & Related papers (2022-05-10T08:45:03Z)
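As a toy illustration of the setting in the preceding entry (not the paper's specific algorithm), the sketch below trains a one-parameter single-qubit classifier, with the data encoded as a rotation angle and the prediction read off the sign of the Z expectation value, using a simple gradient-free coordinate search; the encoding, loss, search rule, and data are all assumptions made for illustration.

```python
import numpy as np

# Toy single-qubit classifier trained with a gradient-free coordinate
# search. Illustrative sketch of the setting only, NOT the paper's
# proposed algorithm; encoding, loss, and search rule are assumptions.

rng = np.random.default_rng(1)

def predict(x, theta):
    """Encode scalar x as an RY rotation and apply a trainable RY(theta);
    for |0>, the Z expectation of RY(theta) RY(x)|0> is cos(x + theta)."""
    return np.cos(x + theta)

def loss(theta, X, y):
    """Mean squared error between <Z> and the +/-1 labels."""
    return np.mean((predict(X, theta) - y) ** 2)

# Toy 1-D data: labels set by a hidden threshold angle at x = 1.0.
X = rng.uniform(0.0, np.pi, 60)
y = np.where(X < 1.0, 1.0, -1.0)

# Gradient-free search: try a perturbed angle, keep it if the loss drops.
theta, step = 0.0, 0.5
for _ in range(300):
    cand = theta + step * rng.choice([-1.0, 1.0])
    if loss(cand, X, y) < loss(theta, X, y):
        theta = cand
    else:
        step *= 0.95  # shrink the step when no improvement is found

acc = np.mean(np.sign(predict(X, theta)) == y)
print(f"theta = {theta:.3f}, training accuracy = {acc:.3f}")
```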
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration, yet guarantees fast convergence rates on par with more sophisticated variance-reduction techniques (a minimal sketch of the estimator follows this entry).
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
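Below is a hedged sketch of the core idea named in the preceding entry: a conditional gradient (Frank-Wolfe) step driven by a SAG-style averaged gradient estimator that touches one sample per iteration. It is written for a smooth least-squares objective over the probability simplex and drops the non-smooth term the paper additionally handles; the problem instance and step-size rule are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one-sample conditional gradient (Frank-Wolfe) with a
# SAG-style gradient estimator, for min_x (1/2n)||Ax - b||^2 over the
# probability simplex. The paper's method also covers a non-smooth term;
# this simplified smooth instance is an assumption for illustration.

rng = np.random.default_rng(2)
n, d = 200, 20
A = rng.normal(size=(n, d))
x_true = np.abs(rng.normal(size=d))
x_true /= x_true.sum()                  # ground truth on the simplex
b = A @ x_true

x = np.ones(d) / d                      # start at the simplex center
grad_table = np.zeros((n, d))           # per-sample gradient memory (SAG)
g_avg = np.zeros(d)                     # running average of stored gradients

for t in range(1, 5001):
    i = rng.integers(n)                 # ONE sample per iteration
    g_i = A[i] * (A[i] @ x - b[i])      # gradient of (a_i^T x - b_i)^2 / 2
    g_avg += (g_i - grad_table[i]) / n  # SAG update of the running average
    grad_table[i] = g_i

    # Linear minimization oracle over the simplex: a single vertex.
    s = np.zeros(d)
    s[np.argmin(g_avg)] = 1.0
    gamma = 2.0 / (t + 2.0)             # classical Frank-Wolfe step size
    x = (1.0 - gamma) * x + gamma * s

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```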
- Quantum Approximate Optimization Algorithm Based Maximum Likelihood Detection [80.28858481461418]
Recent advances in quantum technologies pave the way for noisy intermediate-scale quantum (NISQ) devices.
arXiv Detail & Related papers (2021-07-11T10:56:24Z)
- Learning Neural Network Quantum States with the Linear Method [0.0]
We show that the linear method (LM) can be used successfully for the optimization of complex-valued neural network quantum states.
We compare the LM to the state-of-the-art stochastic reconfiguration (SR) algorithm and find that the LM requires up to an order of magnitude fewer iterations for convergence.
arXiv Detail & Related papers (2021-04-22T12:18:33Z)
- Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions [8.08640000394814]
We develop a quantum algorithm for sampling from an optimized distribution over random features.
Our algorithm achieves an exponential speedup in the data dimension $D$ compared to any known classical algorithm for this sampling task.
arXiv Detail & Related papers (2020-04-22T18:00:00Z)