Improved Quantum Boosting
- URL: http://arxiv.org/abs/2009.08360v1
- Date: Thu, 17 Sep 2020 15:16:42 GMT
- Title: Improved Quantum Boosting
- Authors: Adam Izdebski and Ronald de Wolf
- Abstract summary: Arunachalam and Maity gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting.
In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm.
- Score: 1.0660480034605238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Boosting is a general method to convert a weak learner (which generates
hypotheses that are just slightly better than random) into a strong learner
(which generates hypotheses that are much better than random). Recently,
Arunachalam and Maity gave the first quantum improvement for boosting, by
combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for
approximate counting. Their booster is faster than classical boosting as a
function of the VC-dimension of the weak learner's hypothesis class, but worse
as a function of the quality of the weak learner. In this paper we give a
substantially faster and simpler quantum boosting algorithm, based on
Servedio's SmoothBoost algorithm.
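To make the weak-to-strong conversion concrete, here is a minimal classical AdaBoost-style sketch (a toy rendering of Freund and Schapire's scheme, not the paper's quantum algorithm). The decision-stump weak learner, the parameter names, and the data handling are illustrative assumptions; the quantum boosters discussed above speed up precisely the kind of per-round error estimation and reweighting done here.

```python
import numpy as np

def stump_weak_learner(X, y, w):
    """Illustrative weak learner: the single threshold split (decision stump)
    that minimizes weighted classification error under distribution w."""
    best = (np.inf, 0, 0.0, 1)                    # (error, feature, threshold, sign)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, T=20):
    """Boost the stump into a strong classifier (classical AdaBoost)."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                       # distribution over examples
    stumps, alphas = [], []
    for _ in range(T):
        err, j, t, s = stump_weak_learner(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # hypothesis weight
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)            # reweight: focus on mistakes
        w /= w.sum()
        stumps.append((j, t, s))
        alphas.append(alpha)
    def strong(Xq):                               # weighted majority vote
        votes = sum(a * s * np.where(Xq[:, j] <= t, 1, -1)
                    for a, (j, t, s) in zip(alphas, stumps))
        return np.sign(votes)
    return strong
```

SmoothBoost, which the paper builds on instead, mainly changes how the distribution `w` is updated: it keeps `w` smooth, i.e. no example's weight grows too far above uniform, and (roughly speaking) that smoothness is what makes the booster a good fit for quantum estimation subroutines.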
Related papers
- How to Boost Any Loss Function [63.573324901948716]
We show that any loss function can be optimized with boosting.
We also show that boosting can achieve a feat not yet known to be possible in the classical $0^{th}$-order setting.
arXiv Detail & Related papers (2024-07-02T14:08:23Z)
- Supervised Learning Guarantee for Quantum AdaBoost [7.473180902609473]
In the noisy intermediate-scale quantum (NISQ) era, variational quantum algorithms are constrained due to a limited number of qubits and the shallow depth of quantum circuits.
In this paper we theoretically establish and numerically verify a learning guarantee for quantum adaptive boosting (AdaBoost).
Our work indicates that in the current NISQ era, introducing appropriate ensemble methods is particularly valuable in improving the performance of quantum machine learning algorithms.
arXiv Detail & Related papers (2024-02-04T07:18:44Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- AdaBoost is not an Optimal Weak to Strong Learner [11.003568749905359]
We show that the sample complexity of AdaBoost, and of other classic variations thereof, is suboptimal by at least one logarithmic factor in the desired accuracy of the strong learner.
arXiv Detail & Related papers (2023-01-27T07:37:51Z)
- Online Agnostic Multiclass Boosting [20.22409095000365]
We give the first boosting algorithm for online agnostic multiclass classification.
Our reduction also enables the construction of algorithms for statistical agnostic, online realizable, and statistical realizable multiclass boosting.
arXiv Detail & Related papers (2022-05-30T13:59:55Z)
- Entanglement and coherence in Bernstein-Vazirani algorithm [58.720142291102135]
The Bernstein-Vazirani algorithm allows one to determine a bit string encoded in an oracle.
We analyze in detail the quantum resources in the Bernstein-Vazirani algorithm.
We show that in the absence of entanglement, the performance of the algorithm is directly related to the amount of quantum coherence in the initial state.
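As a concrete illustration of the entry above, here is a small NumPy statevector sketch of the Bernstein-Vazirani circuit in its phase-oracle form (Hadamards, oracle phase $(-1)^{a \cdot x}$, Hadamards again). The secret string `a` and the qubit count are arbitrary choices for the demo, not anything from the paper.

```python
import numpy as np

n = 5
a = np.array([1, 0, 1, 1, 0])             # hidden bit string encoded in the oracle

# Enumerate all n-bit inputs x (most significant bit first) and precompute
# the oracle's phase (-1)^(a.x mod 2) for each basis state.
xs = np.array([[(i >> k) & 1 for k in range(n - 1, -1, -1)] for i in range(2**n)])
phase = (-1.0) ** (xs @ a % 2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)                   # n-fold tensor power of the Hadamard

state = np.zeros(2**n)
state[0] = 1.0                            # start in |0...0>
state = Hn @ state                        # uniform superposition over all x
state = phase * state                     # one oracle query: |x> -> (-1)^(a.x)|x>
state = Hn @ state                        # interference concentrates on |a>

idx = int(np.argmax(np.abs(state)))
print(xs[idx], "with probability", round(abs(state[idx])**2, 6))  # recovers a
```

Classically, learning $f(x) = a \cdot x \bmod 2$ takes $n$ queries (one bit of $a$ per query), while this circuit takes one; the entry above asks which quantum resources, entanglement or coherence, account for that gap.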
arXiv Detail & Related papers (2022-05-26T20:32:36Z)
- Quantum Boosting using Domain-Partitioning Hypotheses [0.9264464791978363]
Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework.
We show that Q-RealBoost provides a speedup over Q-AdaBoost in terms of both the bias of the weak learner and the time taken by the weak learner to learn the target concept class.
arXiv Detail & Related papers (2021-10-25T10:46:13Z)
- Boosting for Online Convex Optimization [64.15578413206715]
We consider the decision-making framework of online convex optimization with a large number of experts.
We define a weak learning algorithm as a mechanism that guarantees approximate regret against a base class of experts.
We give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.
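As a toy illustration of competing with the convex hull of a base class (a generic multiplicative-weights sketch, not the paper's algorithm): for linear losses the best point of the convex hull coincides with the best single expert, so Hedge's $O(\sqrt{T \log N})$ regret already covers the hull. The loss matrix and parameters below are synthetic assumptions.

```python
import numpy as np

def hedge(losses, eta):
    """Multiplicative weights (Hedge) over N experts, given a T x N loss matrix."""
    T, N = losses.shape
    w = np.full(N, 1.0 / N)                # start from the uniform mixture
    total = 0.0
    for t in range(T):
        total += w @ losses[t]             # expected loss of the mixture played
        w *= np.exp(-eta * losses[t])      # downweight costly experts
        w /= w.sum()                       # renormalize (also avoids underflow)
    return total

rng = np.random.default_rng(0)
T, N = 10_000, 32
losses = rng.uniform(size=(T, N))          # per-round expert losses in [0, 1]
eta = np.sqrt(8 * np.log(N) / T)           # standard tuned learning rate
regret = hedge(losses, eta) - losses.sum(axis=0).min()
print("regret:", regret, "  bound ~", np.sqrt(T * np.log(N) / 2))
```

The paper's contribution is the harder version of this picture: the experts are only weak learners with approximate regret guarantees, and the losses are general convex functions rather than linear ones.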
arXiv Detail & Related papers (2021-02-18T12:30:49Z)
- Quantum Inspired Adaptive Boosting [0.0]
We show that the quantum ensemble method does not have an advantage over classical algorithms.
We propose methods inspired by combining the quantum ensemble method with adaptive boosting.
The algorithms were tested and found to be comparable to the AdaBoost algorithm on publicly available data sets.
arXiv Detail & Related papers (2021-02-01T16:33:14Z)
- Online Agnostic Boosting via Regret Minimization [47.19178618537368]
Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.
We provide the first agnostic online boosting algorithm: given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret.
arXiv Detail & Related papers (2020-03-02T19:21:25Z)
- On the Dual Formulation of Boosting Algorithms [92.74617630106559]
We show that the Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems.
By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution; a schematic form of this duality is sketched after this list.
arXiv Detail & Related papers (2009-01-23T02:14:42Z)
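To make the last entry's duality concrete, here is a schematic version in our own notation (hedged; see the paper for the exact generalized-hinge-loss statement): the primal minimizes a margin-based loss over hypothesis weights, and the Lagrange dual maximizes the entropy of a distribution over the training examples subject to every weak hypothesis having small edge.

```latex
% Primal (schematic): AdaBoost as exponential-loss minimization
% over nonnegative hypothesis weights \alpha_j.
\min_{\alpha \ge 0} \;\; \sum_{i=1}^{m} \exp\Big( -y_i \sum_{j} \alpha_j h_j(x_i) \Big)

% Lagrange dual (schematic): entropy maximization over a distribution
% u on the m training examples, with an edge constraint per hypothesis.
\max_{u \in \Delta_m} \; -\sum_{i=1}^{m} u_i \log u_i
\qquad \text{s.t.} \quad \sum_{i=1}^{m} u_i \, y_i \, h_j(x_i) \le \varepsilon
\quad \text{for all } j
```

Maintaining a near-maximum-entropy $u$ under these edge constraints is one way to read the "better margin distribution" claim above.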
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.