Online Agnostic Multiclass Boosting
- URL: http://arxiv.org/abs/2205.15113v1
- Date: Mon, 30 May 2022 13:59:55 GMT
- Title: Online Agnostic Multiclass Boosting
- Authors: Vinod Raman, Ambuj Tewari
- Abstract summary: We give the first boosting algorithm for online agnostic multiclass classification.
Our reduction also enables the construction of algorithms for statistical agnostic, online realizable, and statistical realizable multiclass boosting.
- Score: 20.22409095000365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Boosting is a fundamental approach in machine learning that enjoys both
strong theoretical and practical guarantees. At a high level, boosting
algorithms cleverly aggregate weak learners to generate predictions with
arbitrarily high accuracy. In this way, boosting algorithms convert weak
learners into strong ones. Recently, Brukhim et al. extended boosting to the
online agnostic binary classification setting. A key ingredient in their
approach is a clean and simple reduction to online convex optimization, one
that efficiently converts an arbitrary online convex optimizer to an agnostic
online booster. In this work, we extend this reduction to multiclass problems
and give the first boosting algorithm for online agnostic multiclass
classification. Our reduction also enables the construction of algorithms for
statistical agnostic, online realizable, and statistical realizable multiclass
boosting.
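At a high level, the reduction can be pictured as an online learner running over the weak learners' outputs: the booster keeps a weight per weak learner, predicts by a weighted vote, and feeds each round's losses into an online (convex) update of the weights. The sketch below is a minimal illustration of that flavor using a multiplicative-weights (Hedge) update; it is not the paper's algorithm, and the WeakLearner interface (predict/update) is a hypothetical stand-in.

```python
import numpy as np

class HedgeBooster:
    """Hedged sketch: combine online weak learners via Hedge weights.

    Each weak learner is assumed to expose predict(x) -> label in
    {0, ..., k-1} and update(x, y); both methods are hypothetical.
    """

    def __init__(self, weak_learners, n_classes, eta=0.1):
        self.wls = weak_learners
        self.k = n_classes
        self.eta = eta
        self.w = np.ones(len(weak_learners)) / len(weak_learners)

    def predict(self, x):
        # Weighted plurality vote over the weak learners' label predictions.
        self._votes = [wl.predict(x) for wl in self.wls]
        scores = np.zeros(self.k)
        for weight, label in zip(self.w, self._votes):
            scores[label] += weight
        return int(np.argmax(scores))

    def update(self, x, y):
        # Assumes predict(x) was called on this round's example first.
        # Linear loss per weak learner: 1 if it erred on this round.
        loss = np.array([float(v != y) for v in self._votes])
        self.w *= np.exp(-self.eta * loss)  # multiplicative-weights step
        self.w /= self.w.sum()              # renormalize onto the simplex
        for wl in self.wls:
            wl.update(x, y)
```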
Related papers
- How to Boost Any Loss Function [63.573324901948716]
We show that any loss function can be optimized with boosting.
We also show that boosting can achieve a feat not yet known to be possible in the classical $0^{\text{th}}$-order setting.
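One way to picture loss-agnostic boosting is a generic zeroth-order gradient-boosting sketch: when only loss values are available, per-example gradients can be estimated by finite differences and each weak learner fit to the negative estimate. This illustrates the idea under stated assumptions and is not the paper's method; loss is assumed to return per-example losses given scores, and fit_stump is a hypothetical weak-regressor routine.

```python
import numpy as np

def zeroth_order_boost(X, y, loss, fit_stump, rounds=100, lr=0.1, h=1e-4):
    # loss(F, y) -> per-example losses (assumed vectorized); fit_stump(X, r)
    # -> regressor with .predict(X). Both interfaces are hypothetical.
    F = np.zeros(len(y))                     # current ensemble scores
    ensemble = []
    for _ in range(rounds):
        # Central finite difference: zeroth-order gradient estimate.
        grad = (loss(F + h, y) - loss(F - h, y)) / (2 * h)
        stump = fit_stump(X, -grad)          # fit to pseudo-residuals
        ensemble.append(stump)
        F += lr * stump.predict(X)           # additive update
    return ensemble
```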
arXiv Detail & Related papers (2024-07-02T14:08:23Z)
- Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
arXiv Detail & Related papers (2023-09-16T07:30:12Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm, that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
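For reference, the most classical multiclass weak-learning condition simply asks each hypothesis to beat random guessing over $k$ labels by an edge $\gamma$; this textbook condition (stated here for contrast, not as the paper's proposed criterion) reads

$$\Pr_{(x,y)\sim\mathcal{D}}\big[h(x)=y\big] \;\ge\; \frac{1}{k} + \gamma \qquad \text{for some } \gamma > 0.$$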
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- Quantum Boosting using Domain-Partitioning Hypotheses [0.9264464791978363]
Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework.
We show that Q-RealBoost provides a speedup over Q-AdaBoost in terms of both the bias of the weak learner and the time taken by the weak learner to learn the target concept class.
arXiv Detail & Related papers (2021-10-25T10:46:13Z)
- Boosting for Online Convex Optimization [64.15578413206715]
We consider the decision-making framework of online convex optimization with a large number of experts.
We define a weak learning algorithm as a mechanism that guarantees approximate regret against a base class of experts.
We give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.
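For intuition on competing with the convex hull, here is a standard exponentiated-gradient sketch over full experts: mix the experts' predictions with simplex weights and update the weights by EG, which gives regret against the best fixed mixture under convex losses. The paper's harder result achieves this with only weak learners; the predict(t) and grad interfaces below are hypothetical.

```python
import numpy as np

def play_against_hull(experts, losses, T, eta=0.1):
    # experts[i].predict(t) -> np.ndarray of shape (d,); losses[t].grad(p)
    # -> gradient of round t's convex loss at prediction p. Hypothetical.
    n = len(experts)
    w = np.ones(n) / n                                     # simplex weights
    plays = []
    for t in range(T):
        preds = np.stack([e.predict(t) for e in experts])  # shape (n, d)
        play = w @ preds                                   # convex combination
        plays.append(play)
        g = preds @ losses[t].grad(play)                   # chain rule w.r.t. w
        w *= np.exp(-eta * g)                              # exponentiated gradient
        w /= w.sum()
    return plays
```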
arXiv Detail & Related papers (2021-02-18T12:30:49Z)
- MP-Boost: Minipatch Boosting via Adaptive Feature and Observation Sampling [0.0]
MP-Boost is an algorithm loosely based on AdaBoost that learns by adaptively selecting small subsets of instances and features (minipatches).
We empirically demonstrate the interpretability, comparative accuracy, and computational time of our approach on a variety of binary classification tasks.
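Under simplifying assumptions, the minipatch idea can be sketched as follows: each round draws a small random subset of observations and features, fits a shallow tree on it, and tilts the observation-sampling probabilities toward points the ensemble still misclassifies. MP-Boost's actual adaptive scheme is more refined; labels here are assumed binary in {0, 1}.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def minipatch_boost(X, y, rounds=50, n_obs=64, n_feat=8):
    # Hedged simplification of MP-Boost, not the published algorithm.
    n, d = X.shape                       # requires n_obs <= n, n_feat <= d
    p = np.ones(n) / n                   # adaptive observation probabilities
    ensemble = []
    for _ in range(rounds):
        rows = np.random.choice(n, n_obs, replace=False, p=p)
        cols = np.random.choice(d, n_feat, replace=False)
        tree = DecisionTreeClassifier(max_depth=2)
        tree.fit(X[np.ix_(rows, cols)], y[rows])
        ensemble.append((tree, cols))
        # Signed vote of the ensemble so far; labels mapped to {-1, +1}.
        votes = sum(t.predict(X[:, c]) * 2 - 1 for t, c in ensemble)
        wrong = np.sign(votes) != (y * 2 - 1)
        p = np.where(wrong, 2.0, 1.0)    # upweight misclassified points
        p /= p.sum()
    return ensemble
```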
arXiv Detail & Related papers (2020-11-14T04:26:13Z)
- Improved Quantum Boosting [1.0660480034605238]
Arunachalam and Maity gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting.
In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm.
arXiv Detail & Related papers (2020-09-17T15:16:42Z)
- Online Agnostic Boosting via Regret Minimization [47.19178618537368]
Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.
We provide the first online agnostic boosting algorithm; that is, given a weak learner with only a marginally-better-than-trivial regret guarantee, our algorithm boosts it to a strong learner with sublinear regret.
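To make "marginally better than trivial" concrete: for binary labels $y_t \in \{\pm 1\}$, a $\gamma$-agnostic online weak learner is typically required to satisfy a guarantee of the form

$$\sum_{t=1}^{T} y_t\, h_t(x_t) \;\ge\; \gamma \cdot \max_{h \in \mathcal{H}} \sum_{t=1}^{T} y_t\, h(x_t) \;-\; R_W(T),$$

where $\gamma \in (0,1]$ is the advantage and $R_W(T) = o(T)$ is the weak learner's additive regret. This phrasing follows the online-boosting literature generally and is stated as an assumption, not a quotation of the paper; boosting amplifies such a guarantee into sublinear regret against $\mathcal{H}$.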
arXiv Detail & Related papers (2020-03-02T19:21:25Z)
- On the Dual Formulation of Boosting Algorithms [92.74617630106559]
We show that the Lagrange dual problems of AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems.
By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution.
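As a concrete instance of this duality, AdaBoost's Lagrange dual can be written, schematically and from the standard derivation rather than the paper's exact statement, as entropy maximization over example weights $w$ in the simplex $\Delta_m$, with an edge constraint for every weak hypothesis $h_j$:

$$\max_{w \in \Delta_m} \; -\sum_{i=1}^{m} w_i \log w_i \qquad \text{subject to} \qquad \sum_{i=1}^{m} w_i\, y_i\, h_j(x_i) \;\le\; \epsilon \quad \text{for all } j.$$

A maximizer keeps the example weights as uniform as possible while capping every hypothesis's edge, which is one reading of the margin-distribution claim above.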
arXiv Detail & Related papers (2009-01-23T02:14:42Z)