Efficient Online Large-Margin Classification via Dual Certificates
- URL: http://arxiv.org/abs/2509.19670v1
- Date: Wed, 24 Sep 2025 01:07:19 GMT
- Title: Efficient Online Large-Margin Classification via Dual Certificates
- Authors: Nam Ho-Nguyen, Fatma Kılınç-Karzan, Ellie Nguyen, Lingqing Shen,
- Abstract summary: We study the offline maximum margin problem through its dual formulation. We use the resulting geometric insights to design a principled and efficient algorithm for the online setting.
- Score: 0.2099922236065961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online classification is a central problem in optimization, statistical learning and data science. Classical algorithms such as the perceptron offer efficient updates and finite mistake guarantees on linearly separable data, but they do not exploit the underlying geometric structure of the classification problem. We study the offline maximum margin problem through its dual formulation and use the resulting geometric insights to design a principled and efficient algorithm for the online setting. A key feature of our method is its translation invariance, inherited from the offline formulation, which plays a central role in its performance analysis. Our theoretical analysis yields improved mistake and margin bounds that depend only on translation-invariant quantities, offering stronger guarantees than existing algorithms under the same assumptions in favorable settings. In particular, we identify a parameter regime where our algorithm makes at most two mistakes per sequence, whereas the perceptron can be forced to make arbitrarily many mistakes. Our numerical study on real data further demonstrates that our method matches the computational efficiency of existing online algorithms, while significantly outperforming them in accuracy.
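As context for the mistake-bound discussion in the abstract, the sketch below shows the classical perceptron baseline on a toy separable stream. It is illustrative only: the paper's dual-certificate algorithm is not reproduced here, and the data is made up.

```python
import numpy as np

def perceptron(stream):
    """Classical online perceptron: update only on mistakes.

    On data linearly separable through the origin with margin gamma and
    norm bound R, Novikoff's theorem bounds the number of mistakes by
    (R / gamma)^2.  Both R and gamma change when the data is translated,
    which is the sensitivity the paper's translation-invariant analysis avoids.
    """
    w, mistakes = None, 0
    for x, y in stream:                     # y in {-1, +1}
        if w is None:
            w = np.zeros_like(x, dtype=float)
        if y * np.dot(w, x) <= 0:           # mistake: additive update
            w = w + y * x
            mistakes += 1
    return w, mistakes

# Toy separable stream (illustrative, not the paper's experimental data).
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])
X = rng.normal(size=(200, 2))
y = np.where(X @ w_star >= 0, 1.0, -1.0)
w_hat, m = perceptron(zip(X, y))
print("mistakes:", m)
```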
Related papers
- Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming [55.848340925419286]
We study online statistical inference for the solutions of quadratic optimization problems with equality and inequality constraints. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. We show that our method attains global almost sure convergence and that its moving-average iterates exhibit local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of Hájek and Le Cam.
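To illustrate the kind of step computation described above, here is a generic equality-constrained SQP step: a quadratic model of the objective with a linearized constraint, solved through its KKT system. This is a textbook sketch with made-up problem data, not the paper's stochastic method.

```python
import numpy as np

def sqp_step(grad_f, hess_f, c, jac_c):
    """One equality-constrained SQP step at the current iterate.

    Solves   min_d  g^T d + 0.5 d^T H d   s.t.  J d + c = 0
    via its KKT system; returns the step d and the multiplier estimate.
    """
    g, H = grad_f, hess_f
    J = np.atleast_2d(jac_c)
    n, m = H.shape[0], J.shape[0]
    kkt = np.block([[H, J.T],
                    [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g, np.atleast_1d(c)])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]

# Toy example: min 0.5*||x||^2  s.t.  x0 + x1 - 1 = 0, taken at x = (0, 0).
x = np.zeros(2)
d, lam = sqp_step(grad_f=x, hess_f=np.eye(2),
                  c=x.sum() - 1.0, jac_c=np.ones((1, 2)))
print("step:", d, "multiplier:", lam)   # step lands at the minimizer (0.5, 0.5)
```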
arXiv Detail & Related papers (2025-11-27T06:16:17Z) - Online-BLS: An Accurate and Efficient Online Broad Learning System for Data Stream Classification [52.251569042852815]
We introduce an online broad learning system framework with closed-form solutions for each online update. We design an effective weight estimation algorithm and an efficient online updating strategy. Our framework is naturally extended to data stream scenarios with concept drift and exceeds state-of-the-art baselines.
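As a rough illustration of what a closed-form online update looks like, the sketch below runs generic recursive least squares with a Sherman-Morrison rank-one inverse update. It is not the authors' Online-BLS update, only an analogue of the closed-form idea.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic closed-form online update for ridge-regression weights.

    Each new sample updates the weights exactly (no gradient steps),
    using the Sherman-Morrison rank-one inverse update.
    """
    def __init__(self, dim, reg=1.0):
        self.P = np.eye(dim) / reg      # inverse of the regularized Gram matrix
        self.w = np.zeros(dim)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector
        self.w = self.w + k * (y - x @ self.w)
        self.P = self.P - np.outer(k, Px)

# Stream of samples from a noisy linear model (illustrative only).
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])
model = RecursiveLeastSquares(dim=3)
for _ in range(500):
    x = rng.normal(size=3)
    model.update(x, x @ w_true + 0.01 * rng.normal())
print("estimated weights:", np.round(model.w, 3))
```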
arXiv Detail & Related papers (2025-01-28T13:21:59Z) - A General Online Algorithm for Optimizing Complex Performance Metrics [5.726378955570775]
We introduce and analyze a general online algorithm that can be used in a straightforward way with a variety of complex performance metrics in binary, multi-class, and multi-label classification problems.
The algorithm's update and prediction rules are appealingly simple and computationally efficient without the need to store any past data.
arXiv Detail & Related papers (2024-06-20T21:24:47Z) - Online Nonparametric Supervised Learning for Massive Data [0.0]
We develop a fast algorithm adapted to the real-time calculation of the nonparametric classifier in massive as well as streaming data frameworks.
The proposed methods are evaluated and compared to some commonly used machine learning algorithms for real-time fetal well-being monitoring.
arXiv Detail & Related papers (2024-05-29T20:04:23Z) - Efficient Methods for Non-stationary Online Learning [63.268670895111654]
We present efficient methods for optimizing dynamic regret and adaptive regret. The proposed algorithms require only one gradient query and one function evaluation at each round. We also study an even stronger measure, namely "interval dynamic regret", and reduce the number of projections per round from $O(\log^2 T)$ to $1$.
arXiv Detail & Related papers (2023-09-16T07:30:12Z) - Interpolation-based Contrastive Learning for Few-Label Semi-Supervised
Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proved to be an effective technique to construct powerful models with limited labels.
Regularization-based methods which force the perturbed samples to have similar predictions with the original ones have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
arXiv Detail & Related papers (2022-02-24T06:00:05Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
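To make the pairwise setting concrete, the sketch below runs online gradient descent on a pairwise hinge loss, pairing each arriving example with a small buffer of earlier opposite-label examples. It is an illustrative toy, not the specific algorithms proposed in the paper.

```python
import numpy as np

def pairwise_ogd(stream, dim, lr=0.1, buffer_size=50):
    """Online gradient descent for a pairwise hinge loss.

    The loss depends on a *pair* of instances: for a positive x_p and a
    negative x_n we penalize max(0, 1 - w^T (x_p - x_n)), an AUC-style
    surrogate that rewards scoring positives above negatives.
    """
    w = np.zeros(dim)
    buffer = []                              # recent (x, y) pairs
    for x, y in stream:
        for xb, yb in buffer:
            if yb == y:
                continue
            xp, xn = (x, xb) if y > 0 else (xb, x)
            if 1.0 - w @ (xp - xn) > 0.0:    # hinge active: descend on its subgradient
                w += lr * (xp - xn)
        buffer.append((x, y))
        buffer = buffer[-buffer_size:]
    return w

# Illustrative stream: positives and negatives with shifted means.
rng = np.random.default_rng(2)
def stream(n=500):
    for _ in range(n):
        y = 1 if rng.random() < 0.5 else -1
        yield rng.normal(size=3) + y * np.array([1.0, 0.5, 0.0]), y

w = pairwise_ogd(stream(), dim=3)
print("learned scorer:", np.round(w, 3))
```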
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Efficient First-Order Contextual Bandits: Prediction, Allocation, and
Triangular Discrimination [82.52105963476703]
A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise.
First-order guarantees are relatively well understood in statistical and online learning.
We show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees.
arXiv Detail & Related papers (2021-07-05T19:20:34Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Online Robust and Adaptive Learning from Data Streams [22.319483572757097]
In online learning, it is necessary to be robust to outliers and to adapt quickly to changes in the underlying data-generating mechanism.
In this paper, we refer to the former attribute of online learning algorithms as robustness and to the latter as adaptivity.
We propose a novel stochastic approximation-based robustness-adaptivity algorithm (SRA) to evaluate the tradeoff.
arXiv Detail & Related papers (2020-07-23T17:49:04Z) - Online Passive-Aggressive Total-Error-Rate Minimization [1.370633147306388]
We provide a new online learning algorithm which utilizes online passive-aggressive learning (PA) and total-error-rate minimization (TER) for binary classification.
Experimental results demonstrate that the proposed PATER algorithm achieves better performance in terms of efficiency and effectiveness than existing state-of-the-art online learning algorithms on real-world data sets.
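For reference, the classical passive-aggressive (PA-I) update that such methods build on is sketched below: stay passive when the margin requirement is met, otherwise take the smallest weight correction that restores it, capped by an aggressiveness parameter C. The total-error-rate component of PATER is not reproduced here.

```python
import numpy as np

def pa1_update(w, x, y, C=1.0):
    """One PA-I step (Crammer et al.): passive if the hinge loss is zero,
    otherwise the minimal-norm correction with step size capped at C."""
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss == 0.0:
        return w                              # passive: margin already >= 1
    tau = min(C, loss / np.dot(x, x))         # capped step size
    return w + tau * y * x                    # aggressive: enforce the margin

# Toy usage on a small stream (illustrative only).
rng = np.random.default_rng(3)
w = np.zeros(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x[0] - x[1] > 0 else -1.0
    w = pa1_update(w, x, y)
print("weights:", np.round(w, 3))
```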
arXiv Detail & Related papers (2020-02-05T13:10:01Z)