Deterministic Online Classification: Non-iteratively Reweighted
Recursive Least-Squares for Binary Class Rebalancing
- URL: http://arxiv.org/abs/2301.09230v1
- Date: Sun, 22 Jan 2023 23:44:18 GMT
- Title: Deterministic Online Classification: Non-iteratively Reweighted
Recursive Least-Squares for Binary Class Rebalancing
- Authors: Se-In Jang
- Abstract summary: Weighted Least-Squares (WLS) has been widely used as a deterministic batch solution with a specific weight design.
We introduce a new deterministic online classification algorithm of WLS with a constant time complexity for binary class rebalancing.
We demonstrate that our proposed online formulation exactly converges to its batch formulation and outperforms existing state-of-the-art online binary classification algorithms.
- Score: 1.370633147306388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deterministic solutions are becoming more critical for interpretability.
Weighted Least-Squares (WLS) has been widely used as a deterministic batch
solution with a specific weight design. In the online setting of WLS, exact
reweighting is necessary to converge to the batch solution. To satisfy this
requirement, the iteratively reweighted least-squares algorithm is typically
used, but its linearly growing time complexity is not attractive for online
learning. Given these high and growing computational costs, an efficient
online formulation of reweighted least-squares is desired. We introduce a new
deterministic online classification algorithm of WLS with constant time
complexity for binary class rebalancing. We demonstrate that the proposed
online formulation converges exactly to its batch formulation and empirically
outperforms existing state-of-the-art stochastic online binary classification
algorithms on real-world data sets.
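The abstract contrasts linear-time iteratively reweighted least-squares with the proposed constant-time online update. For orientation, below is a minimal Python sketch of weighted recursive least-squares with inverse class-frequency weights, using a Sherman-Morrison rank-one update. This is a generic illustration under stated assumptions, not the paper's algorithm: it fixes each sample's weight at arrival, whereas exact rebalancing must retroactively reweight all past samples, which is precisely what the paper's non-iterative reweighting achieves in constant time. The class name WeightedRLS and the weight formula n / (2 * n_y) are illustrative choices.

# Sketch: weighted recursive least-squares with inverse class-frequency
# weights for binary rebalancing. NOT the paper's exact-reweighting
# algorithm: each sample's weight is frozen at arrival here.
import numpy as np

class WeightedRLS:
    def __init__(self, dim, delta=1e3):
        self.theta = np.zeros(dim)       # linear model parameters
        self.P = delta * np.eye(dim)     # inverse of the weighted Gram matrix
        self.counts = {1: 0, -1: 0}      # per-class sample counts

    def update(self, x, y):
        """One update with 1-D feature vector x and label y in {-1, +1}."""
        self.counts[y] += 1
        n = self.counts[1] + self.counts[-1]
        w = n / (2.0 * self.counts[y])   # inverse class-frequency weight (assumed form)
        Px = self.P @ x
        gain = Px / (1.0 / w + x @ Px)   # Sherman-Morrison rank-one gain
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = self.P - np.outer(gain, Px)

    def predict(self, x):
        return 1 if x @ self.theta >= 0.0 else -1

The per-update cost is O(d^2), independent of the number of samples seen, which is the sense of "constant time complexity" the abstract emphasizes; because past samples keep their stale weights, this naive scheme only approximates the batch WLS solution that exact reweighting recovers.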
Related papers
- Model-based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds [59.875550175217874]
We show that a simple Model-based Reinforcement Learning scheme achieves strong regret and sample bounds in online and offline RL settings.
We highlight that our algorithms are simple, fairly standard, and indeed have been extensively studied in the RL literature.
arXiv Detail & Related papers (2024-08-16T19:52:53Z)
- Online Nonparametric Supervised Learning for Massive Data [0.0]
We develop a fast algorithm adapted to the real-time calculation of the nonparametric classifier in massive as well as streaming data frameworks.
The proposed methods are evaluated and compared to some commonly used machine learning algorithms for real-time fetal well-being monitoring.
arXiv Detail & Related papers (2024-05-29T20:04:23Z)
- Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
arXiv Detail & Related papers (2023-09-16T07:30:12Z)
- Learning-Assisted Algorithm Unrolling for Online Optimization with Budget Constraints [27.84415856657607]
We propose a new machine learning (ML)-assisted unrolling approach called LAAU (Learning-Assisted Algorithm Unrolling).
For efficient training via backpropagation, we derive gradients of the decision pipeline over time.
We also provide the average cost bounds for two cases when training data is available offline and collected online, respectively.
arXiv Detail & Related papers (2022-12-03T20:56:29Z)
- Oracle Inequalities for Model Selection in Offline Reinforcement Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal oracle inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z)
- Smoothed Online Learning is as Easy as Statistical Learning [77.00766067963195]
We provide the first oracle-efficient, no-regret algorithms in this setting.
We show that if a function class is learnable in the classical setting, then there is an oracle-efficient, no-regret algorithm for contextual bandits.
arXiv Detail & Related papers (2022-02-09T19:22:34Z)
- Online Deterministic Annealing for Classification and Clustering [0.0]
We introduce an online prototype-based learning algorithm for clustering and classification.
We show that the proposed algorithm constitutes a competitive-learning neural network, the learning rule of which is formulated as an online approximation algorithm.
arXiv Detail & Related papers (2021-02-11T04:04:21Z)
- Online Orthogonal Matching Pursuit [6.6389732792316005]
We present a novel online algorithm: Online Orthogonal Matching Pursuit (OOMP) for online support recovery in the random design setting of sparse linear regression.
Our procedure selects features sequentially, alternating between allocating samples to candidate features only as needed, and optimizing over the selected set of variables to estimate the regression coefficients (a generic sketch of the matching-pursuit loop appears after this list).
arXiv Detail & Related papers (2020-11-22T21:59:05Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, either requiring intolerable running time or consuming excessive computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previous instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors [75.58555462743585]
Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings.
We propose a principled nonparametric approach based on the Indian Buffet Process (IBP) prior, letting the data determine how much to expand the model complexity.
We demonstrate the effectiveness of our method on a number of continual learning benchmarks and analyze how weight factors are allocated and reused throughout the training.
arXiv Detail & Related papers (2020-04-21T15:20:19Z)
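As a concrete reference for the Online Orthogonal Matching Pursuit entry above, here is a minimal Python sketch of the classical batch matching-pursuit loop its summary alludes to: greedily pick the feature most correlated with the residual, then refit least squares on the selected support. The paper's online variant, which also decides how many samples to allocate to candidate features, is not reproduced; the function name omp is an illustrative assumption.

# Sketch of classical (batch) orthogonal matching pursuit: select the
# feature most correlated with the residual, then refit least squares on
# the selected support. The online, sample-allocating variant in the
# OOMP paper is more involved than this select-then-refit loop.
import numpy as np

def omp(X, y, k):
    """Return (support, coef) after greedily selecting k features."""
    n, d = X.shape
    support = []
    residual = y.copy()
    coef = np.zeros(d)
    for _ in range(k):
        # Feature most correlated with the current residual.
        scores = np.abs(X.T @ residual)
        scores[support] = -np.inf        # exclude already-chosen features
        support.append(int(np.argmax(scores)))
        # Re-estimate coefficients on the selected support (least squares).
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta
    coef[support] = beta
    return support, coef

Calling omp(X, y, k) on an n-by-d design matrix returns the selected support and the refit coefficients; the online version described in the summary instead allocates samples to candidate features only as needed.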