Locally-Adaptive Nonparametric Online Learning
- URL: http://arxiv.org/abs/2002.01882v2
- Date: Sun, 1 Nov 2020 14:37:18 GMT
- Title: Locally-Adaptive Nonparametric Online Learning
- Authors: Ilja Kuzborskij, Nicolò Cesa-Bianchi
- Abstract summary: We introduce efficient online algorithms that adapt to arbitrary data sequences.
We use a technique based on tree experts to efficiently compete against all such prunings.
Our technique delivers regret bounds that are significantly better than those proven using the previous approach.
- Score: 12.018422134251384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main strengths of online algorithms is their ability to adapt to
arbitrary data sequences. This is especially important in nonparametric
settings, where performance is measured against rich classes of comparator
functions that are able to fit complex environments. Although such hard
comparators and complex environments may exhibit local regularities, efficient
algorithms, which can provably take advantage of these local patterns, are
hardly known. We fill this gap by introducing efficient online algorithms
(based on a single versatile master algorithm) each adapting to one of the
following regularities: (i) local Lipschitzness of the competitor function,
(ii) local metric dimension of the instance sequence, (iii) local performance
of the predictor across different regions of the instance space. Extending
previous approaches, we design algorithms that dynamically grow hierarchical
$\epsilon$-nets on the instance space whose prunings correspond to different
"locality profiles" for the problem at hand. Using a technique based on tree
experts, we simultaneously and efficiently compete against all such prunings,
and prove regret bounds each scaling with a quantity associated with a
different type of local regularity. When competing against "simple" locality
profiles, our technique delivers regret bounds that are significantly better
than those proven using the previous approach. On the other hand, the time
dependence of our bounds is not worse than that obtained by ignoring any local
regularities.
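The greedy covering step behind the hierarchical $\epsilon$-nets described above can be sketched as follows. This is a minimal illustration of the standard online construction (a new instance becomes a center whenever no existing center is within $\epsilon$), not the paper's full hierarchical algorithm; the class and method names are hypothetical:

```python
import math

class EpsilonNet:
    """Greedy online epsilon-net: centers are at mutual distance > eps,
    and every seen instance lies within eps of some center."""

    def __init__(self, eps):
        self.eps = eps
        self.centers = []

    def assign(self, x):
        """Return the index of a center covering x, adding x as a
        new center if no existing center is within eps."""
        for i, c in enumerate(self.centers):
            if math.dist(x, c) <= self.eps:
                return i
        self.centers.append(x)
        return len(self.centers) - 1

net = EpsilonNet(eps=0.5)
for point in [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0)]:
    net.assign(point)
# (0.0, 0.0) and (1.0, 1.0) become centers; (0.1, 0.1) is covered by the first.
```

In the hierarchical variant suggested by the abstract, each center would recursively host a finer net (e.g. with radius $\epsilon/2$), and prunings of the resulting tree correspond to different locality profiles.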
Related papers
- Locally Adaptive One-Class Classifier Fusion with Dynamic $\ell_p$-Norm Constraints for Robust Anomaly Detection [17.93058599783703]
We introduce a framework that dynamically adjusts fusion weights based on local data characteristics.
Our method incorporates an interior-point optimization technique that significantly improves computational efficiency.
The framework's ability to adapt to local data patterns while maintaining computational efficiency makes it particularly valuable for real-time applications.
arXiv Detail & Related papers (2024-11-10T09:57:13Z) - Minimax Adaptive Boosting for Online Nonparametric Regression [10.138723409205497]
We introduce a parameter-free online gradient boosting (OGB) algorithm for adversarial online nonparametric regression.
We show that its application to chaining trees achieves minimax optimal regret when competing against Lipschitz functions.
arXiv Detail & Related papers (2024-10-04T12:30:03Z) - A General Online Algorithm for Optimizing Complex Performance Metrics [5.726378955570775]
We introduce and analyze a general online algorithm that can be used in a straightforward way with a variety of complex performance metrics in binary, multi-class, and multi-label classification problems.
The algorithm's update and prediction rules are appealingly simple and computationally efficient without the need to store any past data.
arXiv Detail & Related papers (2024-06-20T21:24:47Z) - Local Sample-weighted Multiple Kernel Clustering with Consensus Discriminative Graph [73.68184322526338]
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel- or graph-based clustering algorithms.
arXiv Detail & Related papers (2022-07-05T05:00:38Z) - Flipping the switch on local exploration: Genetic Algorithms with Reversals [0.0]
The authors show that gradient-free search techniques are suitable for finding optimal solutions in the discrete domain.
They also show that combining multiple local searches can improve search performance.
The proposed GA variants achieve the lowest average cost across all benchmarks, including the newly proposed problem, and the IC variant outperforms its constituents.
arXiv Detail & Related papers (2022-02-02T08:27:11Z) - Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z) - Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z) - Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization applicable to machine learning problems in deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z) - Multi-View Optimization of Local Feature Geometry [70.18863787469805]
We address the problem of refining the geometry of local image features from multiple views without known scene or camera geometry.
Our proposed method naturally complements the traditional feature extraction and matching paradigm.
We show that our method consistently improves the triangulation and camera localization performance for both hand-crafted and learned local features.
arXiv Detail & Related papers (2020-03-18T17:22:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.