Nonparametric adaptive active learning under local smoothness condition
- URL: http://arxiv.org/abs/2102.11077v1
- Date: Mon, 22 Feb 2021 14:47:21 GMT
- Title: Nonparametric adaptive active learning under local smoothness condition
- Authors: Boris Ndjia Njike, Xavier Siebert
- Abstract summary: This paper addresses the problem of adaptive active learning in a nonparametric setting with minimal assumptions.
We present a novel algorithm that is valid under more general assumptions than the previously known algorithms.
Our algorithm achieves a minimax rate of convergence, and therefore performs almost as well as the best known non-adaptive algorithms.
- Score: 0.76146285961466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active learning is typically used to label data when the labeling process is
expensive. Several active learning algorithms have been theoretically proven to
perform better than their passive counterparts. However, these algorithms rely
on assumptions that themselves involve specific parameters. This
paper addresses the problem of adaptive active learning in a nonparametric
setting with minimal assumptions. We present a novel algorithm that is valid
under more general assumptions than the previously known algorithms, and that
can moreover adapt to the parameters used in these assumptions. This allows us
to work with a larger class of distributions, thereby avoiding the exclusion of
important densities such as Gaussians. Our algorithm achieves a minimax rate of
convergence, and therefore performs almost as well as the best known
non-adaptive algorithms.
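The paper's algorithm itself is not reproduced here, but the setting can be made concrete with a minimal pool-based active-learning loop around a nonparametric (k-NN) learner. The sketch below uses plain uncertainty sampling on synthetic data and assumes scikit-learn is available; it illustrates the problem setup only, not the paper's adaptive procedure.

```python
# Illustrative only: a generic pool-based active-learning loop around a
# nonparametric (k-NN) learner. This is NOT the paper's adaptive algorithm;
# it just shows the setting: labels are bought one at a time where the
# current model is least certain.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical unlabeled pool: 2-D Gaussian features, noisy threshold labels.
X_pool = rng.normal(size=(1000, 2))
y_pool = (X_pool[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

budget = 50                                  # total labels we may request
labeled = [int(np.argmax(y_pool == 0)),      # seed set containing both classes
           int(np.argmax(y_pool == 1))]
labeled += [int(i) for i in rng.choice(len(X_pool), size=3, replace=False)]

while len(labeled) < budget:
    knn = KNeighborsClassifier(n_neighbors=3).fit(X_pool[labeled], y_pool[labeled])
    proba = knn.predict_proba(X_pool)[:, 1]
    margin = np.abs(proba - 0.5)             # small margin = high uncertainty
    margin[labeled] = np.inf                 # never re-query a labeled point
    labeled.append(int(np.argmin(margin)))   # buy the most informative label

final = KNeighborsClassifier(n_neighbors=3).fit(X_pool[labeled], y_pool[labeled])
print("accuracy on the full pool:", final.score(X_pool, y_pool))
```

Concentrating queries near the decision boundary is, informally, what yields the improved label complexity over passive learning that the abstract refers to; the paper's contribution is doing this adaptively, without knowing the smoothness parameters in advance.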
Related papers
- Unsupervised Learning of Initialization in Deep Neural Networks via
Maximum Mean Discrepancy [74.34895342081407]
We propose an unsupervised algorithm to find a good initialization for input data.
We first notice that each parameter configuration in the parameter space corresponds to one particular downstream task of d-way classification.
We then conjecture that the success of learning is directly related to how diverse downstream tasks are in the vicinity of the initial parameters.
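To make the quantity in this entry's title concrete, here is a generic (biased, V-statistic) estimator of squared MMD with an RBF kernel. This is a standard textbook sketch, not the paper's initialization procedure; the bandwidth and the test data are arbitrary illustrative choices.

```python
# Generic sketch of a squared-MMD estimator with an RBF kernel (biased,
# V-statistic form). Not the paper's procedure; shown only to make the
# "Maximum Mean Discrepancy" quantity concrete.
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of MMD^2 between samples X (n,d) and Y (m,d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(loc=2.0, size=(200, 2)))
print(f"same distribution: {same:.4f}, shifted: {diff:.4f}")  # diff >> same
```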
arXiv Detail & Related papers (2023-02-08T23:23:28Z)
- Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive minimax optimization algorithm (i.e., AdaFGDA) to solve these minimax problems.
It builds on momentum-based variance-reduced and local-SGD techniques, and it flexibly incorporates various adaptive learning rates.
arXiv Detail & Related papers (2022-11-14T12:32:18Z)
- Algorithms that Approximate Data Removal: New Results and Limitations [2.6905021039717987]
We study the problem of deleting user data from machine learning models trained using empirical risk minimization.
We develop an online unlearning algorithm that is both computationally and memory efficient.
arXiv Detail & Related papers (2022-09-25T17:20:33Z)
- Efficient Active Learning with Abstention [12.315392649501101]
We develop the first computationally efficient active learning algorithm with abstention.
A key feature of the algorithm is that it avoids the undesirable "noise-seeking" behavior often seen in active learning.
arXiv Detail & Related papers (2022-03-31T18:34:57Z)
- Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
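For context, the non-learnable baseline that this work generalizes, split conformal prediction, can be sketched in a few lines. The regressor, data, and miscoverage level below are arbitrary illustrative assumptions, not the paper's learnable construction.

```python
# Sketch of plain split conformal prediction for regression: calibrate a
# score quantile on held-out data, then emit intervals with ~(1 - alpha)
# marginal coverage. Shown only as background for the entry above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=600)

# Split: fit on one half, calibrate conformity scores on the other.
model = LinearRegression().fit(X[:300], y[:300])
scores = np.abs(y[300:] - model.predict(X[300:]))

alpha = 0.1                                  # target 10% miscoverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample level

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```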
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Smoothed Online Learning is as Easy as Statistical Learning [77.00766067963195]
We provide the first oracle-efficient, no-regret algorithms in this setting.
We show that if a function class is learnable in the classical setting, then there is an oracle-efficient, no-regret algorithm for contextual bandits.
arXiv Detail & Related papers (2022-02-09T19:22:34Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping space and time complexity independent of the time horizon.
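A toy rendering of this setup, with a plain UCB1-style bandit choosing among candidate algorithms whose runtimes are censored at a cutoff, might look as follows. The runtime distributions, cutoff, and the unmodified UCB1 rule are invented for illustration; they are not the paper's adapted losses.

```python
# Toy illustration of bandit-based online algorithm selection with censored
# runtimes: each "arm" is a candidate algorithm, and observed losses are
# runtimes capped at a cutoff C (censoring).
import numpy as np

rng = np.random.default_rng(0)
true_mean_runtime = np.array([3.0, 1.5, 5.0])    # hypothetical algorithms
C = 4.0                                          # censoring cutoff
n_arms = len(true_mean_runtime)

counts = np.zeros(n_arms)
loss_sums = np.zeros(n_arms)

for t in range(1, 501):
    if t <= n_arms:                              # play each arm once
        arm = t - 1
    else:
        mean_loss = loss_sums / counts
        bonus = np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmin(mean_loss - bonus))  # optimism: lower runtime wins
    runtime = rng.exponential(true_mean_runtime[arm])
    loss = min(runtime, C)                       # censored observation
    counts[arm] += 1
    loss_sums[arm] += loss

print("pulls per algorithm:", counts)            # the fastest arm dominates
```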
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Learning to Actively Learn: A Robust Approach [22.75298609290053]
This work proposes a procedure for designing algorithms for adaptive data collection tasks like active learning and pure-exploration multi-armed bandits.
Our adaptive algorithm is learned via adversarial training over equivalence classes of problems derived from information theoretic lower bounds.
We perform synthetic experiments to justify the stability and effectiveness of the training procedure, and then evaluate the method on tasks derived from real data.
arXiv Detail & Related papers (2020-10-29T06:48:22Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Fase-AL -- Adaptation of Fast Adaptive Stacking of Ensembles for
Supporting Active Learning [0.0]
This work presents the FASE-AL algorithm, which induces classification models from unlabeled instances using active learning.
The algorithm achieves promising results in terms of the percentage of correctly classified instances.
arXiv Detail & Related papers (2020-01-30T17:25:47Z)
- K-NN active learning under local smoothness assumption [1.0152838128195467]
We design an active learning algorithm with a rate of convergence better than that of passive learning.
Unlike previous active learning algorithms, we use a smoothness assumption that provides a dependence on the marginal distribution of the instance space.
arXiv Detail & Related papers (2020-01-17T10:44:36Z)
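Tying this last entry back to the adaptivity theme of the main paper: a classical way to adapt a k-NN estimate to unknown local smoothness is a Lepski-style rule that grows k while successive estimates remain within their confidence widths. The sketch below is a textbook heuristic on invented data, not the algorithm of either paper.

```python
# Lepski-style heuristic for choosing k in k-NN regression at a query point,
# adapting to unknown local smoothness: keep growing k while the new estimate
# stays within the confidence band of every earlier (wider) one. When it
# leaves a band, bias is starting to dominate, so keep the previous k.
import numpy as np

def adaptive_knn_estimate(X, y, x0, k_max=100, z=1.0):
    dist = np.linalg.norm(X - x0, axis=1)
    order = np.argsort(dist)
    chosen_k, estimates = 1, []
    for k in range(1, min(k_max, len(X)) + 1):
        est = y[order[:k]].mean()
        width = z / np.sqrt(k)          # stochastic error shrinks like 1/sqrt(k)
        if any(abs(est - e) > width + w for e, w in estimates):
            break                        # bands no longer overlap: stop
        estimates.append((est, width))
        chosen_k = k
    return y[order[:chosen_k]].mean(), chosen_k

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sign(X[:, 0]) * np.abs(X[:, 0]) ** 0.5 + 0.1 * rng.normal(size=500)
est, k = adaptive_knn_estimate(X, y, x0=np.array([0.3]))
print(f"estimate at x0=0.3: {est:.3f} (true ~ {0.3**0.5:.3f}), chose k={k}")
```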
This list is automatically generated from the titles and abstracts of the papers in this site.