K-NN active learning under local smoothness assumption
- URL: http://arxiv.org/abs/2001.06485v2
- Date: Sun, 12 Jul 2020 08:05:48 GMT
- Title: K-NN active learning under local smoothness assumption
- Authors: Boris Ndjia Njike, Xavier Siebert
- Abstract summary: We design an active learning algorithm with a rate of convergence better than in passive learning.
Unlike previous active learning algorithms, we use a smoothness assumption that provides a dependence on the marginal distribution of the instance space.
- Score: 1.0152838128195467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a large body of work on convergence rates either in passive or
active learning. Here we first outline some of the main results that have been
obtained, more specifically in a nonparametric setting under assumptions about
the smoothness of the regression function (or the boundary between classes) and
the margin noise. We discuss the relative merits of these underlying
assumptions by putting active learning in perspective with recent work on
passive learning. We design an active learning algorithm with a rate of
convergence better than in passive learning, using a particular smoothness
assumption customized for k-nearest neighbors. Unlike previous active learning
algorithms, we use a smoothness assumption that provides a dependence on the
marginal distribution of the instance space. Additionally, our algorithm avoids
the strong density assumption that supposes the existence of the density
function of the marginal distribution of the instance space and is therefore
more generally applicable.
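The abstract describes pool-based active learning with a k-nearest-neighbor plug-in estimate of the regression function eta(x) = P(Y = 1 | X = x). The snippet below is a minimal, generic sketch of that setup and not the authors' algorithm: the learner maintains a k-NN estimate of eta on a labeled subset and repeatedly queries the pool point whose current estimate is most ambiguous. The helper names, the margin-based query rule, the fixed k, and the budget parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def knn_eta_hat(x, X_lab, y_lab, k):
    """k-NN estimate of eta(x) = P(Y = 1 | X = x) from the labeled set."""
    dists = np.linalg.norm(X_lab - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_lab[nearest].mean()

def knn_active_learner(X_pool, oracle, k=5, budget=50, seed=0):
    """Generic margin-based active learner with a k-NN plug-in classifier.

    oracle(i) returns the (possibly noisy) binary label of pool point i.
    The query rule here is a plain uncertainty heuristic for illustration,
    not the query rule analyzed in the paper.
    """
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=k, replace=False))  # seed labels
    labels = {i: oracle(i) for i in labeled}
    for _ in range(budget - len(labeled)):
        X_lab = X_pool[labeled]
        y_lab = np.array([labels[i] for i in labeled])
        unlabeled = [i for i in range(len(X_pool)) if i not in labels]
        if not unlabeled:
            break
        # Query where the k-NN estimate of eta is most ambiguous (closest to 1/2).
        etas = np.array([knn_eta_hat(X_pool[i], X_lab, y_lab, k) for i in unlabeled])
        i_star = unlabeled[int(np.argmin(np.abs(etas - 0.5)))]
        labels[i_star] = oracle(i_star)
        labeled.append(i_star)
    # Final plug-in classifier: predict 1 iff eta_hat(x) >= 1/2.
    X_lab = X_pool[labeled]
    y_lab = np.array([labels[i] for i in labeled])
    return lambda x: int(knn_eta_hat(x, X_lab, y_lab, k) >= 0.5)
```

In the paper's analysis the query rule and the choice of k are tied to the smoothness and margin-noise parameters and to the marginal distribution; the fixed k and the plain margin rule above are placeholders only.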
Related papers
- Unlearning-based Neural Interpretations [51.99182464831169]
We show that current baselines defined using static functions are biased, fragile and manipulable.
We propose UNI to compute an (un)learnable, debiased and adaptive baseline by perturbing the input towards an unlearning direction of steepest ascent.
arXiv Detail & Related papers (2024-10-10T16:02:39Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Poisson Reweighted Laplacian Uncertainty Sampling for Graph-based Active
Learning [1.6752182911522522]
We show that uncertainty sampling is sufficient to balance exploration and exploitation in graph-based active learning.
In particular, we use a recently developed algorithm, Poisson ReWeighted Laplace Learning (PWLL) for the classifier.
We present experimental results on a number of graph-based image classification problems.
arXiv Detail & Related papers (2022-10-27T22:07:53Z) - Using Sum-Product Networks to Assess Uncertainty in Deep Active Learning [3.7507283158673212]
This paper proposes a new and very simple approach to computing uncertainty in deep active learning with a Convolutional Neural Network (CNN).
The main idea is to use the feature representation extracted by the CNN as data for training a Sum-Product Network (SPN).
arXiv Detail & Related papers (2022-06-20T14:28:19Z) - Efficient and Near-Optimal Smoothed Online Learning for Generalized
Linear Functions [28.30744223973527]
We give a computationally efficient algorithm that is the first to enjoy the statistically optimal log(T/sigma) regret for realizable K-wise linear classification.
We develop a novel characterization of the geometry of the disagreement region induced by generalized linear classifiers.
arXiv Detail & Related papers (2022-05-25T21:31:36Z) - On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should match the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefits of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z) - Convergence of Uncertainty Sampling for Active Learning [11.115182142203711]
We propose an efficient uncertainty estimator for binary classification which we also extend to multiple classes.
We provide theoretical guarantees for our algorithm under the influence of noise in the task of binary and multi-class classification.
arXiv Detail & Related papers (2021-10-29T13:51:30Z) - MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them. A generic sketch of this discrepancy-based scoring appears after the list below.
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - Nonparametric adaptive active learning under local smoothness condition [0.76146285961466]
This paper addresses the problem of adaptive active learning in a nonparametric setting with minimal assumptions.
We present a novel algorithm that is valid under more general assumptions than the previously known algorithms.
Our algorithm achieves a minimax rate of convergence, and therefore performs almost as well as the best known non-adaptive algorithms.
arXiv Detail & Related papers (2021-02-22T14:47:21Z)
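Several of the entries above (uncertainty sampling, MCDAL) score unlabeled points by the ambiguity or disagreement of the current model. As a generic illustration only, in the spirit of the MCDAL summary and not the method of any listed paper, the sketch below ranks pool points by the discrepancy between two classifier heads; the two-head setup, the linear heads, and the L1 discrepancy are assumptions made for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discrepancy_ranking(features, W1, b1, W2, b2):
    """Rank pool points by the L1 discrepancy between two classifier heads.

    features: (n, d) array of features (e.g. from a CNN) for the unlabeled pool.
    (W1, b1), (W2, b2): parameters of two linear classification heads.
    Returns pool indices sorted from most to least disagreement.
    """
    p1 = softmax(features @ W1 + b1)
    p2 = softmax(features @ W2 + b2)
    scores = np.abs(p1 - p2).sum(axis=1)   # per-point head disagreement
    return np.argsort(-scores)             # query the most disputed points first
```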
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.