Active Learning for Regression and Classification by Inverse Distance
Weighting
- URL: http://arxiv.org/abs/2204.07177v1
- Date: Thu, 14 Apr 2022 18:07:50 GMT
- Title: Active Learning for Regression and Classification by Inverse Distance
Weighting
- Authors: Alberto Bemporad
- Abstract summary: This paper proposes an active learning algorithm for solving regression and classification problems.
The algorithm has the following features: (i) supports both pool-based and population-based sampling; (ii) is independent of the type of predictor used; and (iii) can handle known and unknown constraints on the queryable feature vectors.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an active learning algorithm for solving regression and
classification problems based on inverse-distance weighting functions for
selecting the feature vectors to query. The algorithm has the following
features: (i) supports both pool-based and population-based sampling; (ii) is
independent of the type of predictor used; (iii) can handle known and unknown
constraints on the queryable feature vectors; and (iv) can run either
sequentially, or in batch mode, depending on how often the predictor is
retrained. The method's potential is shown in numerical tests on illustrative
synthetic problems and real-world regression and classification datasets from
the UCI repository. A Python implementation of the algorithm, which we call IDEAL
(Inverse-Distance based Exploration for Active Learning), is available at
\url{http://cse.lab.imtlucca.it/~bemporad/ideal}.
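As a rough illustration, the sketch below runs a pool-based active-learning loop with an inverse-distance-weighting acquisition. The IDW variance and pure-exploration terms follow constructions from the author's related IDW work and are only assumed to resemble IDEAL's actual acquisition; the function names, toy target, and parameters are hypothetical, and the reference implementation lives at the URL above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def idw_acquisition(Xq, Xl, yl, yq_hat, delta=1.0, eps=1e-12):
    """Score candidates Xq given labeled data (Xl, yl) and predictions yq_hat."""
    d2 = ((Xq[:, None, :] - Xl[None, :, :]) ** 2).sum(-1)  # squared distances
    w = 1.0 / (d2 + eps)                                   # IDW weights
    v = w / w.sum(axis=1, keepdims=True)
    # IDW variance: disagreement between prediction and nearby labels
    s2 = (v * (yl[None, :] - yq_hat[:, None]) ** 2).sum(axis=1)
    # pure exploration: grows with distance from all labeled points
    z = (2.0 / np.pi) * np.arctan(1.0 / w.sum(axis=1))
    return np.sqrt(s2) + delta * z

rng = np.random.default_rng(0)
f = lambda X: np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]          # toy target to learn
X_pool = rng.uniform(-1.0, 1.0, size=(500, 2))             # queryable pool
idx = list(rng.choice(len(X_pool), size=5, replace=False)) # initial labels

for _ in range(20):                                        # active-learning loop
    Xl, yl = X_pool[idx], f(X_pool[idx])
    model = KNeighborsRegressor(n_neighbors=3).fit(Xl, yl) # any predictor works
    cand = [i for i in range(len(X_pool)) if i not in idx]
    Xq = X_pool[cand]
    score = idw_acquisition(Xq, Xl, yl, model.predict(Xq))
    idx.append(cand[int(np.argmax(score))])                # query the best point
print("labeled points after the loop:", len(idx))
```

Note that the loop never inspects the predictor's internals, only its predictions, which is the sense in which the acquisition is independent of the predictor type.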
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
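For context, the generic ingredient behind gradient-free methods of this kind is a two-point zeroth-order gradient estimate built purely from function evaluations. The sketch below shows only this basic idea; the cited paper's higher-order smoothing kernels, noise models, and convergence analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, h=1e-3):
    """Two-point zeroth-order estimate of grad f(x) from function values."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                  # random unit direction
    # directional finite difference along u, rescaled by the dimension
    return x.size * (f(x + h * u) - f(x - h * u)) / (2 * h) * u

f = lambda x: np.sum((x - 1.0) ** 2)        # smooth convex toy objective
x = np.zeros(10)
for _ in range(2000):
    x -= 0.05 * zo_grad(f, x)               # SGD with estimated gradients
print(np.round(x[:3], 2))                   # approaches the minimizer (all ones)
```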
- Online Analytic Exemplar-Free Continual Learning with Large Models for Imbalanced Autonomous Driving Task [25.38082751323396]
We propose an Analytic Exemplar-Free Online Continual Learning algorithm (AEF-OCL).
The AEF-OCL leverages analytic continual learning principles and employs ridge regression as a classifier for features extracted by a large backbone network.
Experimental results demonstrate that despite being an exemplar-free strategy, our method outperforms various methods on the autonomous driving SODA10M dataset.
arXiv Detail & Related papers (2024-05-28T03:19:15Z)
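The mechanism is easy to state: keep only the Gram statistics of backbone features, so the classifier has a closed form at any point in the stream and no exemplars are ever stored. A minimal sketch, with a hypothetical `AnalyticRidgeClassifier` and random stand-in features (the paper's imbalance compensation for the driving task is omitted):

```python
import numpy as np

class AnalyticRidgeClassifier:
    """Ridge regression to one-hot targets, updated from running statistics."""
    def __init__(self, dim, n_classes, lam=1e-2):
        self.G = lam * np.eye(dim)               # regularized sum of f f^T
        self.C = np.zeros((dim, n_classes))      # sum of f y_onehot^T
        self.n_classes = n_classes

    def partial_fit(self, F, y):                 # store statistics, not data
        self.G += F.T @ F
        self.C += F.T @ np.eye(self.n_classes)[y]

    def predict(self, F):
        W = np.linalg.solve(self.G, self.C)      # closed-form weights
        return (F @ W).argmax(axis=1)

rng = np.random.default_rng(0)
clf = AnalyticRidgeClassifier(dim=64, n_classes=3)
centers = rng.standard_normal((3, 64))           # stand-in backbone features
for _ in range(5):                               # stream of batches
    y = rng.integers(0, 3, size=100)
    clf.partial_fit(centers[y] + 0.5 * rng.standard_normal((100, 64)), y)
y_test = rng.integers(0, 3, size=100)
F_test = centers[y_test] + 0.5 * rng.standard_normal((100, 64))
print("held-out accuracy:", (clf.predict(F_test) == y_test).mean())
```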
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
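A minimal version of that training-free baseline: fit Gaussian Discriminant Analysis with a shared covariance on encoder features and classify in closed form. Random vectors stand in for CLIP features here, and the paper's integration with zero-shot text classifiers is not shown.

```python
import numpy as np

def fit_gda(F, y, n_classes, eps=1e-3):
    mu = np.stack([F[y == c].mean(axis=0) for c in range(n_classes)])
    Xc = F - mu[y]                                    # center per class
    Sigma = Xc.T @ Xc / len(F) + eps * np.eye(F.shape[1])
    P = np.linalg.solve(Sigma, mu.T)                  # columns: Sigma^{-1} mu_c
    b = -0.5 * np.einsum('dc,cd->c', P, mu) \
        + np.log(np.bincount(y, minlength=n_classes) / len(y))
    return P, b                                       # linear discriminants

rng = np.random.default_rng(0)
centers = rng.standard_normal((5, 32))                # stand-in class prototypes
y = rng.integers(0, 5, size=500)
F = centers[y] + 0.7 * rng.standard_normal((500, 32)) # stand-in image features
P, b = fit_gda(F, y, n_classes=5)
print("accuracy:", ((F @ P + b).argmax(axis=1) == y).mean())
```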
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We provide preliminary evidence that in-context learners share algorithmic features with these predictors.
arXiv Detail & Related papers (2022-11-28T18:59:51Z)
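Reproducing the comparison needs a trained transformer, which is out of scope here; the sketch below only sets up the reference predictors that in-context outputs would be matched against (ridge with several regularization strengths, lam = 0 giving exact least squares). The transformer call is left as a hypothetical comment.

```python
import numpy as np

def ridge_predict(Xc, yc, xq, lam):
    """Prediction at xq of ridge regression fit on the context (Xc, yc)."""
    A = Xc.T @ Xc + lam * np.eye(Xc.shape[1])
    return xq @ np.linalg.solve(A, Xc.T @ yc)

rng = np.random.default_rng(0)
w = rng.standard_normal(8)                    # latent weights of one prompt
Xc = rng.standard_normal((16, 8))             # in-context examples
yc = Xc @ w + 0.1 * rng.standard_normal(16)
xq = rng.standard_normal(8)                   # query point

refs = {lam: ridge_predict(Xc, yc, xq, lam) for lam in (0.0, 0.1, 1.0)}
# icl_pred = transformer(prompt)  # hypothetical; would be compared to refs
# by averaging squared differences over many random prompts
print({k: round(v, 3) for k, v in refs.items()}, "| true:", round(xq @ w, 3))
```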
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/~bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z)
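A simplified alternation in the spirit of piecewise linear regression: assign points to regions, fit a linear model per region by least squares, then reassign each point to the model that predicts it best. The real PARC algorithm additionally learns a linear multi-category separator so the partition is defined on features alone, handles classification targets, and initializes more carefully; the crude median split below is just for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 1))
y = np.where(X[:, 0] < 0, -1 - 2 * X[:, 0], -1 + 3 * X[:, 0])  # V-shaped target
y += 0.05 * rng.standard_normal(400)

K = 2
X1 = np.hstack([X, np.ones((len(X), 1))])        # append intercept column
z = (X[:, 0] > np.median(X[:, 0])).astype(int)   # crude initial partition
for _ in range(10):
    W = np.stack([np.linalg.lstsq(X1[z == k], y[z == k], rcond=None)[0]
                  for k in range(K)])            # per-region least squares
    err = (X1 @ W.T - y[:, None]) ** 2           # error under every model
    z = err.argmin(axis=1)                       # reassign to best model
print(np.round(W, 2))                            # approx. slopes -2 and 3
```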
- Deep Inverse Q-learning with Constraints [15.582910645906145]
We introduce a novel class of algorithms that only needs to solve the MDP underlying the demonstrated behavior once to recover the expert policy.
We show how to extend this class of algorithms to continuous state-spaces via function approximation and how to estimate a corresponding action-value function.
We evaluate the resulting algorithms called Inverse Action-value Iteration, Inverse Q-learning and Deep Inverse Q-learning on the Objectworld benchmark.
arXiv Detail & Related papers (2020-08-04T17:21:51Z)
- Deep Learning with Functional Inputs [0.0]
We present a methodology for integrating functional data into feed-forward neural networks.
A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization process.
The model is shown to perform well in a number of contexts including prediction of new data and recovery of the true underlying functional weights.
arXiv Detail & Related papers (2020-06-17T01:23:00Z)
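One way such a layer is commonly realized, assumed here for illustration: expand each functional weight in a fixed basis and feed the integrals of x(t) against the learned weight functions into an ordinary network. The polynomial basis, layer name, and toy task below are assumptions, not the paper's exact construction.

```python
import torch

T = torch.linspace(0, 1, 101)                    # common sampling grid
basis = torch.stack([T ** j for j in range(4)])  # (4, 101) polynomial basis

class FunctionalLayer(torch.nn.Module):
    """Maps a sampled curve x(t) to integrals against learned weight functions."""
    def __init__(self, n_out):
        super().__init__()
        self.coef = torch.nn.Parameter(0.1 * torch.randn(n_out, len(basis)))
    def forward(self, x):                        # x: (batch, 101)
        beta = self.coef @ basis                 # functional weights (n_out, 101)
        return torch.trapezoid(x[:, None, :] * beta, T, dim=-1)

net = torch.nn.Sequential(FunctionalLayer(8), torch.nn.ReLU(),
                          torch.nn.Linear(8, 1))
x = torch.randn(256, 101)                        # toy functional inputs
y_true = torch.trapezoid(T * x, T, dim=-1)       # target: integral of t * x(t)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    loss = ((net(x).squeeze(-1) - y_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss:", float(loss))       # should shrink toward zero
```

The learned coefficients define weight functions beta_k(t) that can be plotted over the grid during training, matching the "dynamic functional weights" described above.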
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is a squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
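The central object is a ridge estimate biased toward a meta-learned vector h, i.e. the minimizer of ||X theta - y||^2 + lam * ||theta - h||^2, which has the closed form below. A quick check of why a good bias helps when tasks cluster tightly (all names and numbers here are illustrative):

```python
import numpy as np

def biased_ridge(X, y, h, lam):
    # minimizer of ||X theta - y||^2 + lam * ||theta - h||^2
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * h)

rng = np.random.default_rng(0)
h_true = rng.standard_normal(20)             # common bias of the task family
w = h_true + 0.1 * rng.standard_normal(20)   # one task, close to the bias
X = rng.standard_normal((10, 20))            # few samples: d > n
y = X @ w
for h, name in [(np.zeros(20), "h = 0"), (h_true, "h = learned bias")]:
    err = np.linalg.norm(biased_ridge(X, y, h, lam=1.0) - w)
    print(name, "-> parameter error:", round(err, 3))
```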
- A Graph-Based Approach for Active Learning in Regression [37.42533189350655]
Active learning aims to reduce labeling efforts by selectively asking humans to annotate the most important data points from an unlabeled pool.
Most existing active learning for regression methods use the regression function learned at each active learning iteration to select the next informative point to query.
We propose a feature-focused approach that formulates both sequential and batch-mode active regression as a novel bipartite graph optimization problem.
arXiv Detail & Related papers (2020-01-30T00:59:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.