Active Output Selection Strategies for Multiple Learning Regression Models
- URL: http://arxiv.org/abs/2011.14307v1
- Date: Sun, 29 Nov 2020 08:05:53 GMT
- Title: Active Output Selection Strategies for Multiple Learning Regression Models
- Authors: Adrian Prochaska, Julien Pillas and Bernard Bäker
- Abstract summary: The strategy actively learns multiple outputs in the same input space.
The presented method is applied to three different toy examples with noise in a real-world range and to a benchmark dataset.
The results are promising but also show that the algorithm must be improved to increase robustness in noisy environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning shows promise to decrease test bench time for model-based
drivability calibration. This paper presents a new strategy for active output
selection, which suits the needs of calibration tasks. The strategy is actively
learning multiple outputs in the same input space. It chooses the output model
with the highest cross-validation error as leading. The presented method is
applied to three different toy examples with noise in a real-world range and to
a benchmark dataset. The results are analyzed and compared to other existing
strategies. In a best case scenario, the presented strategy is able to decrease
the number of points by up to 30% compared to a sequential space-filling design
while outperforming other existing active learning strategies. The results are
promising but also show that the algorithm must be improved to increase
robustness in noisy environments. Further research will focus on improving the
algorithm and applying it to a real-world example.
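The leading-output choice described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the paper does not publish code here, so the Gaussian process model, the mean-squared-error CV metric, and all function names are my assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score


def select_leading_output(X, Y, cv=3):
    """Pick the output whose model currently has the highest
    cross-validation error; that output 'leads' the next query.

    X : (n_samples, n_inputs) shared input space
    Y : (n_samples, n_outputs) one column per output
    """
    errors = []
    for j in range(Y.shape[1]):
        # Assumed model class; the paper's models may differ.
        model = GaussianProcessRegressor(alpha=1e-6, normalize_y=True)
        # cross_val_score returns the negated MSE; flip the sign.
        scores = cross_val_score(
            model, X, Y[:, j], scoring="neg_mean_squared_error", cv=cv
        )
        errors.append(-scores.mean())
    return int(np.argmax(errors))


# Toy use: output 0 is a smooth function, output 1 is pure noise,
# so output 1 should be the hardest to model and become leading.
rng = np.random.default_rng(0)
X = rng.random((40, 1))
Y = np.column_stack([np.sin(2 * np.pi * X[:, 0]), rng.normal(size=40)])
leading = select_leading_output(X, Y)
```

In a full active learning loop, the next query point would then be chosen by the leading model's acquisition criterion (e.g. a space-filling or uncertainty measure), after which all output models are refitted and the leading output is re-selected.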
Related papers
- Realistic Evaluation of Test-Time Adaptation Algorithms: Unsupervised Hyperparameter Selection [1.4530711901349282]
Test-Time Adaptation (TTA) has emerged as a promising strategy for tackling the problem of machine learning model robustness under distribution shifts.
We evaluate existing TTA methods using surrogate-based hyperparameter-selection strategies to obtain a more realistic evaluation of their performance.
arXiv Detail & Related papers (2024-07-19T11:58:30Z)
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach achieves superior performance to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
- Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels [6.372625755672473]
We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points.
Although this is usually infeasible with deep networks, we use the neural tangent kernel to approximate the result of retraining.
arXiv Detail & Related papers (2022-06-25T06:13:27Z)
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes random sampling strategy in acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Improved active output selection strategy for noisy environments [0.0]
The test bench time needed for model-based calibration can be reduced with active learning methods for test design.
This paper presents an improved strategy for active output selection.
arXiv Detail & Related papers (2021-01-10T08:27:30Z)
- Learning active learning at the crossroads? evaluation and discussion [0.03807314298073299]
Active learning aims to reduce annotation cost by predicting which samples are useful for a human expert to label.
There is no best active learning strategy that consistently outperforms all others in all applications.
We present the results of a benchmark performed on 20 datasets that compares a strategy learned using a recent meta-learning algorithm with margin sampling.
arXiv Detail & Related papers (2020-12-16T10:35:43Z)
- Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime when only few labeled samples are available.
arXiv Detail & Related papers (2020-10-19T16:53:24Z)
- A Graph-Based Approach for Active Learning in Regression [37.42533189350655]
Active learning aims to reduce labeling efforts by selectively asking humans to annotate the most important data points from an unlabeled pool.
Most existing active learning for regression methods use the regression function learned at each active learning iteration to select the next informative point to query.
We propose a feature-focused approach that formulates both sequential and batch-mode active regression as a novel bipartite graph optimization problem.
arXiv Detail & Related papers (2020-01-30T00:59:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.