Improved active output selection strategy for noisy environments
- URL: http://arxiv.org/abs/2101.03499v1
- Date: Sun, 10 Jan 2021 08:27:30 GMT
- Title: Improved active output selection strategy for noisy environments
- Authors: Adrian Prochaska, Julien Pillas and Bernard Bäker
- Abstract summary: The test bench time needed for model-based calibration can be reduced with active learning methods for test design.
This paper presents an improved strategy for active output selection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The test bench time needed for model-based calibration can be reduced with
active learning methods for test design. This paper presents an improved
strategy for active output selection, i.e., the task of learning multiple
models in the same input dimensions, which suits the needs of calibration
tasks. In contrast to an existing strategy, we take into account the noise
estimate that is inherent to Gaussian processes. The method is validated on
three different toy examples. In each example, its performance matches or
exceeds that of the existing best strategy. In a best-case scenario, the new
strategy needs at least 10% fewer measurements than all other active or
passive strategies. Future work will evaluate the strategy on a real-world
application and implement more sophisticated active-learning strategies for
query placement.
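To make the idea concrete, below is a minimal Python sketch of noise-aware active output selection, assuming scikit-learn Gaussian processes. The scoring rule (mean predictive standard deviation over a candidate pool, discounted by the noise level each GP estimates through its WhiteKernel), the function select_output, and all data are illustrative assumptions; the paper's actual selection criterion is not reproduced here.

```python
# Illustrative sketch only: the scoring rule and all names are assumptions,
# not the criterion from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def select_output(models, X_pool):
    """Pick the index of the output model most worth a new measurement.

    Each GP is scored by its mean predictive standard deviation over a
    candidate pool, discounted by the noise standard deviation the GP
    estimated for itself via its WhiteKernel. Outputs that look uncertain
    mainly because they are noisy are thus deprioritized.
    """
    scores = []
    for gp in models:
        _, std = gp.predict(X_pool, return_std=True)
        noise_var = gp.kernel_.get_params()["k2__noise_level"]  # fitted WhiteKernel variance
        scores.append(std.mean() - np.sqrt(noise_var))
    return int(np.argmax(scores))

# Two toy outputs over the same two-dimensional input space.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(15, 2))
outputs = [
    np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(15),  # low noise
    X[:, 1] ** 2 + 0.30 * rng.standard_normal(15),           # high noise
]
kernel = RBF() + WhiteKernel()
models = [
    GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    for y in outputs
]
X_pool = rng.uniform(-1.0, 1.0, size=(200, 2))
print("next measurement goes to output", select_output(models, X_pool))
```

The discount term is one simple way to avoid spending measurements on outputs whose remaining uncertainty is irreducible noise, which is the intuition the abstract points at.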
Related papers
- Realistic Evaluation of Test-Time Adaptation Algorithms: Unsupervised Hyperparameter Selection [1.4530711901349282]
Test-Time Adaptation (TTA) has emerged as a promising strategy for tackling the problem of machine learning model robustness under distribution shifts.
We evaluate existing TTA methods using surrogate-based hyperparameter-selection strategies to obtain a more realistic evaluation of their performance.
arXiv Detail & Related papers (2024-07-19T11:58:30Z)
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z)
- Perturbation-based Active Learning for Question Answering [25.379528163789082]
Building a question answering (QA) model with lower annotation cost can be achieved by using an active learning (AL) training strategy.
It selects the most informative unlabeled training data to update the model effectively.
arXiv Detail & Related papers (2023-11-04T08:07:23Z)
- On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z)
- Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning [113.05118113697111]
Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks with only a limited amount of labeled data.
Research literature on few-shot learning exhibits great diversity, while different algorithms often excel at different few-shot learning scenarios.
We present Meta Navigator, a framework that attempts to address this limitation of few-shot learning by seeking a higher-level strategy.
arXiv Detail & Related papers (2021-09-13T07:20:01Z)
- Learning active learning at the crossroads? evaluation and discussion [0.03807314298073299]
Active learning aims to reduce annotation cost by predicting which samples are useful for a human expert to label.
There is no best active learning strategy that consistently outperforms all others in all applications.
We present the results of a benchmark performed on 20 datasets that compares a strategy learned using a recent meta-learning algorithm with margin sampling.
arXiv Detail & Related papers (2020-12-16T10:35:43Z)
- Active Output Selection Strategies for Multiple Learning Regression Models [0.0]
The strategy actively learns multiple outputs in the same input space.
The presented method is applied to three different toy examples with noise in a real-world range and to a benchmark dataset.
The results are promising but also show that the algorithm has to be improved to increase robustness for noisy environments.
arXiv Detail & Related papers (2020-11-29T08:05:53Z)
- Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime where only a few labeled samples are available.
arXiv Detail & Related papers (2020-10-19T16:53:24Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
- Learning to Select Base Classes for Few-shot Classification [96.92372639495551]
We use the Similarity Ratio as an indicator for the generalization performance of a few-shot model.
We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio.
arXiv Detail & Related papers (2020-04-01T09:55:18Z)
- Adaptive strategy in differential evolution via explicit exploitation and exploration controls [0.0]
This paper proposes a new strategy adaptation method, named the explicit adaptation scheme (Ea scheme).
The Ea scheme separates multiple strategies and employs them on demand.
Experimental studies on benchmark functions demonstrate the effectiveness of Ea scheme.
arXiv Detail & Related papers (2020-02-03T09:12:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.