Stopping Criterion for Active Learning Based on Error Stability
- URL: http://arxiv.org/abs/2104.01836v1
- Date: Mon, 5 Apr 2021 10:15:50 GMT
- Title: Stopping Criterion for Active Learning Based on Error Stability
- Authors: Hideaki Ishibashi and Hideitsu Hino
- Abstract summary: We propose a stopping criterion based on error stability, which guarantees that the change in generalization error upon adding a new sample is bounded by the annotation cost.
We demonstrate that the proposed criterion stops active learning at the appropriate timing for various learning models and real datasets.
- Score: 3.2996723916635267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning is a framework for supervised learning to improve the
predictive performance by adaptively annotating a small number of samples. To
realize efficient active learning, both an acquisition function that determines
the next datum and a stopping criterion that determines when to stop learning
should be considered. In this study, we propose a stopping criterion based on
error stability, which guarantees that the change in generalization error upon
adding a new sample is bounded by the annotation cost and can be applied to any
Bayesian active learning. We demonstrate that the proposed criterion stops
active learning at the appropriate timing for various learning models and real
datasets.
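The core idea of an error-stability stopping rule can be sketched in a toy setting. Below is a minimal, hypothetical illustration (not the paper's actual criterion or API): Bayesian estimation of a coin bias with a Beta posterior, where learning stops once the posterior change caused by one more annotation, measured here by a KL divergence between consecutive posteriors, drops below a threshold standing in for the annotation cost. The functions `kl_beta` and `stability_stop` and the choice of KL as the stability measure are illustrative assumptions.

```python
import math
from math import lgamma

def kl_beta(a1, b1, a2, b2):
    """KL divergence KL(Beta(a1, b1) || Beta(a2, b2))."""
    def digamma(x):
        # Recurrence psi(x) = psi(x + 1) - 1/x, then asymptotic series.
        r = 0.0
        while x < 6:
            r -= 1.0 / x
            x += 1
        f = 1.0 / (x * x)
        return r + math.log(x) - 0.5 / x - f * (1/12. - f * (1/120. - f / 252.))
    return ((lgamma(a1 + b1) - lgamma(a1) - lgamma(b1))
            - (lgamma(a2 + b2) - lgamma(a2) - lgamma(b2))
            + (a1 - a2) * digamma(a1) + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def stability_stop(labels, threshold):
    """Annotate labels one at a time; stop when the posterior update
    from one more sample falls below the cost-derived threshold."""
    a, b = 1.0, 1.0  # uniform Beta prior
    for i, y in enumerate(labels):
        a2, b2 = a + y, b + (1 - y)  # conjugate Bernoulli update
        # Error-stability proxy: how much did one annotation move the posterior?
        if kl_beta(a2, b2, a, b) < threshold:
            return i + 1, (a2, b2)
        a, b = a2, b2
    return len(labels), (a, b)
```

Since the per-sample posterior change shrinks as labels accumulate, the rule trades off a larger labeling budget against a tighter stability threshold, mirroring the abstract's bound in terms of annotation cost.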
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes [72.75421975804132]
Learning Active Learning (LAL) suggests to learn the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z)
- NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage [3.4806267677524896]
We propose a novel active learning strategy, neural tangent kernel clustering-pseudo-labels (NTKCPL).
It estimates empirical risk based on pseudo-labels and the model prediction with NTK approximation.
We validate our method on five datasets, empirically demonstrating that it outperforms the baseline methods in most cases.
arXiv Detail & Related papers (2023-06-07T01:43:47Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and a stronger ability to overcome catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Mitigating Sampling Bias and Improving Robustness in Active Learning [13.994967246046008]
We introduce supervised contrastive active learning by leveraging the contrastive loss for active learning under a supervised setting.
We propose an unbiased query strategy that selects informative data samples of diverse feature representations.
We empirically demonstrate our proposed methods reduce sampling bias, achieve state-of-the-art accuracy and model calibration in an active learning setup.
arXiv Detail & Related papers (2021-09-13T20:58:40Z)
- Targeted Active Learning for Bayesian Decision-Making [15.491942513739676]
We argue that when acquiring samples sequentially, separating learning and decision-making is sub-optimal.
We introduce a novel active learning strategy which takes the down-the-line decision problem into account.
Specifically, we introduce a novel active learning criterion which maximizes the expected information gain on the posterior distribution of the optimal decision.
arXiv Detail & Related papers (2021-06-08T09:05:43Z)
- Active Learning for Sequence Tagging with Deep Pre-trained Models and Bayesian Uncertainty Estimates [52.164757178369804]
Recent advances in transfer learning for natural language processing in conjunction with active learning open the possibility to significantly reduce the necessary annotation budget.
We conduct an empirical study of various Bayesian uncertainty estimation methods and Monte Carlo dropout options for deep pre-trained models in the active learning framework.
We also demonstrate that to acquire instances during active learning, a full-size Transformer can be substituted with a distilled version, which yields better computational performance.
arXiv Detail & Related papers (2021-01-20T13:59:25Z)
- Stopping criterion for active learning based on deterministic generalization bounds [4.518012967046983]
We propose a criterion for automatically stopping active learning.
The proposed stopping criterion is based on the difference in the expected generalization errors and hypothesis testing.
We demonstrate the effectiveness of the proposed method via experiments with both artificial and real datasets.
arXiv Detail & Related papers (2020-05-15T08:15:47Z)
- Continual Learning with Node-Importance based Adaptive Group Sparse Regularization [30.23319528662881]
We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL).
Our method selectively applies the two penalties to each node based on its importance, which is adaptively updated after learning each new task.
arXiv Detail & Related papers (2020-03-30T18:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.