Deep Bayesian Active Learning, A Brief Survey on Recent Advances
- URL: http://arxiv.org/abs/2012.08044v1
- Date: Tue, 15 Dec 2020 02:06:07 GMT
- Title: Deep Bayesian Active Learning, A Brief Survey on Recent Advances
- Authors: Salman Mohamadi, Hamidreza Amindavar
- Abstract summary: Active learning starts training the model with a small set of labeled data.
Deep learning methods are not capable of either representing or manipulating model uncertainty.
Deep Bayesian active learning frameworks introduce a practical representation of uncertainty into the model.
- Score: 6.345523830122166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active learning frameworks offer efficient data annotation without
notable accuracy degradation. In other words, active learning starts training
the model with a small set of labeled data while exploring the space of
unlabeled data in order to select the most informative samples to be labeled.
Generally speaking, representing uncertainty is crucial in any active learning
framework; however, standard deep learning methods are not capable of either
representing or manipulating model uncertainty. At the same time, from a
real-world application perspective, uncertainty representation is receiving
growing attention in the machine learning community. Deep Bayesian active
learning frameworks, and Bayesian active learning settings in general, equip
the model with a practical representation of uncertainty, which allows
training with small amounts of data while using that uncertainty to guide
further efficient training. In this paper, we briefly survey recent advances
in Bayesian active learning and in particular deep Bayesian active learning
frameworks.
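As background for the setting this survey covers, here is a minimal sketch of a pool-based deep Bayesian active learning loop. Every concrete choice is an illustrative assumption rather than any surveyed paper's method: a synthetic binary-classification pool, a small dropout MLP, and predictive entropy over Monte Carlo dropout samples as the acquisition function.

```python
# Minimal sketch of a pool-based deep Bayesian active learning loop.
# Assumptions (not from the survey): synthetic data, a small MLP with
# dropout, and predictive entropy over MC-dropout samples as the
# acquisition function.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic 2-class pool: 1000 points in 20 dimensions.
X = torch.randn(1000, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, 2),
)

labeled = list(range(20))                 # start with a small labeled set
pool = [i for i in range(1000) if i not in labeled]

def train(model, idx, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(X[idx]), y[idx])
        loss.backward()
        opt.step()

def mc_dropout_probs(model, x, T=20):
    """T stochastic forward passes with dropout left on."""
    model.train()  # keep dropout active at "test" time
    with torch.no_grad():
        return torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])

for step in range(5):                     # 5 acquisition rounds
    train(model, torch.tensor(labeled))
    probs = mc_dropout_probs(model, X[pool])          # (T, |pool|, 2)
    mean_p = probs.mean(0)
    entropy = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(-1)
    top = entropy.topk(10).indices.tolist()           # query 10 labels
    acquired = [pool[i] for i in top]
    labeled += acquired                               # oracle labels them
    pool = [i for i in pool if i not in acquired]
    print(f"round {step}: labeled={len(labeled)}")
```

Swapping the entropy score for the mutual information between the predictions and the model parameters yields the BALD criterion, sketched after the related-papers list below.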
Related papers
- Making Better Use of Unlabelled Data in Bayesian Active Learning [19.050266270699368]
We propose a framework for semi-supervised Bayesian active learning.
We find it produces better-performing models than either conventional Bayesian active learning or semi-supervised learning with randomly acquired data.
arXiv Detail & Related papers (2024-04-26T08:41:55Z)
- Model Uncertainty based Active Learning on Tabular Data using Boosted Trees [0.4667030429896303]
Supervised machine learning relies on the availability of good labelled data for model training.
Active learning is a sub-field of machine learning that helps obtain labelled data efficiently.
arXiv Detail & Related papers (2023-10-30T14:29:53Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Revisiting Deep Active Learning for Semantic Segmentation [37.3546941940388]
We show that the data distribution is decisive for the performance of the various active learning objectives proposed in the literature.
We demonstrate that the integration of semi-supervised learning with active learning can improve performance when the two objectives are aligned.
arXiv Detail & Related papers (2023-02-08T14:23:37Z)
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration (a generic sketch of such a mixed acquisition score appears after this list).
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Low-Regret Active learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- Active Learning for Sequence Tagging with Deep Pre-trained Models and Bayesian Uncertainty Estimates [52.164757178369804]
Recent advances in transfer learning for natural language processing in conjunction with active learning open the possibility to significantly reduce the necessary annotation budget.
We conduct an empirical study of various Bayesian uncertainty estimation methods and Monte Carlo dropout options for deep pre-trained models in the active learning framework (a sketch of one such estimate appears after this list).
We also demonstrate that to acquire instances during active learning, a full-size Transformer can be substituted with a distilled version, which yields better computational performance.
arXiv Detail & Related papers (2021-01-20T13:59:25Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size (see the sketch after this list).
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
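To make the mixed criteria of the Frugal Reinforcement-based Active Learning entry concrete, the sketch below scores pool points by a weighted sum of uncertainty, diversity, and representativity. The normalization, the criteria definitions, and the exponentiated-gradient weight update (a stand-in for the paper's reinforcement-learning balancing mechanism) are all assumptions for illustration.

```python
# Generic sketch of a mixed acquisition score: weighted sum of
# uncertainty, diversity, and representativity. The exponentiated-
# gradient weight update is a stand-in for the paper's reinforcement-
# learning balancing mechanism, not its actual rule.
import numpy as np

rng = np.random.default_rng(0)

def mixed_acquisition(probs, pool_X, labeled_X, w):
    """Score each pool point; probs is the (N, C) softmax output."""
    # Uncertainty: predictive entropy.
    unc = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Diversity: distance to the nearest already-labeled point.
    d = np.linalg.norm(pool_X[:, None, :] - labeled_X[None, :, :], axis=2)
    div = d.min(axis=1)
    # Representativity: mean similarity to the rest of the pool.
    rep = (pool_X @ pool_X.T).mean(axis=1)
    crit = np.stack([unc, div, rep])
    # Normalize each criterion to [0, 1] before mixing.
    crit = (crit - crit.min(axis=1, keepdims=True)) / (
        np.ptp(crit, axis=1, keepdims=True) + 1e-12)
    return w @ crit  # (N,) combined score

def update_weights(w, rewards, lr=0.5):
    """Exponentiated-gradient update: criteria whose chosen points
    helped validation accuracy more get larger weight."""
    w = w * np.exp(lr * rewards)
    return w / w.sum()

# Toy usage: 100 pool points, 10 labeled, 3 equally weighted criteria.
pool_X, labeled_X = rng.normal(size=(100, 8)), rng.normal(size=(10, 8))
probs = rng.dirichlet(np.ones(5), size=100)
w = np.ones(3) / 3
scores = mixed_acquisition(probs, pool_X, labeled_X, w)
print("query:", np.argsort(scores)[-5:])          # top-5 points to label
w = update_weights(w, rewards=np.array([0.2, 0.1, -0.1]))
print("new weights:", w)
```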
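The Bayesian uncertainty estimates studied in the sequence-tagging entry are typically computed from Monte Carlo dropout samples. One standard such estimate is BALD (Bayesian Active Learning by Disagreement), the mutual information between the predicted label and the model parameters; the toy input below stands in for the stochastic forward passes of a real pre-trained model.

```python
# BALD from MC-dropout samples: mutual information between the
# predicted label and the model parameters. A standard estimate; the
# toy input stands in for T stochastic forward passes of a real model.
import numpy as np

def bald(mc_probs):
    """mc_probs: (T, N, C) softmax outputs from T dropout samples.
    Returns an (N,) score; higher means more epistemic uncertainty."""
    mean_p = mc_probs.mean(axis=0)                               # (N, C)
    h_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)      # H[E[p]]
    mean_h = -(mc_probs * np.log(mc_probs + 1e-12)).sum(axis=2).mean(axis=0)
    return h_mean - mean_h                                       # mutual information

rng = np.random.default_rng(0)
mc_probs = rng.dirichlet(np.ones(4), size=(20, 50))  # T=20, N=50, C=4
scores = bald(mc_probs)
print("query:", np.argsort(scores)[-10:])            # 10 most informative
```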
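Finally, the two speed-ups named in the last entry admit a simple generic reading, sketched below under that assumption rather than as the paper's actual implementation: score only a random subset of the pool (partial uncertainty sampling) and acquire a larger batch per round, trading a little acquisition quality for far fewer forward passes.

```python
# Partial uncertainty sampling: estimate uncertainty on a random
# subset of the pool instead of all of it, and query a larger batch
# per round. A generic reading, not any library's implementation.
import numpy as np

rng = np.random.default_rng(0)

def partial_uncertainty_query(score_fn, pool_idx, subset=2000, k=100):
    """score_fn maps indices to uncertainty scores; only `subset`
    randomly chosen pool points are scored, and the top-k of those
    are queried."""
    sub = rng.choice(pool_idx, size=min(subset, len(pool_idx)), replace=False)
    scores = score_fn(sub)
    return sub[np.argsort(scores)[-k:]]

# Toy usage: pretend per-point uncertainty is precomputed.
uncertainty = rng.random(100_000)
queried = partial_uncertainty_query(lambda idx: uncertainty[idx],
                                    np.arange(100_000))
print(len(queried), "points queried from a 2000-point subsample")
```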
This list is automatically generated from the titles and abstracts of the papers on this site.