Deep Active Learning: A Reality Check
- URL: http://arxiv.org/abs/2403.14800v1
- Date: Thu, 21 Mar 2024 19:28:17 GMT
- Title: Deep Active Learning: A Reality Check
- Authors: Edrina Gashi, Jiankang Deng, Ismail Elezi
- Abstract summary: No single-model method decisively outperforms entropy-based active learning.
We extend our evaluation to other tasks, exploring the effectiveness of active learning in combination with semi-supervised learning.
- Score: 30.19086526296748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We conduct a comprehensive evaluation of state-of-the-art deep active learning methods. Surprisingly, under general settings, no single-model method decisively outperforms entropy-based active learning, and some even fall short of random sampling. We delve into overlooked aspects like starting budget, budget step, and pretraining's impact, revealing their significance in achieving superior results. Additionally, we extend our evaluation to other tasks, exploring the effectiveness of active learning in combination with semi-supervised learning and in object detection. Our experiments provide valuable insights and concrete recommendations for future active learning studies. By uncovering the limitations of current methods and understanding the impact of different experimental settings, we aim to inspire more efficient training of deep learning models in real-world scenarios with limited annotation budgets. This work contributes to advancing active learning's efficacy in deep learning and empowers researchers to make informed decisions when applying active learning to their tasks.
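Since the paper's central finding is that plain entropy-based acquisition remains the baseline to beat, a minimal sketch of that baseline is useful for orientation. This is an illustrative implementation, not the authors' code; the assumption that the pool loader yields (inputs, pool-index) pairs is ours.

```python
import torch

def entropy_acquire(model, pool_loader, budget, device="cpu"):
    """Score every unlabelled pool example by predictive entropy and
    return the indices of the `budget` most uncertain ones."""
    model.eval()
    all_ents, all_idxs = [], []
    with torch.no_grad():
        for x, idx in pool_loader:  # assumed: loader yields (inputs, pool indices)
            probs = torch.softmax(model(x.to(device)), dim=1)
            ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            all_ents.append(ent.cpu())
            all_idxs.append(idx)
    ents, idxs = torch.cat(all_ents), torch.cat(all_idxs)
    top = ents.argsort(descending=True)[:budget]
    return idxs[top]  # send these examples to the annotator, then retrain
```

Each acquisition round queries labels for the returned indices and retrains the model; the "starting budget" and "budget step" studied in the paper correspond to the initial labelled-set size and the per-round budget argument here.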
Related papers
- Unveiling the Role of Expert Guidance: A Comparative Analysis of User-centered Imitation Learning and Traditional Reinforcement Learning [0.0]
This study explores the performance, robustness, and limitations of imitation learning compared to traditional reinforcement learning methods.
The insights gained from this study contribute to the advancement of human-centered artificial intelligence.
arXiv Detail & Related papers (2024-10-28T18:07:44Z)
- Meta-Learning Neural Procedural Biases [9.876317838854018]
We propose Neural Procedural Bias Meta-Learning, a novel framework designed to meta-learn task procedural biases.
We show that by meta-learning the procedural biases of a neural network, we can induce strong inductive biases towards a distribution of learning tasks, enabling robust learning performance across many well-established few-shot learning benchmarks.
arXiv Detail & Related papers (2024-06-12T08:09:29Z)
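The summary above is abstract, so here is one concrete, well-known instance of meta-learning a procedural bias: meta-learning per-parameter inner-loop step sizes, Meta-SGD style, on synthetic regression tasks. This is a generic sketch of the idea, not the paper's framework; all data and names are illustrative.

```python
import torch
import torch.nn.functional as F

w = torch.randn(1, 10, requires_grad=True)   # model weights
b = torch.zeros(1, requires_grad=True)
# per-parameter step sizes: the "procedural bias" being meta-learned
lrs = [torch.full_like(p, 0.01).requires_grad_() for p in (w, b)]
meta_opt = torch.optim.Adam([w, b, *lrs], lr=1e-3)

for step in range(1000):
    xs, ys = torch.randn(8, 10), torch.randn(8, 1)  # support set of a synthetic task
    xq, yq = torch.randn(8, 10), torch.randn(8, 1)  # query set of the same task
    inner = F.mse_loss(F.linear(xs, w, b), ys)
    gw, gb = torch.autograd.grad(inner, (w, b), create_graph=True)
    w2, b2 = w - lrs[0] * gw, b - lrs[1] * gb       # one adapted inner step
    meta_loss = F.mse_loss(F.linear(xq, w2, b2), yq)  # judge the adaptation on queries
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()
```

Because the outer gradient flows through the inner update, the learned step sizes encode how the network should adapt, which is the sense in which a procedural bias is meta-learned.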
- Discovering Temporally-Aware Reinforcement Learning Algorithms [42.016150906831776]
We propose a simple augmentation to two existing objective discovery approaches.
We find that commonly used meta-gradient approaches fail to discover adaptive objective functions.
arXiv Detail & Related papers (2024-02-08T17:07:42Z)
- Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions [3.0539022029583953]
This thesis aims to enhance the practicality of deep learning by improving the label and training efficiency of deep learning models.
We investigate data subset selection techniques, specifically active learning and active sampling, grounded in information-theoretic principles.
arXiv Detail & Related papers (2024-01-09T01:41:36Z)
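A standard example of the information-theoretic acquisition functions this line of work builds on is BALD, the mutual information between a prediction and the model parameters. The Monte Carlo dropout approximation sketched below illustrates the principle rather than the thesis's implementation.

```python
import torch

def bald_scores(model, x, n_samples=20):
    """Approximate BALD: I(y; w | x) = H[E_w p(y|x,w)] - E_w H[p(y|x,w)],
    estimated with Monte Carlo dropout."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])       # (S, N, C)
    mean = probs.mean(dim=0)                                    # E_w p(y|x,w)
    h_mean = -(mean * mean.clamp_min(1e-12).log()).sum(dim=1)   # entropy of the mean
    mean_h = -(probs * probs.clamp_min(1e-12).log()).sum(dim=2).mean(dim=0)
    return h_mean - mean_h  # higher = labelling this point tells us more about w
```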
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
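RLIF's core idea as summarized above, treating the human's decision to intervene as the reward signal, can be illustrated with a toy environment wrapper. The intervention detector and the penalty value are our assumptions for illustration, not the paper's exact formulation.

```python
import gymnasium as gym

class InterventionRewardWrapper(gym.Wrapper):
    """Replace the task reward with a penalty whenever the human intervenes.
    `intervened(obs, action)` is a stand-in for however intervention is detected."""
    def __init__(self, env, intervened, penalty=-1.0):
        super().__init__(env)
        self.intervened = intervened
        self.penalty = penalty

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        # RLIF-style signal: reward is 0 normally and negative on intervention,
        # so the policy learns to avoid states where the expert takes over.
        reward = self.penalty if self.intervened(obs, action) else 0.0
        return obs, reward, terminated, truncated, info
```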
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning [114.1830997893756]
This work focuses on learning a model to plan goal-directed actions in real-life videos.
We propose novel algorithms to model human behaviors through Bayesian Inference and model-based Imitation Learning.
arXiv Detail & Related papers (2021-10-05T01:06:53Z)
- Prioritized Experience-based Reinforcement Learning with Human Guidance: Methodology and Application to Autonomous Driving [2.5895890901896124]
Reinforcement learning requires careful problem definition and considerable computational effort to solve optimization and control problems.
In this paper, a comprehensive human guidance-based reinforcement learning framework is established.
A novel prioritized experience replay mechanism that adapts to human guidance is proposed to boost the efficiency and performance of the reinforcement learning algorithm.
arXiv Detail & Related papers (2021-09-26T07:19:26Z)
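The mechanism described above, replay priorities that adapt to human guidance, can be sketched as proportional prioritized experience replay with a bonus priority on human-guided transitions. The bonus term and constants are illustrative assumptions, and the importance-sampling correction is shown with beta fixed to 1 for brevity.

```python
import numpy as np

class HumanGuidedReplay:
    """Proportional prioritized replay where human-guided transitions
    receive a priority bonus so they are revisited more often."""
    def __init__(self, alpha=0.6, human_bonus=1.0):
        self.data, self.prios = [], []
        self.alpha, self.human_bonus = alpha, human_bonus

    def add(self, transition, td_error, from_human):
        p = abs(td_error) + 1e-6 + (self.human_bonus if from_human else 0.0)
        self.data.append(transition)
        self.prios.append(p)

    def sample(self, batch_size):
        p = np.array(self.prios) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** -1.0  # IS weights, beta = 1
        return [self.data[i] for i in idx], idx, weights / weights.max()
```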
- Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z)
- Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach which is conceptually simple, general, modular, and built on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z)
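One common way to combine off-policy learning with a behavior prior, in the spirit of the summary above, is an advantage-weighted policy update regularized toward the prior. This is a generic sketch, not necessarily the paper's objective; all names and coefficients are illustrative.

```python
import torch

def weighted_policy_loss(log_pi, log_prior, advantages,
                         temperature=1.0, kl_coef=0.1):
    """Advantage-weighted policy update regularized toward a behavior prior.
    log_pi / log_prior: log-probs of sampled actions under policy and prior."""
    w = torch.softmax(advantages / temperature, dim=0)  # importance-style weights
    policy_term = -(w.detach() * log_pi).sum()          # favor high-advantage actions
    # Monte Carlo estimate of KL(pi || prior) over actions sampled from pi
    kl_term = (log_pi - log_prior.detach()).mean()
    return policy_term + kl_coef * kl_term
```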
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and the data distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
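The selection criterion described above, uncertainty combined with coverage of the data distribution, can be sketched as an uncertainty-weighted greedy k-center (coreset) selection. The specific weighting below is a generic assumption rather than the paper's exact formulation.

```python
import numpy as np

def confident_coreset(feats, uncert, labeled_idx, budget):
    """Greedy k-center selection where coverage distances are scaled by
    predictive uncertainty, so uncertain *and* unrepresented points win."""
    # distance of every pool point to its nearest already-labeled point
    d = np.min(np.linalg.norm(feats[:, None] - feats[labeled_idx][None], axis=2), axis=1)
    chosen = []
    for _ in range(budget):
        score = d * uncert            # combine coverage and uncertainty
        i = int(np.argmax(score))
        chosen.append(i)
        # update nearest-center distances with the newly chosen point
        d = np.minimum(d, np.linalg.norm(feats - feats[i], axis=1))
    return chosen
```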