ALBench: A Framework for Evaluating Active Learning in Object Detection
- URL: http://arxiv.org/abs/2207.13339v1
- Date: Wed, 27 Jul 2022 07:46:23 GMT
- Title: ALBench: A Framework for Evaluating Active Learning in Object Detection
- Authors: Zhanpeng Feng, Shiliang Zhang, Rinyoichi Takezoe, Wenze Hu, Manmohan
Chandraker, Li-Jia Li, Vijay K. Narayanan, Xiaoyu Wang
- Abstract summary: This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
- Score: 102.81795062493536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active learning is an important technology for automated machine learning
systems. In contrast to Neural Architecture Search (NAS), which aims at
automating neural network architecture design, active learning aims at
automating training data selection. It is especially critical for long-tailed
tasks, in which positive samples are sparsely distributed. Active learning
alleviates the high cost of data annotation by incrementally training models on
efficiently selected data. Instead of annotating all unlabeled samples, it
iteratively selects and annotates the most valuable samples. Active learning
has been popular in image classification, but has not been fully explored in
object detection. Most current approaches to active learning for object
detection are evaluated under different settings, making it difficult to fairly
compare their performance. To facilitate research in this field, this paper
contributes an active learning benchmark framework named ALBench for evaluating
active learning in object detection. Developed on an automatic deep model
training system, the ALBench framework is easy to use, compatible with
different active learning algorithms, and ensures the same training and testing
protocols. We hope this automated benchmark system helps researchers easily
reproduce results reported in the literature and make objective comparisons
with prior art. The code will be released through GitHub.
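For concreteness, the selection loop the abstract describes can be written as a standard pool-based procedure. The sketch below uses least-confidence uncertainty sampling on synthetic data; it illustrates the general loop only and is not the ALBench API.

```python
# A runnable sketch of the pool-based active learning loop described above,
# using least-confidence uncertainty sampling on synthetic data.
# Illustration only; not the ALBench API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = np.arange(50)                        # small seed set (indices)
pool = np.arange(50, len(X))                   # unlabeled pool (indices)

for r in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)              # least-confidence score
    picked = pool[np.argsort(uncertainty)[-100:]]      # the "most valuable" samples
    labeled = np.concatenate([labeled, picked])        # annotate and add them
    pool = np.setdiff1d(pool, picked)
    print(f"round {r}: {len(labeled)} labeled, "
          f"pool acc {model.score(X[pool], y[pool]):.3f}")
```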
Related papers
- Learning from the Best: Active Learning for Wireless Communications [9.523381807291049]
Active learning algorithms identify the most critical and informative samples in an unlabeled dataset and label only those samples, instead of the complete set.
We present a case study of deep learning-based mmWave beam selection, where labeling is performed by a compute-intensive algorithm based on exhaustive search.
Our results show that using an active learning algorithm for class-imbalanced datasets can reduce labeling overhead by up to 50% for this dataset.
arXiv Detail & Related papers (2024-01-23T12:21:57Z)
- Active Code Learning: Benchmarking Sample-Efficient Training of Code Models [35.54965391159943]
In machine learning for software engineering (ML4Code), efficiently training models of code with less human effort has become an emerging problem.
Active learning is one such technique: it allows developers to train a model with less data while still achieving the desired performance.
This paper builds the first benchmark to study this critical problem - active code learning.
arXiv Detail & Related papers (2023-06-02T03:26:11Z)
- Evaluating Zero-cost Active Learning for Object Detection [4.106771265655055]
Object detection requires substantial labeling effort for learning robust models.
Active learning can reduce this effort by intelligently selecting relevant examples to be annotated.
We show that a key ingredient is not only the score on a bounding-box level but also the technique used to aggregate these scores when ranking images.
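For intuition, a minimal sketch of that aggregation step: each image carries a set of per-box uncertainty scores, and the choice of reduction (mean, max, sum) changes which images rank highest. All data below are hypothetical.

```python
# A sketch of the aggregation step: per-box uncertainty scores must be
# reduced to one score per image before ranking. The reduction choice
# (mean, max, sum) changes which images are queried. Hypothetical data.
import numpy as np

def rank_images(box_scores_per_image, aggregate="mean"):
    agg = {"mean": np.mean, "max": np.max, "sum": np.sum}[aggregate]
    image_scores = [agg(s) if len(s) else 0.0 for s in box_scores_per_image]
    return np.argsort(image_scores)[::-1]      # most uncertain images first

# Three images with different numbers of detected boxes:
scores = [np.array([0.9, 0.1]), np.array([0.5, 0.5, 0.5]), np.array([0.8])]
print(rank_images(scores, "mean"))   # image 2 ranks first (mean 0.8)
print(rank_images(scores, "max"))    # image 0 ranks first (max 0.9)
```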
arXiv Detail & Related papers (2022-12-08T11:48:39Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
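As a rough, label-free illustration of this idea: a surrogate model's predictive distribution can score how often the model under test is likely to be correct. The paper uses a BNN for the surrogate; the sketch below substitutes a random forest ensemble as a crude stand-in for the posterior.

```python
# A rough illustration of label-free metric estimation: score the model
# under test by how probable its predictions are under a surrogate model's
# predictive distribution. The paper uses a Bayesian neural network; a
# random forest is substituted here as a crude stand-in for the posterior.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_tr, y_tr, X_te, y_te = X[:600], y[:600], X[600:], y[600:]

under_test = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
surrogate = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

pred = under_test.predict(X_te)                  # predictions to be judged
p = surrogate.predict_proba(X_te)                # surrogate's label beliefs
est_acc = p[np.arange(len(pred)), pred].mean()   # expected accuracy, no labels used
print(f"estimated acc: {est_acc:.3f}  true acc: {under_test.score(X_te, y_te):.3f}")
```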
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We term this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- Feature Learning for Accelerometer based Gait Recognition [0.0]
Autoencoders are very close to discriminative end-to-end models with regard to their feature learning ability.
Fully convolutional models are able to learn good feature representations, regardless of the training strategy.
arXiv Detail & Related papers (2020-07-31T10:58:01Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We conduct a systematic study of how the most common issues in real-world datasets affect the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
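A sketch of the first technique follows, under the assumption that partial uncertainty sampling means scoring only a random fraction of the pool each round, which is what cuts per-round inference cost. The function and parameter names are illustrative, not the paper's reusable library API.

```python
# A sketch of partial uncertainty sampling, assuming it means scoring only
# a random fraction of the pool each round to cut inference cost. `model`
# is any classifier with an sklearn-style predict_proba; names are
# illustrative, not the paper's library API.
import numpy as np

def partial_uncertainty_query(model, X_pool, query_size=100,
                              subset_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_scored = max(query_size, int(subset_frac * len(X_pool)))
    subset = rng.choice(len(X_pool), size=n_scored, replace=False)
    proba = model.predict_proba(X_pool[subset])    # inference on the subset only
    uncertainty = 1.0 - proba.max(axis=1)          # least-confidence score
    return subset[np.argsort(uncertainty)[-query_size:]]  # pool indices to label
```

The second technique, a larger query size, would correspond here to simply raising `query_size`, so the model is retrained less often per labeled sample.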
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- A Comprehensive Benchmark Framework for Active Learning Methods in Entity Matching [17.064993611446898]
In this paper, we build a unified active learning benchmark framework for entity matching (EM).
The goal of the framework is to enable concrete guidelines for practitioners as to what active learning combinations will work well for EM.
Our framework also includes novel optimizations that improve the quality of the learned model by roughly 9% in terms of F1-score and reduce example selection latencies by up to 10x without affecting the quality of the model.
arXiv Detail & Related papers (2020-03-29T19:08:03Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
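The baseline itself is simple enough to sketch: freeze a representation learned on the meta-training set and fit a linear classifier on each episode's support examples. The encoder below is a stand-in random projection; in the paper it would be a network pretrained on the meta-training set.

```python
# A sketch of the simple few-shot baseline: a frozen encoder trained on the
# meta-training set plus a linear classifier fit on the support set. The
# random projection below is a stand-in for the pretrained network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 64))   # stand-in for learned encoder weights

def embed(x):
    """Frozen feature extractor (hypothetical; a pretrained CNN in practice)."""
    return np.tanh(x @ W)

# One synthetic 5-way 5-shot episode: 25 support and 75 query "images".
support_x = rng.normal(size=(25, 512))
support_y = np.repeat(np.arange(5), 5)
query_x = rng.normal(size=(75, 512))

clf = LogisticRegression(max_iter=1000).fit(embed(support_x), support_y)
query_pred = clf.predict(embed(query_x))   # per-episode predictions
```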
arXiv Detail & Related papers (2020-03-25T17:58:42Z)