ALBench: A Framework for Evaluating Active Learning in Object Detection
- URL: http://arxiv.org/abs/2207.13339v1
- Date: Wed, 27 Jul 2022 07:46:23 GMT
- Title: ALBench: A Framework for Evaluating Active Learning in Object Detection
- Authors: Zhanpeng Feng, Shiliang Zhang, Rinyoichi Takezoe, Wenze Hu, Manmohan
Chandraker, Li-Jia Li, Vijay K. Narayanan, Xiaoyu Wang
- Abstract summary: This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, this ALBench framework is easy-to-use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
- Score: 102.81795062493536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active learning is an important technology for automated machine learning
systems. In contrast to Neural Architecture Search (NAS) which aims at
automating neural network architecture design, active learning aims at
automating training data selection. It is especially critical for long-tailed
tasks, in which positive samples are sparsely distributed. Active
learning alleviates the high cost of data annotation by incrementally
training models with efficiently selected data. Instead of annotating
all unlabeled samples, it iteratively selects and annotates the most valuable
samples. Active learning has been popular in image classification, but has not
been fully explored in object detection. Most current approaches to active
learning for object detection are evaluated under different settings, making
it difficult to compare their performance fairly. To facilitate research in
this field, this paper contributes an active learning benchmark framework
named ALBench for evaluating active learning in object detection. Developed
on an automatic deep
model training system, this ALBench framework is easy-to-use, compatible with
different active learning algorithms, and ensures the same training and testing
protocols. We hope this automated benchmark system helps researchers easily
reproduce results reported in the literature and make objective comparisons
with prior art. The code will be released on GitHub.
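The iterative select-and-annotate loop described in the abstract can be sketched as follows. This is a minimal pool-based sketch, not ALBench's actual API; `train`, `score`, and `annotate` are hypothetical placeholders for the model trainer, the acquisition function, and the human labeler.

```python
import random

def train(labeled):
    """Placeholder for model training on the current labeled pool."""
    return {"n_train": len(labeled)}

def score(model, sample):
    """Placeholder acquisition score; higher = more valuable to label.
    A real detector might use predictive uncertainty over its boxes."""
    return random.random()

def active_learning_loop(unlabeled, annotate, rounds=5, budget=100):
    """Iteratively select the top-scoring samples, annotate them, retrain."""
    labeled = []
    model = train(labeled)
    for _ in range(rounds):
        # Rank the unlabeled pool by acquisition score, take the top `budget`.
        ranked = sorted(unlabeled, key=lambda s: score(model, s), reverse=True)
        batch, unlabeled = ranked[:budget], ranked[budget:]
        labeled += [annotate(s) for s in batch]
        model = train(labeled)
    return model, labeled
```

The point of a benchmark like ALBench is that everything outside `score` (training, data splits, evaluation) is held fixed, so only the selection strategy varies between compared methods.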
Related papers
- Efficient Human-in-the-Loop Active Learning: A Novel Framework for Data Labeling in AI Systems [0.6267574471145215]
We propose a novel active learning framework with significant potential for application in modern AI systems.
Unlike traditional active learning methods, which focus only on determining which data point should be labeled, our framework also introduces an innovative perspective on incorporating different query schemes.
Our proposed active learning framework exhibits higher accuracy and lower loss compared to other methods.
arXiv Detail & Related papers (2024-12-31T05:12:51Z) - Exploring Machine Learning Engineering for Object Detection and Tracking by Unmanned Aerial Vehicle (UAV) [3.600782980481468]
This research effort focuses on the development of a machine learning pipeline emphasizing the inclusion of assurance methods with increasing automation.
A new dataset was created by collecting videos of a moving object, such as a Roomba vacuum cleaner, to emulate search and rescue (SAR) in an indoor environment.
After refinement, the dataset was used to train YOLOv4 and Mask R-CNN models, which were deployed on a Parrot Mambo drone to perform real-time object detection and tracking.
arXiv Detail & Related papers (2024-12-19T19:27:31Z) - Oriented Tiny Object Detection: A Dataset, Benchmark, and Dynamic Unbiased Learning [51.170479006249195]
We introduce a new dataset, benchmark, and a dynamic coarse-to-fine learning scheme in this study.
Our proposed dataset, AI-TOD-R, features the smallest object sizes among all oriented object detection datasets.
We present a benchmark spanning a broad range of detection paradigms, including both fully-supervised and label-efficient approaches.
arXiv Detail & Related papers (2024-12-16T09:14:32Z) - Active Code Learning: Benchmarking Sample-Efficient Training of Code Models [35.54965391159943]
In machine learning for software engineering (ML4Code), efficiently training models of code with less human effort has become an emerging problem.
Active learning is one such technique: it allows developers to train a model with reduced data while still achieving the desired performance.
This paper builds the first benchmark to study this critical problem - active code learning.
arXiv Detail & Related papers (2023-06-02T03:26:11Z) - Evaluating Zero-cost Active Learning for Object Detection [4.106771265655055]
Object detection requires substantial labeling effort for learning robust models.
Active learning can reduce this effort by intelligently selecting relevant examples to be annotated.
We show that a key ingredient is not only the score on a bounding box level but also the technique used for aggregating the scores for ranking images.
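The aggregation step this paper highlights can be sketched as below, assuming per-box uncertainty scores are already available; the function names and the max/mean/sum choices are illustrative, not taken from the paper.

```python
def aggregate_image_score(box_scores, method="mean"):
    """Collapse per-box uncertainty scores into one image-level score.
    The choice of aggregation (max, mean, sum) changes the image ranking."""
    if not box_scores:
        return 0.0
    if method == "max":
        return max(box_scores)
    if method == "mean":
        return sum(box_scores) / len(box_scores)
    if method == "sum":
        return sum(box_scores)
    raise ValueError(f"unknown method: {method}")

def rank_images(per_image_boxes, method="mean"):
    """Return image ids sorted by aggregated score, most valuable first."""
    scored = {img: aggregate_image_score(s, method)
              for img, s in per_image_boxes.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

Note that `max` favors images with one highly uncertain box, while `mean` and `sum` favor images whose uncertainty is spread across many boxes, so the same box-level scores can yield quite different annotation queues.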
arXiv Detail & Related papers (2022-12-08T11:48:39Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z) - Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
arXiv Detail & Related papers (2020-06-17T14:51:11Z) - A Comprehensive Benchmark Framework for Active Learning Methods in Entity Matching [17.064993611446898]
In this paper, we build a unified active learning benchmark framework for EM.
The goal of the framework is to enable concrete guidelines for practitioners as to what active learning combinations will work well for EM.
Our framework also includes novel optimizations that improve the quality of the learned model by roughly 9% in terms of F1-score and reduce example selection latencies by up to 10x without affecting the quality of the model.
arXiv Detail & Related papers (2020-03-29T19:08:03Z) - Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.