SIMILAR: Submodular Information Measures Based Active Learning In
Realistic Scenarios
- URL: http://arxiv.org/abs/2107.00717v1
- Date: Thu, 1 Jul 2021 19:49:44 GMT
- Title: SIMILAR: Submodular Information Measures Based Active Learning In
Realistic Scenarios
- Authors: Suraj Kothawade, Nathan Beck, Krishnateja Killamsetty, Rishabh Iyer
- Abstract summary: SIMILAR is a unified active learning framework using recently proposed submodular information measures (SIM) as acquisition functions.
We show that SIMILAR significantly outperforms existing active learning algorithms by as much as 5% - 18% in the case of rare classes and 5% - 10% in the case of out-of-distribution data.
- Score: 1.911678487931003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning has proven to be useful for minimizing labeling costs by
selecting the most informative samples. However, existing active learning
methods do not work well in realistic scenarios such as imbalance or rare
classes, out-of-distribution data in the unlabeled set, and redundancy. In this
work, we propose SIMILAR (Submodular Information Measures based actIve
LeARning), a unified active learning framework using recently proposed
submodular information measures (SIM) as acquisition functions. We argue that
SIMILAR not only works in standard active learning, but also easily extends to
the realistic settings considered above and acts as a one-stop solution for
active learning that is scalable to large real-world datasets. Empirically, we
show that SIMILAR significantly outperforms existing active learning algorithms
by as much as ~5% - 18% in the case of rare classes and ~5% - 10% in the case
of out-of-distribution data on several image classification tasks like
CIFAR-10, MNIST, and ImageNet.
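The acquisition step in SIMILAR selects a batch by greedily maximizing a submodular information measure over the unlabeled pool. The exact SIM objectives are defined in the paper; the sketch below only illustrates the underlying greedy submodular selection, using a plain facility-location function over cosine similarities of feature embeddings (the function names and the similarity choice are illustrative assumptions, not the authors' code):

```python
import numpy as np

def facility_location_gain(sim, selected_max, idx):
    # Marginal gain of adding point idx: how much the best-coverage
    # similarity improves, summed over every point in the pool.
    return np.maximum(selected_max, sim[idx]).sum() - selected_max.sum()

def greedy_select(features, budget):
    # Pairwise cosine similarities between unlabeled-pool embeddings.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = sim.shape[0]
    selected, selected_max = [], np.zeros(n)
    for _ in range(budget):
        gains = [facility_location_gain(sim, selected_max, i)
                 if i not in selected else -np.inf
                 for i in range(n)]
        best = int(np.argmax(gains))
        selected.append(best)
        selected_max = np.maximum(selected_max, sim[best])
    return selected

# Toy pool with two well-separated clusters of 5 points each; with a
# budget of 2, the greedy step should cover both clusters.
rng = np.random.default_rng(0)
pool = np.vstack([rng.normal(0, 0.1, (5, 2)) + [1, 0],
                  rng.normal(0, 0.1, (5, 2)) + [0, 1]])
batch = greedy_select(pool, budget=2)
```

Because facility location is submodular, this greedy loop carries the standard (1 - 1/e) approximation guarantee, which is what makes such acquisition functions tractable at batch scale.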
Related papers
- Active Prompt Learning with Vision-Language Model Priors [9.173468790066956]
We introduce a class-guided clustering that leverages the pre-trained image and text encoders of vision-language models.
We propose a budget-saving selective querying based on adaptive class-wise thresholds.
arXiv Detail & Related papers (2024-11-23T02:34:33Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), the smallest probability of disagreement of the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Revisiting Deep Active Learning for Semantic Segmentation [37.3546941940388]
We show that the data distribution is decisive for the performance of the various active learning objectives proposed in the literature.
We demonstrate that the integration of semi-supervised learning with active learning can improve performance when the two objectives are aligned.
arXiv Detail & Related papers (2023-02-08T14:23:37Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Mitigating Sampling Bias and Improving Robustness in Active Learning [13.994967246046008]
We introduce supervised contrastive active learning by leveraging the contrastive loss for active learning under a supervised setting.
We propose an unbiased query strategy that selects informative data samples of diverse feature representations.
We empirically demonstrate that our proposed methods reduce sampling bias and achieve state-of-the-art accuracy and model calibration in an active learning setup.
arXiv Detail & Related papers (2021-09-13T20:58:40Z)
- Rebuilding Trust in Active Learning with Actionable Metrics [77.99796068970569]
Active Learning (AL) is an active domain of research, but is seldom used in industry despite pressing needs.
This is in part due to a misalignment of objectives: industrial practitioners need methods they can trust in production, while research strives for the best results on selected datasets.
We present various actionable metrics to help rebuild trust of industrial practitioners in Active Learning.
arXiv Detail & Related papers (2020-12-18T09:34:59Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- A Comprehensive Benchmark Framework for Active Learning Methods in Entity Matching [17.064993611446898]
In this paper, we build a unified active learning benchmark framework for EM.
The goal of the framework is to enable concrete guidelines for practitioners as to what active learning combinations will work well for EM.
Our framework also includes novel optimizations that improve the quality of the learned model by roughly 9% in terms of F1-score and reduce example selection latencies by up to 10x without affecting the quality of the model.
arXiv Detail & Related papers (2020-03-29T19:08:03Z)
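Several of the papers above build on the same uncertainty-based acquisition step: score each unlabeled sample by the model's predictive uncertainty and query the most uncertain ones. A minimal sketch of that step using predictive entropy (a generic illustration, not any single paper's method):

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per sample; higher means more uncertain.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def query_batch(probs, batch_size):
    # Pick the batch_size most uncertain pool indices for labeling.
    return np.argsort(-entropy(probs))[:batch_size].tolist()

# Toy pool of class probabilities: sample 0 is confidently classified,
# sample 1 is maximally uncertain, sample 2 is in between.
probs = np.array([[0.99, 0.01],
                  [0.50, 0.50],
                  [0.80, 0.20]])
batch = query_batch(probs, batch_size=1)
```

With a budget of one, the most uncertain sample (index 1) is queried; methods like LDM, BAL, and the SIM-based objectives can be read as replacements for this scoring function that additionally account for diversity, rarity, or out-of-distribution structure.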
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.