Active Learning Through a Covering Lens
- URL: http://arxiv.org/abs/2205.11320v1
- Date: Mon, 23 May 2022 14:03:23 GMT
- Title: Active Learning Through a Covering Lens
- Authors: Ofer Yehuda, Avihu Dekel, Guy Hacohen, Daphna Weinshall
- Abstract summary: Deep active learning aims to reduce the annotation cost for deep neural networks.
We propose ProbCover, a new active learning algorithm for the low-budget regime.
We show that our principled active learning strategy improves the state of the art in the low-budget regime on several image recognition benchmarks.
- Score: 7.952582509792972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep active learning aims to reduce the annotation cost for deep neural
networks, which are notoriously data-hungry. Until recently, deep active
learning methods struggled in the low-budget regime, where only a small number
of samples are annotated. The situation has been alleviated by recent advances
in self-supervised representation learning methods, which impart the geometry
of the data representation with rich information about the points. Taking
advantage of this progress, we study the problem of subset selection for
annotation through a "covering" lens, proposing ProbCover -- a new active
learning algorithm for the low budget regime, which seeks to maximize
Probability Coverage. We describe a dual way to view our formulation, from
which one can derive strategies suitable for the high-budget regime of active
learning, related to existing methods like Coreset. We conclude with extensive
experiments evaluating ProbCover in the low-budget regime. We show that our
principled active learning strategy improves the state of the art in the
low-budget regime on several image recognition benchmarks. This method is
especially beneficial in semi-supervised settings, allowing state-of-the-art
semi-supervised methods to achieve high accuracy with only a few labels.
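The covering objective above lends itself to a simple greedy scheme: each labeled point "covers" all points within distance delta of it, and we repeatedly label the point that covers the most still-uncovered points. Below is a minimal sketch of that idea, assuming points are represented by self-supervised embeddings and that the ball radius `delta` is a tuned hyperparameter; the dense distance matrix is for clarity only (a real implementation would use a nearest-neighbor index).

```python
import numpy as np

def greedy_coverage_select(features: np.ndarray, budget: int, delta: float) -> list[int]:
    """Greedily pick points whose delta-balls cover the most uncovered points.

    features: (n, d) array of self-supervised embeddings.
    budget:   number of points to select for annotation.
    delta:    ball radius; a hyperparameter of the covering objective.
    """
    # cover[i, j] is True when point j lies inside the delta-ball around point i.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    cover = dists < delta

    uncovered = np.ones(len(features), dtype=bool)
    selected: list[int] = []
    for _ in range(budget):
        # Marginal gain of each candidate: how many uncovered points it would cover.
        gains = (cover & uncovered).sum(axis=1)
        best = int(np.argmax(gains))
        selected.append(best)
        uncovered &= ~cover[best]  # mark everything in the chosen ball as covered
    return selected
```

Because the coverage objective is monotone and submodular, this greedy rule inherits the standard (1 - 1/e) approximation guarantee for max-coverage problems.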
Related papers
- Enhancing Active Learning for Sentinel 2 Imagery through Contrastive Learning and Uncertainty Estimation [0.0]
We introduce a novel method designed to enhance label efficiency in satellite imagery analysis.
Our approach combines contrastive learning with uncertainty estimation via Monte Carlo Dropout (sketched below).
Our results show that the proposed method performs better than several other popular methods in this field.
arXiv Detail & Related papers (2024-05-22T01:54:51Z)
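Monte Carlo Dropout, mentioned above, estimates uncertainty by keeping dropout active at test time and aggregating several stochastic forward passes. A minimal PyTorch sketch; the number of passes and the entropy-based score are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def mc_dropout_entropy(model: torch.nn.Module, x: torch.Tensor, passes: int = 20) -> torch.Tensor:
    """Predictive entropy from repeated stochastic forward passes.

    Calling .train() keeps dropout layers sampling at inference time, so each
    pass evaluates a different thinned network; high entropy of the averaged
    prediction marks samples worth sending to an annotator.
    """
    model.train()  # keep dropout active (in practice, also freeze batch-norm stats)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    mean = probs.mean(dim=0)  # (batch, num_classes)
    return -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
```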
- Neural Active Learning Beyond Bandits [69.99592173038903]
We study both stream-based and pool-based active learning with neural network approximations.
We propose two algorithms based on the newly designed exploitation and exploration neural networks for stream-based and pool-based active learning.
arXiv Detail & Related papers (2024-04-18T21:52:14Z)
- Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation [69.0695698566235]
We study reinforcement learning with linear function approximation and adversarially changing cost functions.
We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback.
arXiv Detail & Related papers (2023-01-30T17:26:39Z)
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy in acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories under a limited annotation budget.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
- A Simple Baseline for Low-Budget Active Learning [15.54250249254414]
We show that a simple k-means clustering algorithm can outperform state-of-the-art active learning methods on low budgets.
This method can be used as a simple baseline for low-budget active learning on image classification.
arXiv Detail & Related papers (2021-10-22T19:36:56Z)
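One plausible instantiation of the k-means baseline above: cluster the unlabeled pool's embeddings into `budget` clusters and annotate the point nearest each centroid. A sketch under that assumption, using scikit-learn's KMeans (tie handling is simplified):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_select(features: np.ndarray, budget: int, seed: int = 0) -> list[int]:
    """Cluster the pool into `budget` groups and label each cluster's most central point."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(features)
    # For each centroid, pick the pool point closest to it.
    selected = []
    for center in km.cluster_centers_:
        selected.append(int(np.argmin(np.linalg.norm(features - center, axis=1))))
    return selected
```

With strong self-supervised features, this single-shot selection already yields a diverse, representative labeled set, which is why it serves as a strong low-budget baseline.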
- Mitigating Sampling Bias and Improving Robustness in Active Learning [13.994967246046008]
We introduce supervised contrastive active learning, which leverages the contrastive loss in a supervised setting.
We propose an unbiased query strategy that selects informative data samples of diverse feature representations.
We empirically demonstrate that our proposed methods reduce sampling bias and achieve state-of-the-art accuracy and model calibration in an active learning setup.
arXiv Detail & Related papers (2021-09-13T20:58:40Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
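The acquisition rule implied above can be sketched as ranking unlabeled samples by the disagreement between the two auxiliary classifier heads; the L1 distance between softmax outputs below is an illustrative choice, not necessarily the paper's exact measure.

```python
import numpy as np

def discrepancy_scores(probs_a: np.ndarray, probs_b: np.ndarray) -> np.ndarray:
    """Disagreement between two classifier heads' softmax outputs, per sample."""
    return np.abs(probs_a - probs_b).sum(axis=1)  # L1 distance over class probabilities

def select_most_discrepant(probs_a: np.ndarray, probs_b: np.ndarray, budget: int) -> list[int]:
    """Query the samples on which the auxiliary heads disagree most."""
    return np.argsort(-discrepancy_scores(probs_a, probs_b))[:budget].tolist()
```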
- Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime where only a few labeled samples are available.
arXiv Detail & Related papers (2020-10-19T16:53:24Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
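Coreset-style methods like the one above (and the dual, high-budget view in the main abstract) typically build on the k-center greedy rule: repeatedly label the pool point farthest from the current labeled set. A minimal sketch; the confidence weighting that Confident Coreset adds is omitted here.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, budget: int, labeled: list[int]) -> list[int]:
    """Repeatedly select the pool point farthest from its nearest labeled point."""
    if labeled:
        diffs = features[:, None, :] - features[np.asarray(labeled)][None, :, :]
        min_dist = np.linalg.norm(diffs, axis=-1).min(axis=1)
    else:
        min_dist = np.full(len(features), np.inf)  # empty labeled set: first pick is arbitrary
    selected = []
    for _ in range(budget):
        far = int(np.argmax(min_dist))
        selected.append(far)
        # The new center may now be the nearest labeled point for other samples.
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[far], axis=1))
    return selected
```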
- Average Reward Adjusted Discounted Reinforcement Learning: Near-Blackwell-Optimal Policies for Real-World Applications [0.0]
Reinforcement learning aims at finding the best stationary policy for a given Markov Decision Process.
This paper provides deep theoretical insights into the widely applied standard discounted reinforcement learning framework.
We establish a novel near-Blackwell-optimal reinforcement learning algorithm.
arXiv Detail & Related papers (2020-04-02T08:05:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.