Pareto Optimization for Active Learning under Out-of-Distribution Data Scenarios
- URL: http://arxiv.org/abs/2207.01190v1
- Date: Mon, 4 Jul 2022 04:11:44 GMT
- Title: Pareto Optimization for Active Learning under Out-of-Distribution Data Scenarios
- Authors: Xueying Zhan, Zeyu Dai, Qingzhong Wang, Qing Li, Haoyi Xiong, Dejing Dou, Antoni B. Chan
- Abstract summary: We propose a sampling scheme, which selects optimal subsets of unlabeled samples with fixed batch size from the unlabeled data pool.
Experimental results show its effectiveness on both classical Machine Learning (ML) and Deep Learning (DL) tasks.
- Score: 79.02009938011447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pool-based Active Learning (AL) has achieved great success in minimizing
labeling cost by sequentially selecting informative unlabeled samples from a
large unlabeled data pool and querying their labels from oracle/annotators.
However, existing AL sampling strategies might not work well in
out-of-distribution (OOD) data scenarios, where the unlabeled data pool
contains some data samples that do not belong to the classes of the target
task. Achieving good AL performance under OOD data scenarios is a challenging
task due to the natural conflict between AL sampling strategies and OOD sample
detection. AL selects data that are hard to be classified by the current basic
classifier (e.g., samples whose predicted class probabilities have high
entropy), while OOD samples tend to have more uniform predicted class
probabilities (i.e., high entropy) than in-distribution (ID) data. In this
paper, we propose a sampling scheme, Monte-Carlo Pareto Optimization for Active
Learning (POAL), which selects optimal subsets of unlabeled samples with fixed
batch size from the unlabeled data pool. We cast the AL sampling task as a
multi-objective optimization problem, and thus we utilize Pareto optimization
based on two conflicting objectives: (1) the normal AL data sampling scheme
(e.g., maximum entropy), and (2) the confidence of not being an OOD sample.
Experimental results show its effectiveness on both classical Machine Learning
(ML) and Deep Learning (DL) tasks.
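The two conflicting objectives in the abstract can be illustrated with a simple point-wise Pareto-front computation. The sketch below is an illustration of the idea, not the authors' POAL implementation: `id_confidence` is a hypothetical stand-in for whatever ID/OOD score a detector would provide, and the entropy objective is computed from predicted class probabilities.

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of non-dominated rows; all objectives are maximized."""
    n = objectives.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some row is >= in every objective and > in at least one
        dominated_by = (np.all(objectives >= objectives[i], axis=1)
                        & np.any(objectives > objectives[i], axis=1))
        if dominated_by.any():
            on_front[i] = False
    return on_front

# Toy unlabeled pool: predicted class probabilities plus a hypothetical ID-confidence score.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=200)                # softmax outputs for 200 samples
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)   # objective 1: AL informativeness
id_confidence = rng.random(200)                            # objective 2: confidence of being ID
candidates = np.flatnonzero(pareto_front(np.stack([entropy, id_confidence], axis=1)))
```

Note that POAL itself selects fixed-size subsets via Monte-Carlo Pareto optimization over candidate batches; the point-wise front above only shows how the two objectives trade off against each other.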
Related papers
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
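A minimal sketch of the adaptive idea described above, per-sample label smoothing scaled by an uncertainty score; the function name, the linear schedule, and `max_eps` are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def smoothed_targets(labels, num_classes, uncertainty, max_eps=0.2):
    """One-hot targets whose smoothing value grows with per-sample uncertainty.

    uncertainty is assumed to lie in [0, 1]; max_eps is a hypothetical cap.
    """
    eps = max_eps * np.clip(np.asarray(uncertainty, dtype=float), 0.0, 1.0)
    onehot = np.eye(num_classes)[np.asarray(labels)]
    # Standard label smoothing, but with a per-sample epsilon instead of a constant.
    return onehot * (1.0 - eps[:, None]) + eps[:, None] / num_classes
```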
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions [2.578242050187029]
Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.
Most existing active learning methods have been evaluated in an ideal setting where only samples relevant to the target task exist in an unlabeled data pool.
We introduce new active learning benchmarks that include ambiguous, task-irrelevant out-of-distribution samples as well as in-distribution samples.
arXiv Detail & Related papers (2023-03-25T10:46:10Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach achieves superior performance over state-of-the-art active learning methods on image classification and semantic segmentation tasks.
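The query rule can be sketched as follows; this is an illustration of estimating loss via the discrepancy between outputs at two training checkpoints, not the paper's actual implementation:

```python
import numpy as np

def output_discrepancy(outputs_now, outputs_earlier):
    """L2 distance between a model's outputs at two training steps,
    used here as a proxy for per-sample loss."""
    return np.linalg.norm(outputs_now - outputs_earlier, axis=1)

def select_queries(outputs_now, outputs_earlier, batch_size):
    """Query the samples whose outputs changed most between checkpoints."""
    scores = output_discrepancy(outputs_now, outputs_earlier)
    return np.argsort(-scores)[:batch_size]
```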
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Active Learning at the ImageNet Scale [43.595076693347835]
In this work, we study a combination of active learning (AL) and pretraining (SSP) on ImageNet.
We find that performance on small toy datasets is not representative of performance on ImageNet due to class-imbalanced samples selected by an active learner.
We propose Balanced Selection (BASE), a simple, scalable AL algorithm that outperforms random sampling consistently.
arXiv Detail & Related papers (2021-11-25T02:48:51Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning [54.85397562961903]
Semi-supervised learning (SSL) has been proposed to leverage unlabeled data for training powerful models when only limited labeled data is available.
We address a more complex novel scenario named open-set SSL, where out-of-distribution (OOD) samples are contained in unlabeled data.
Our method achieves state-of-the-art results by successfully eliminating the effect of OOD samples.
arXiv Detail & Related papers (2020-07-22T10:33:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.