How To Overcome Confirmation Bias in Semi-Supervised Image
Classification By Active Learning
- URL: http://arxiv.org/abs/2308.08224v1
- Date: Wed, 16 Aug 2023 08:52:49 GMT
- Title: How To Overcome Confirmation Bias in Semi-Supervised Image
Classification By Active Learning
- Authors: Sandra Gilhuber, Rasmus Hvingelby, Mang Ling Ada Fok, Thomas Seidl
- Abstract summary: We present three data challenges common in real-world applications: between-class imbalance, within-class imbalance, and between-class similarity.
We find that random sampling does not mitigate confirmation bias and, in some cases, leads to worse performance than supervised learning.
Our results provide insights into the potential of combining active and semi-supervised learning in the presence of common real-world challenges.
- Score: 2.1805442504863506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Do we need active learning? The rise of strong deep semi-supervised methods
raises doubt about the usability of active learning in limited labeled data
settings. This is caused by results showing that combining semi-supervised
learning (SSL) methods with a random selection for labeling can outperform
existing active learning (AL) techniques. However, these results are obtained
from experiments on well-established benchmark datasets that can overestimate
external validity. Moreover, the literature lacks sufficient research on the
performance of active semi-supervised learning methods in realistic data
scenarios, leaving a notable gap in our understanding. Therefore, we present
three data challenges common in real-world applications: between-class
imbalance, within-class imbalance, and between-class similarity. These
challenges can hurt SSL performance due to confirmation bias. We conduct
experiments with SSL and AL on simulated data challenges and find that random
sampling does not mitigate confirmation bias and, in some cases, leads to worse
performance than supervised learning. In contrast, we demonstrate that AL can
overcome confirmation bias in SSL in these realistic settings. Our results
provide insights into the potential of combining active and semi-supervised
learning in the presence of common real-world challenges, which is a promising
direction for robust methods when learning with limited labeled data in
real-world applications.
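The abstract contrasts random selection with AL query strategies. A minimal entropy-based uncertainty-sampling query step can be sketched as follows; the toy probabilities, budget, and scoring rule are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy per sample; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def query_uncertain(probs, budget):
    """AL query: pick the `budget` most uncertain unlabeled samples."""
    return np.argsort(-entropy(probs))[:budget]

def query_random(n_unlabeled, budget, seed=0):
    """Baseline: random selection for labeling."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_unlabeled, size=budget, replace=False)

# Toy softmax outputs for 4 unlabeled samples over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # confident
    [0.34, 0.33, 0.33],   # near-uniform: most uncertain
    [0.70, 0.20, 0.10],
    [0.90, 0.05, 0.05],
])
print(query_uncertain(probs, budget=2))  # → [1 2]
```

Under confirmation bias, the confident-but-wrong pseudo-labels are exactly the ones random sampling is unlikely to correct, whereas an uncertainty-driven query concentrates the labeling budget where the model's beliefs are weakest.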
Related papers
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
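The per-sample adaptive label smoothing that UAL describes can be sketched as follows; the linear mapping from an uncertainty score in [0, 1] to a smoothing value is an illustrative assumption, not the paper's exact rule.

```python
import numpy as np

def smoothed_targets(labels, uncertainty, n_classes, max_eps=0.2):
    """Build training targets whose label-smoothing value grows with
    each sample's uncertainty score (assumed normalized to [0, 1])."""
    eps = max_eps * np.asarray(uncertainty)[:, None]   # per-sample epsilon
    onehot = np.eye(n_classes)[labels]                 # hard one-hot targets
    return (1.0 - eps) * onehot + eps / n_classes      # soften per sample

targets = smoothed_targets(labels=np.array([0, 2]),
                           uncertainty=np.array([0.0, 1.0]),
                           n_classes=3)
# The certain sample keeps a hard one-hot target; the uncertain
# sample's target mass is partly redistributed across all classes.
```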
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits comparable performance to weakly and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Active Semi-Supervised Learning by Exploring Per-Sample Uncertainty and Consistency [30.94964727745347]
We propose a method called Active Semi-supervised Learning (ASSL) to improve the accuracy of models at a lower cost.
ASSL involves more dynamic model updates than Active Learning (AL) due to the use of unlabeled data.
ASSL achieved about 5.3 times higher computational efficiency than Semi-supervised Learning (SSL) while achieving the same performance.
arXiv Detail & Related papers (2023-03-15T22:58:23Z)
- Fair Robust Active Learning by Joint Inconsistency [22.150782414035422]
We introduce a novel task, Fair Robust Active Learning (FRAL), integrating conventional FAL and adversarial robustness.
We develop a simple yet effective FRAL strategy by Joint INconsistency (JIN).
Our method exploits the prediction inconsistency between benign and adversarial samples as well as between standard and robust models.
arXiv Detail & Related papers (2022-09-22T01:56:41Z)
- Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative Inconsistency-based virtual aDvErsarial algorithm to further investigate SSL-AL's potential superiority.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based ALs are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Adversarial Self-Supervised Learning for Semi-Supervised 3D Action Recognition [123.62183172631443]
We present Adversarial Self-Supervised Learning (ASSL), a novel framework that tightly couples SSL and the semi-supervised scheme.
Specifically, we design an effective SSL scheme to improve the discrimination capability of learned representations for 3D action recognition.
arXiv Detail & Related papers (2020-07-12T08:01:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.