Training Ensembles with Inliers and Outliers for Semi-supervised Active
Learning
- URL: http://arxiv.org/abs/2307.03741v1
- Date: Fri, 7 Jul 2023 17:50:07 GMT
- Title: Training Ensembles with Inliers and Outliers for Semi-supervised Active
Learning
- Authors: Vladan Stojnić, Zakaria Laskar, Giorgos Tolias
- Abstract summary: In this work, we present an approach that leverages three highly synergistic components: joint classifier training with inliers and outliers, semi-supervised learning through pseudo-labeling, and model ensembling.
Remarkably, despite its simplicity, our proposed approach outperforms all other methods.
- Score: 16.204251285425478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep active learning in the presence of outlier examples poses a realistic
yet challenging scenario. Acquiring unlabeled data for annotation requires a
delicate balance between avoiding outliers to conserve the annotation budget
and prioritizing useful inlier examples for effective training. In this work,
we present an approach that leverages three highly synergistic components,
which are identified as key ingredients: joint classifier training with inliers
and outliers, semi-supervised learning through pseudo-labeling, and model
ensembling. Our work demonstrates that ensembling significantly enhances the
accuracy of pseudo-labeling and improves the quality of data acquisition. By
enabling semi-supervision through the joint training process, where outliers
are properly handled, we observe a substantial boost in classifier accuracy
through the use of all available unlabeled examples. Notably, we reveal that
joint training renders explicit outlier detection, a conventional component of
acquisition in prior work, unnecessary. The three
key components align seamlessly with numerous existing approaches. Through
empirical evaluations, we showcase that their combined use leads to a
performance increase. Remarkably, despite its simplicity, our proposed approach
outperforms all other methods. Code:
https://github.com/vladan-stojnic/active-outliers
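To make the interplay of the components concrete, here is a minimal sketch of ensemble pseudo-labeling with an explicit outlier class. It is not the authors' implementation (see the repository above for that); the (K+1)-class head, the confidence threshold, and all function names are assumptions made for illustration.

```python
import numpy as np

def ensemble_pseudo_labels(prob_stack, threshold=0.9):
    """Average per-model softmax outputs over the unlabeled pool and keep
    confident predictions as pseudo-labels.

    prob_stack: (n_models, n_unlabeled, n_classes + 1) softmax probabilities,
    where the extra last class is an explicit "outlier" class learned during
    joint training with inliers and outliers (an assumption of this sketch).
    Returns indices and labels of confidently pseudo-labeled inliers.
    """
    mean_prob = prob_stack.mean(axis=0)   # ensemble-averaged probabilities
    pred = mean_prob.argmax(axis=1)       # hard ensemble prediction
    conf = mean_prob.max(axis=1)          # ensemble confidence
    n_inlier_classes = mean_prob.shape[1] - 1
    # Confident predictions assigned to the outlier class are simply not
    # pseudo-labeled as inliers, so no separate outlier detector is needed.
    keep = (conf >= threshold) & (pred < n_inlier_classes)
    return np.flatnonzero(keep), pred[keep]

# Toy usage: 3 ensemble members, 6 unlabeled samples, 4 inlier classes + outlier
probs = np.random.dirichlet(np.ones(5), size=(3, 6))
indices, labels = ensemble_pseudo_labels(probs, threshold=0.5)
```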
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- One-bit Supervision for Image Classification: Problem, Solution, and Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision; negative label suppression is sketched below.
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
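Negative label suppression admits a compact sketch. The snippet below is a hypothetical illustration, not code from the paper: it assumes one-bit feedback has ruled out certain classes per sample, and all names are invented for the example.

```python
import numpy as np

def suppress_negative_labels(probs, rejected_per_sample):
    """Zero out classes an annotator has rejected and renormalize, so the
    suppressed classes cannot be chosen as pseudo-labels downstream.

    probs: (n, k) softmax outputs.
    rejected_per_sample: list of length n; element i is an iterable of class
    indices ruled out for sample i by one-bit (correct/incorrect) feedback.
    Assumes at least one class remains plausible per sample.
    """
    out = probs.copy()
    for i, rejected in enumerate(rejected_per_sample):
        out[i, list(rejected)] = 0.0
    return out / out.sum(axis=1, keepdims=True)
```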
- Quantile-based Maximum Likelihood Training for Outlier Detection [5.902139925693801]
We introduce a quantile-based maximum likelihood objective for learning the inlier distribution, improving outlier separation at inference time.
Our approach fits a normalizing flow to pre-trained discriminative features and detects outliers by their evaluated log-likelihood; see the sketch after this entry.
arXiv Detail & Related papers (2023-08-20T22:27:54Z)
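The inference recipe (fit a density to pre-trained features, threshold log-likelihood at a quantile) can be sketched as follows. This is an illustrative stand-in, not the paper's method: a Gaussian replaces the normalizing flow, and the 5% quantile and function names are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_inlier_density(train_feats):
    """Fit a density to pre-trained inlier features. A Gaussian stands in
    here for the paper's normalizing flow, purely for illustration."""
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-4 * np.eye(train_feats.shape[1])
    return multivariate_normal(mean=mean, cov=cov)

def flag_outliers(density, train_feats, test_feats, quantile=0.05):
    """Flag test samples whose log-likelihood under the inlier density falls
    below a quantile of the training log-likelihoods (threshold illustrative)."""
    tau = np.quantile(density.logpdf(train_feats), quantile)
    return density.logpdf(test_feats) < tau
```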
- Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation for Anomaly Detection [64.50126371767476]
We propose Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation (UniCon-HA).
We explicitly encourage the concentration of inliers and the dispersion of virtual outliers via supervised and unsupervised contrastive losses.
Our method is evaluated under three AD settings including unlabeled one-class, unlabeled multi-class, and labeled multi-class.
arXiv Detail & Related papers (2023-08-20T04:01:50Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data [27.75143621836449]
We propose UnMixMatch, a semi-supervised learning framework which can learn effective representations from unconstrained data.
We perform extensive experiments on 4 commonly used datasets and demonstrate a 4.79% performance boost over existing semi-supervised methods.
arXiv Detail & Related papers (2023-06-02T01:07:14Z)
- Active Self-Training for Weakly Supervised 3D Scene Semantic Segmentation [17.27850877649498]
We introduce a method for weakly supervised segmentation of 3D scenes that combines self-training and active learning.
We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous works and baselines.
arXiv Detail & Related papers (2022-09-15T06:00:25Z)
- Dynamic Supervisor for Cross-dataset Object Detection [52.95818230087297]
Cross-dataset training in object detection tasks is complicated because the inconsistency in the category range across datasets transforms fully supervised learning into semi-supervised learning.
We propose a dynamic supervisor framework that updates the annotations multiple times through multiple-updated submodels trained using hard and soft labels.
In the final generated annotations, both recall and precision improve significantly through the integration of hard-label training with soft-label training.
arXiv Detail & Related papers (2022-04-01T03:18:46Z)
- Contrastive Regularization for Semi-Supervised Learning [46.020125061295886]
We propose contrastive regularization to improve both the efficiency and accuracy of consistency regularization using well-clustered features of unlabeled data.
Our method also shows robust performance on open-set semi-supervised learning where unlabeled data includes out-of-distribution samples.
arXiv Detail & Related papers (2022-01-17T07:20:11Z)
- Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training [20.242645823965145]
Out-of-scope intent detection is of practical importance in task-oriented dialogue systems.
We propose a method to train an out-of-scope intent classifier in a fully end-to-end manner by simulating the test scenario in training.
We evaluate our method extensively on four benchmark dialogue datasets and observe significant improvements over state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-16T08:17:18Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.