LORD: Leveraging Open-Set Recognition with Unknown Data
- URL: http://arxiv.org/abs/2308.12584v1
- Date: Thu, 24 Aug 2023 06:12:41 GMT
- Title: LORD: Leveraging Open-Set Recognition with Unknown Data
- Authors: Tobias Koch, Christian Riess, Thomas Köhler
- Abstract summary: LORD is a framework to Leverage Open-set Recognition by exploiting unknown data.
We identify three model-agnostic training strategies that exploit background data and apply them to well-established classifiers.
- Score: 10.200937444995944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Handling entirely unknown data is a challenge for any deployed classifier.
Classification models are typically trained on a static pre-defined dataset and
are kept in the dark about the open, unassigned feature space. As a result, they
struggle to deal with out-of-distribution data during inference. Addressing
this task at the class level is termed open-set recognition (OSR). However,
most OSR methods are inherently limited, as they train closed-set classifiers
and only adapt the downstream predictions to OSR. This work presents LORD, a
framework to Leverage Open-set Recognition by exploiting unknown Data. LORD
explicitly models open space during classifier training and provides a
systematic evaluation for such approaches. We identify three model-agnostic
training strategies that exploit background data and apply them to
well-established classifiers. Due to LORD's extensive evaluation protocol, we
consistently demonstrate improved recognition of unknown data. The benchmarks
facilitate in-depth analysis across various requirement levels. To mitigate
dependency on extensive and costly background datasets, we explore mixup as an
off-the-shelf data generation technique. Our experiments highlight mixup's
effectiveness as a substitute for background datasets. Lightweight constraints
on mixup synthesis further improve OSR performance.
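As a loose illustration of the abstract's mixup idea, the sketch below blends pairs of known training samples into synthetic "unknowns" and pushes the classifier toward a uniform output on them; the helper names and the maximum-entropy open-space term are assumptions for illustration, not the paper's exact training strategies.

```python
# Hypothetical sketch: mixup-generated "unknowns" as a substitute for a
# curated background dataset during OSR training (PyTorch).
import torch
import torch.nn.functional as F

def mixup_unknowns(x, alpha=1.0):
    """Blend random pairs of known samples to populate open space."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[perm]

def osr_loss(logits_known, y_known, logits_unknown):
    """Cross-entropy on knowns plus a uniform (maximum-entropy) target on
    synthetic unknowns -- one common way to model open space explicitly."""
    ce = F.cross_entropy(logits_known, y_known)
    log_probs = F.log_softmax(logits_unknown, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
    return ce + F.kl_div(log_probs, uniform, reduction="batchmean")
```

In this reading, the synthetic unknowns stand in for an extensive background dataset, which is exactly the dependency the abstract aims to mitigate.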
Related papers
- Robust Semi-supervised Learning by Wisely Leveraging Open-set Data [48.67897991121204]
Open-set Semi-supervised Learning (OSSL) considers a realistic setting in which unlabeled data may come from classes unseen in the labeled set.
We propose Wise Open-set Semi-supervised Learning (WiseOpen), a generic OSSL framework that selectively leverages the open-set data for training the model.
arXiv Detail & Related papers (2024-05-11T10:22:32Z)
- Informed Decision-Making through Advancements in Open Set Recognition and Unknown Sample Detection [0.0]
Open set recognition (OSR) aims to bring classification tasks closer to real-world conditions.
This study provides an algorithm that explores a new representation of the feature space to improve classification in OSR tasks.
arXiv Detail & Related papers (2024-05-09T15:15:34Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
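A loose sketch of the cluster-level pseudo-labelling idea from the entry above, assuming features from a self-supervised pretrained encoder; k-means and the majority vote are simplifying assumptions, not necessarily the paper's exact procedure.

```python
# Sketch: cluster target-domain features, then give every member of a
# cluster the classifier's majority-vote class as its pseudo-label.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pseudo_labels(features, soft_preds, num_classes):
    """features: (N, D) encoder outputs; soft_preds: (N, C) class probabilities."""
    clusters = KMeans(n_clusters=num_classes, n_init=10).fit_predict(features)
    labels = np.empty(len(features), dtype=np.int64)
    for c in range(num_classes):
        mask = clusters == c
        # The cluster's aggregated prediction becomes the label for all members.
        labels[mask] = soft_preds[mask].sum(axis=0).argmax()
    return labels
```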
- Prompt-driven efficient Open-set Semi-supervised Learning [52.30303262499391]
Open-set semi-supervised learning (OSSL), which investigates a more practical scenario where out-of-distribution (OOD) samples are contained only in unlabeled data, has attracted growing interest.
We propose a prompt-driven efficient OSSL framework, called OpenPrompt, which can propagate class information from labeled to unlabeled data with only a small number of trainable parameters.
arXiv Detail & Related papers (2022-09-28T16:25:08Z)
- Towards Accurate Open-Set Recognition via Background-Class Regularization [36.96359929574601]
In open-set recognition (OSR), classifiers should be able to reject unknown-class samples while maintaining high closed-set classification accuracy.
Previous studies attempted to limit the latent feature space and reject data located outside this limited space via offline analyses.
We propose a simple inference process (without offline analyses) to conduct OSR in standard classifier architectures.
We show that the proposed method provides robust OSR results, while maintaining high closed-set classification accuracy.
arXiv Detail & Related papers (2022-07-21T03:55:36Z)
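A hedged sketch of rejection without offline analyses, assuming the classifier's output layer carries K known-class logits plus one background logit (the background-class regularization setup); the paper's exact scoring may differ.

```python
import torch

def predict_with_rejection(logits):
    """logits: (N, K+1), last column is the background class.
    Returns class indices in [0, K), or -1 for rejected samples."""
    pred = logits.argmax(dim=1)
    background = logits.size(1) - 1
    return torch.where(pred == background, torch.full_like(pred, -1), pred)
```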
- Open Set Recognition using Vision Transformer with an Additional Detection Head [6.476341388938684]
We propose a novel approach to open set recognition (OSR) based on the vision transformer (ViT) technique.
Our approach employs two separate training stages. First, a ViT model is trained to perform closed-set classification.
Then, an additional detection head is attached to the embedded features extracted by the ViT and trained to force the representations of known data into compact, class-specific clusters (a rough sketch follows below).
arXiv Detail & Related papers (2022-03-16T07:34:58Z)
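A rough sketch of the second training stage above: a small detection head on top of frozen ViT embeddings, trained to pull known-class features toward learned per-class centers so that clusters stay compact. The head architecture and loss are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Hypothetical detection head trained on frozen ViT embeddings."""
    def __init__(self, embed_dim, num_classes, proj_dim=128):
        super().__init__()
        self.proj = nn.Linear(embed_dim, proj_dim)
        self.centers = nn.Parameter(torch.randn(num_classes, proj_dim))

    def forward(self, embeddings, labels):
        z = self.proj(embeddings)
        # Compactness: squared distance to the ground-truth class center.
        return ((z - self.centers[labels]) ** 2).sum(dim=1).mean()
```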
- Open-Set Recognition: A Good Closed-Set Classifier is All You Need [146.6814176602689]
We show that the ability of a classifier to make the 'none-of-above' decision is highly correlated with its accuracy on the closed-set classes.
We use this correlation to boost the performance of the cross-entropy OSR 'baseline' by improving its closed-set accuracy.
We also construct new benchmarks which better respect the task of detecting semantic novelty.
arXiv Detail & Related papers (2021-10-12T17:58:59Z)
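The cross-entropy OSR 'baseline' referenced above is commonly operationalized by scoring each sample with its maximum softmax probability and ranking known against unknown inputs; a minimal sketch, assuming logits from any closed-set classifier:

```python
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

def msp_scores(logits):
    """Maximum softmax probability: higher means 'more known'."""
    return F.softmax(logits, dim=1).max(dim=1).values

def osr_auroc(logits_known, logits_unknown):
    scores = torch.cat([msp_scores(logits_known), msp_scores(logits_unknown)])
    labels = torch.cat([torch.ones(len(logits_known)),
                        torch.zeros(len(logits_unknown))])
    return roc_auc_score(labels.numpy(), scores.numpy())
```

The paper's finding is that improving the underlying closed-set accuracy lifts exactly this kind of score-based detection.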
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
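A minimal sketch of SCARF-style view generation: a random subset of each row's features is corrupted by resampling from the features' empirical marginal distributions, here approximated by shuffling each column independently. The corruption rate is an assumed parameter.

```python
import torch

def scarf_corrupt(x, corruption_rate=0.6):
    """x: (N, D) tabular batch; returns a corrupted view for contrastive learning."""
    n, d = x.shape
    mask = torch.rand(n, d) < corruption_rate
    # Per-column random permutation approximates sampling from each
    # feature's empirical marginal distribution.
    idx = torch.argsort(torch.rand(n, d), dim=0)
    marginals = torch.gather(x, 0, idx)
    return torch.where(mask, marginals, x)
```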
- Learning Placeholders for Open-Set Recognition [38.57786747665563]
We propose PlaceholdeRs for Open-SEt Recognition (Proser) to maintain classification performance on known classes and reject unknowns.
Proser efficiently generates novel classes via manifold mixup and adaptively sets the value of the reserved open-set classifier during training (a loose sketch follows below).
arXiv Detail & Related papers (2021-03-28T09:18:15Z)
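A hedged sketch of the manifold-mixup step described above: hidden features of samples from different classes are interpolated, and the blends are treated as novel-class placeholders to be trained toward the reserved open-set logit. The cross-class filtering and parameter choices are illustrative assumptions.

```python
import torch

def manifold_mixup_unknowns(hidden, labels, alpha=2.0):
    """hidden: (N, D) intermediate features; returns synthetic novel-class features."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(hidden.size(0))
    keep = labels != labels[perm]  # only mix pairs from different classes
    return lam * hidden[keep] + (1.0 - lam) * hidden[perm][keep]
```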
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning typically assumes the incoming data are fully labeled, which might not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show that ORDisCo achieves significant performance improvements on various benchmark datasets for semi-supervised continual learning (SSCL).
arXiv Detail & Related papers (2021-01-02T09:04:14Z)