Feature Decoupling in Self-supervised Representation Learning for Open
Set Recognition
- URL: http://arxiv.org/abs/2209.14385v1
- Date: Wed, 28 Sep 2022 19:21:53 GMT
- Title: Feature Decoupling in Self-supervised Representation Learning for Open
Set Recognition
- Authors: Jingyun Jia, Philip K. Chan
- Abstract summary: We use a two-stage training strategy for the open set recognition (OSR) problem.
In the first stage, we introduce a self-supervised feature decoupling method that finds the content features of the input samples from the known classes.
In the second stage, we fine-tune the content features with the class labels.
Our experimental results indicate that our proposed self-supervised approach outperforms others in image and malware OSR problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assuming unknown classes could be present during classification, the open set
recognition (OSR) task aims to classify an instance into a known class or
reject it as unknown. In this paper, we use a two-stage training strategy for
the OSR problems. In the first stage, we introduce a self-supervised feature
decoupling method that finds the content features of the input samples from the
known classes. Specifically, our feature decoupling approach learns a
representation that can be split into content features and transformation
features. In the second stage, we fine-tune the content features with the class
labels. The fine-tuned content features are then used for the OSR problems.
Moreover, we consider an unsupervised OSR scenario, where we cluster the
content features learned from the first stage. To measure representation
quality, we introduce intra-inter ratio (IIR). Our experimental results
indicate that our proposed self-supervised approach outperforms others in image
and malware OSR problems. Also, our analyses indicate that IIR is correlated
with OSR performance.
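To make the two ideas above concrete, the following is a minimal, hypothetical PyTorch sketch: an encoder whose embedding is split into a content part and a transformation part, a stage-1 loss that predicts the applied transformation while pulling together content features of transformed versions of the same sample, and an intra-inter ratio over labeled embeddings. The abstract does not give the exact losses or the IIR formula, so the contrastive invariance term and the centroid-based IIR below are assumptions, and names such as DecoupledEncoder, stage1_loss, and intra_inter_ratio are illustrative only.
```python
# Hypothetical sketch (not the authors' released code) of feature decoupling
# and an intra-inter ratio (IIR); loss forms and IIR definition are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledEncoder(nn.Module):
    """Backbone whose embedding is split into content and transformation parts."""
    def __init__(self, backbone: nn.Module, feat_dim: int, content_dim: int, num_transforms: int):
        super().__init__()
        self.backbone = backbone                       # any feature extractor producing (B, feat_dim)
        self.content_dim = content_dim
        self.transform_head = nn.Linear(feat_dim - content_dim, num_transforms)

    def forward(self, x):
        z = self.backbone(x)                           # (B, feat_dim)
        z_content = z[:, :self.content_dim]            # kept for fine-tuning / OSR scoring
        z_transform = z[:, self.content_dim:]          # used only to predict the transformation
        return z_content, self.transform_head(z_transform)

def stage1_loss(model, x_aug, transform_labels, sample_ids, temperature=0.5):
    """Assumed stage-1 self-supervision: predict the applied transformation from the
    transformation part, and pull together content features of different transformed
    versions of the same sample with a contrastive term."""
    z_c, t_logits = model(x_aug)
    loss_transform = F.cross_entropy(t_logits, transform_labels)

    z_c = F.normalize(z_c, dim=1)
    sim = z_c @ z_c.t() / temperature                          # (B, B) cosine similarities
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))            # drop self-pairs
    positives = (sample_ids[:, None] == sample_ids[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Assumes each sample contributes at least two transformed views per batch.
    loss_content = -log_prob[positives].mean()
    return loss_transform + loss_content

def intra_inter_ratio(embeddings, labels):
    """One plausible reading of IIR: mean distance to the own-class centroid divided
    by mean distance to the nearest other-class centroid (lower = tighter classes)."""
    classes = labels.unique()
    centroids = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])
    d = torch.cdist(embeddings, centroids)                     # (N, C) Euclidean distances
    own = labels[:, None] == classes[None, :]
    intra = d[own].mean()
    inter = d.masked_fill(own, float("inf")).min(dim=1).values.mean()
    return (intra / inter).item()
```
In the assumed second stage, only the content part would be fine-tuned with class labels (or clustered, e.g. with k-means, in the unsupervised OSR scenario described above), and IIR can then be computed on held-out embeddings to compare representation quality.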
Related papers
- Disentangling CLIP Features for Enhanced Localized Understanding [58.73850193789384]
We propose Unmix-CLIP, a novel framework designed to reduce mutual feature information (MFI) and improve feature disentanglement.
For the COCO-14 dataset, Unmix-CLIP reduces feature similarity by 24.9%.
arXiv Detail & Related papers (2025-02-05T08:20:31Z)
- OpenSlot: Mixed Open-Set Recognition with Object-Centric Learning [21.933996792254998]
Open-set recognition (OSR) studies typically assume that each image contains only one class label, with the unknown test set having a disjoint label space from the known test set.
This paper introduces the mixed OSR problem, where test images contain multiple class semantics, with both known and unknown classes co-occurring in the negatives.
We propose the OpenSlot framework, based on object-centric learning, which uses slot features to represent diverse class semantics and generate class predictions.
arXiv Detail & Related papers (2024-07-02T16:00:55Z)
- Learning Adversarial Semantic Embeddings for Zero-Shot Recognition in Open Worlds [25.132219723741024]
Zero-Shot Learning (ZSL) focuses on classifying samples of unseen classes with only their side semantic information presented during training.
"Zero-Shot Open-Set Recognition" (ZS-OSR) is required to accurately classify samples from the unseen classes while rejecting samples from the unknown classes during inference.
We introduce a novel approach specifically designed for ZS-OSR, in which our model learns to generate adversarial semantic embeddings of the unknown classes to train an unknowns-informed ZS-OSR classifier.
arXiv Detail & Related papers (2023-07-07T06:54:21Z)
- Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition [47.69171542776917]
We find that features with richer semantic diversity can significantly improve the open-set performance under the same uncertainty scores.
A novel Prototypical Similarity Learning (PSL) framework is proposed to keep the instance variance within the same class in order to retain more instance-specific (IS) information.
arXiv Detail & Related papers (2023-03-25T04:07:36Z)
- Open Set Recognition using Vision Transformer with an Additional Detection Head [6.476341388938684]
We propose a novel approach to open set recognition (OSR) based on the vision transformer (ViT) technique.
Our approach employs two separate training stages. First, a ViT model is trained to perform closed set classification.
Then, an additional detection head is attached to the embedded features extracted by the ViT and trained to force the representations of known data into compact, class-specific clusters.
arXiv Detail & Related papers (2022-03-16T07:34:58Z)
- M2IOSR: Maximal Mutual Information Open Set Recognition [47.1393314282815]
We propose a mutual information-based method with a streamlined architecture for open set recognition.
The proposed method significantly improves the performance of baselines and consistently achieves new state-of-the-art results on several benchmarks.
arXiv Detail & Related papers (2021-08-05T05:08:12Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features (a minimal sketch of this corruption appears after this list).
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
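As referenced in the SCARF entry above, here is a hedged sketch of SCARF-style view corruption, assuming the common reading that each selected feature value is replaced by the same feature's value from a randomly chosen row of the batch (an approximate draw from that feature's empirical marginal); the function name and default corruption rate are illustrative, not SCARF's reference implementation.
```python
# Sketch of SCARF-style corruption for tabular data (assumed details, see lead-in).
import torch

def scarf_corrupt(x: torch.Tensor, corruption_rate: float = 0.6) -> torch.Tensor:
    """x: (batch, num_features) tabular batch; returns one corrupted view of x."""
    batch, num_features = x.shape
    # Bernoulli mask choosing which features to corrupt in each row.
    corrupt_mask = torch.rand(batch, num_features, device=x.device) < corruption_rate
    # Replace each selected value with the same feature taken from a random donor row,
    # approximating a draw from that feature's empirical marginal distribution.
    donor_rows = torch.randint(0, batch, (batch, num_features), device=x.device)
    col_idx = torch.arange(num_features, device=x.device)
    marginal_samples = x[donor_rows, col_idx]          # shape (batch, num_features)
    return torch.where(corrupt_mask, marginal_samples, x)
```
The original and corrupted rows would then be passed through a shared encoder and trained with a contrastive objective, matching the "views are formed by corrupting a random subset of features" description in the SCARF entry above.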