Self-supervised Detransformation Autoencoder for Representation Learning
in Open Set Recognition
- URL: http://arxiv.org/abs/2105.13557v1
- Date: Fri, 28 May 2021 02:45:57 GMT
- Title: Self-supervised Detransformation Autoencoder for Representation Learning
in Open Set Recognition
- Authors: Jingyun Jia, Philip K. Chan
- Abstract summary: We propose a self-supervision method, Detransformation Autoencoder (DTAE), for the open set recognition (OSR) problem.
Our proposed self-supervision method achieves significant gains in detecting the unknown class and classifying the known classes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The objective of Open set recognition (OSR) is to learn a classifier that can
reject the unknown samples while classifying the known classes accurately. In
this paper, we propose a self-supervision method, Detransformation Autoencoder
(DTAE), for the OSR problem. This proposed method engages in learning
representations that are invariant to the transformations of the input data.
Experiments on several standard image datasets indicate that the pre-training
process significantly improves the model performance in the OSR tasks.
Meanwhile, our proposed self-supervision method achieves significant gains in
detecting the unknown class and classifying the known classes. Moreover, our
analysis indicates that DTAE can yield representations that contain more target
class information and less transformation information than RotNet.
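The abstract contrasts DTAE with RotNet by the self-supervision targets each method constructs: DTAE reconstructs the original (untransformed) input from a transformed view, while RotNet predicts the transformation label. A minimal NumPy sketch of that data pairing, using 90-degree rotations as the transformations (function names are illustrative, not from the paper's code):

```python
import numpy as np

def rotate90(img, k):
    """Rotate an HxW image by k * 90 degrees counter-clockwise."""
    return np.rot90(img, k)

def dtae_pairs(img):
    """DTAE-style self-supervision: the input is a transformed image,
    but the reconstruction target is always the ORIGINAL image, pushing
    the encoder toward transformation-invariant representations."""
    return [(rotate90(img, k), img) for k in range(4)]

def rotnet_pairs(img):
    """RotNet-style self-supervision: the target is the transformation
    label itself, so the features must encode the applied rotation."""
    return [(rotate90(img, k), k) for k in range(4)]

img = np.arange(9).reshape(3, 3)
# every DTAE target is the untransformed image
targets = [y for _, y in dtae_pairs(img)]
# RotNet targets are the rotation indices 0..3
labels = [y for _, y in rotnet_pairs(img)]
```

This pairing is why, as the abstract notes, DTAE representations carry less transformation information than RotNet's: predicting the rotation forces rotation to be encoded, while undoing it rewards discarding it.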
Related papers
- EOL: Transductive Few-Shot Open-Set Recognition by Enhancing Outlier Logits
In Few-Shot Learning, models are trained to recognise unseen objects from a query set, given a few labelled examples from a support set.
In this work, we explore the more nuanced and practical challenge of Open-Set Few-Shot Recognition.
arXiv Detail & Related papers (2024-08-04T15:00:22Z)
- Informed Decision-Making through Advancements in Open Set Recognition and Unknown Sample Detection
Open set recognition (OSR) aims to bring classification closer to real-world conditions, where test samples may belong to classes unseen during training.
This study provides an algorithm exploring a new representation of feature space to improve classification in OSR tasks.
arXiv Detail & Related papers (2024-05-09T15:15:34Z)
- Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition
We find that features with richer semantic diversity can significantly improve the open-set performance under the same uncertainty scores.
A novel Prototypical Similarity Learning (PSL) framework is proposed to keep the instance variance within the same class, retaining more instance-specific information.
arXiv Detail & Related papers (2023-03-25T04:07:36Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Class-Specific Semantic Reconstruction for Open Set Recognition
Open set recognition enables deep neural networks (DNNs) to identify samples of unknown classes.
We propose a novel method, called Class-Specific Semantic Reconstruction (CSSR), that integrates the power of auto-encoder (AE) and prototype learning.
Results of experiments conducted on multiple datasets show that the proposed method achieves outstanding performance in both closed and open set recognition.
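The CSSR summary above describes reconstruction-based OSR: samples that no known class can reconstruct well are flagged as unknown. As a rough linear analogue (not the paper's deep class-specific autoencoders), one can fit a per-class PCA subspace as a tiny "autoencoder" and reject samples whose best reconstruction error is large; all names and the synthetic data here are illustrative assumptions:

```python
import numpy as np

def fit_class_subspace(X, dim):
    """Fit a linear 'autoencoder' (PCA subspace) to one class's samples."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:dim]              # class mean and principal directions

def recon_error(x, model):
    mu, W = model
    z = (x - mu) @ W.T               # encode into the class subspace
    x_hat = mu + z @ W               # decode back to input space
    return np.linalg.norm(x - x_hat)

rng = np.random.default_rng(0)
# two known classes living near different 1-D directions in 5-D space
A = rng.normal(size=(200, 1)) @ np.array([[1, 0, 0, 0, 0.]]) \
    + rng.normal(scale=0.05, size=(200, 5))
B = rng.normal(size=(200, 1)) @ np.array([[0, 1, 0, 0, 0.]]) \
    + rng.normal(scale=0.05, size=(200, 5))
models = {"A": fit_class_subspace(A, 1), "B": fit_class_subspace(B, 1)}

def osr_score(x):
    """Small best-case reconstruction error => known; large => unknown."""
    return min(recon_error(x, m) for m in models.values())

known = np.array([2.0, 0, 0, 0, 0])    # lies on class A's manifold
unknown = np.array([0, 0, 2.0, 0, 0])  # lies off both class manifolds
# osr_score(known) is small; osr_score(unknown) is large, so a threshold
# on this score separates known from unknown samples
```

The design point this sketch preserves is that each class gets its own reconstruction model, so "unknown" is defined as "poorly reconstructed by every known class" rather than by a single global autoencoder.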
arXiv Detail & Related papers (2022-07-05T16:25:34Z)
- Open Set Recognition using Vision Transformer with an Additional Detection Head
We propose a novel approach to open set recognition (OSR) based on the vision transformer (ViT) technique.
Our approach employs two separate training stages. First, a ViT model is trained to perform closed set classification.
Then, an additional detection head is attached to the embedded features extracted by the ViT and trained to force the representations of known data into compact, class-specific clusters.
arXiv Detail & Related papers (2022-03-16T07:34:58Z)
- A Generic Self-Supervised Framework of Learning Invariant Discriminative Features
This paper proposes a generic SSL framework based on a constrained self-labelling assignment process.
The proposed training strategy outperforms a majority of state-of-the-art representation learning methods based on AE structures.
arXiv Detail & Related papers (2022-02-14T18:09:43Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
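The view construction described above can be sketched directly: corrupt a random subset of a sample's features by resampling each from that feature's empirical marginal (i.e., from another training row). A minimal NumPy sketch under that reading; the function name and the corruption fraction `frac` are illustrative choices, not values from the paper:

```python
import numpy as np

def scarf_corrupt(x, X_train, frac=0.6, rng=None):
    """Corrupt a random subset of features of x by replacing each with a
    value drawn from that feature's empirical marginal distribution."""
    if rng is None:
        rng = np.random.default_rng()
    x_c = x.copy()
    d = x.shape[0]
    idx = rng.choice(d, size=int(frac * d), replace=False)   # which features
    rows = rng.integers(0, X_train.shape[0], size=idx.shape[0])
    x_c[idx] = X_train[rows, idx]    # resample each chosen feature
    return x_c

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))       # toy tabular training set
x = X[0]
# two corrupted views of the same anchor; a contrastive loss (e.g. InfoNCE)
# would pull their encodings together and push other anchors' views apart
view1 = scarf_corrupt(x, X, rng=rng)
view2 = scarf_corrupt(x, X, rng=rng)
```

Because the replacement values come from the marginals of real data, corrupted views stay on plausible feature ranges, which is what lets this corruption serve as a generic augmentation for tabular data where image-style augmentations do not apply.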
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Autoencoding Variational Autoencoder
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Guided Variational Autoencoder for Disentanglement Learning
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.