Conditional Variational Capsule Network for Open Set Recognition
- URL: http://arxiv.org/abs/2104.09159v1
- Date: Mon, 19 Apr 2021 09:39:30 GMT
- Title: Conditional Variational Capsule Network for Open Set Recognition
- Authors: Yunrui Guo, Guglielmo Camporese, Wenjing Yang, Alessandro Sperduti,
Lamberto Ballan
- Abstract summary: In open set recognition, a classifier has to detect unknown classes that are not known at training time.
Recently proposed Capsule Networks have been shown to outperform alternatives in many fields, particularly in image recognition.
In our proposal, during training, capsule features of the same known class are encouraged to match a pre-defined Gaussian, one for each class.
- Score: 64.18600886936557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In open set recognition, a classifier has to detect unknown classes that are
not known at training time. In order to recognize new classes, the classifier
has to project the input samples of known classes into very compact and
well-separated regions of the feature space in order to discriminate outlier
samples of unknown classes. Recently proposed Capsule Networks have been shown
to outperform alternatives in many fields, particularly in image recognition;
however, they have not yet been fully applied to open-set recognition. In
capsule networks, scalar neurons are replaced by capsule vectors or matrices,
whose entries represent different properties of objects. In our proposal,
during training, capsule features of the same known class are encouraged to
match a pre-defined Gaussian, one for each class. To this end, we use the
variational autoencoder framework, with a set of Gaussian priors as the
approximation for the posterior distribution. In this way, we are able to
control the compactness of the features of the same class around the centers of
the Gaussians, thus controlling the ability of the classifier to detect samples
from unknown classes. We conducted several experiments and ablation studies of
our model, obtaining state-of-the-art results on different datasets in the open
set recognition and unknown detection tasks.
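The per-class Gaussian prior described above can be sketched in a few lines. The
snippet below is a minimal illustration, not the authors' implementation: it
assumes a generic encoder head that outputs a posterior mean and log-variance
per sample (standing in for the capsule features), a fixed matrix of
pre-defined class means, and unit-variance priors. The KL term pulls same-class
features toward their class center; at test time, the distance to the nearest
center can act as an open-set score.

```python
import torch

def class_conditional_kl(mu, logvar, class_means, labels):
    """KL( N(mu, diag(exp(logvar))) || N(mu_y, I) ), averaged over the batch.

    mu, logvar : (B, D) posterior parameters predicted by the encoder head
    class_means: (C, D) pre-defined Gaussian means, one per known class
    labels     : (B,)   ground-truth class indices
    """
    prior_mu = class_means[labels]                      # (B, D)
    var = logvar.exp()
    kl = 0.5 * (var + (mu - prior_mu).pow(2) - 1.0 - logvar).sum(dim=1)
    return kl.mean()

@torch.no_grad()
def open_set_score(mu, class_means):
    """Distance to the nearest class center; large values suggest an
    unknown-class sample (one simple rejection criterion)."""
    return torch.cdist(mu, class_means).min(dim=1).values
```

The paper's full objective combines this class-conditional KL with the other
terms of the variational autoencoder framework (e.g., reconstruction); only the
prior-matching term is shown here.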
Related papers
- Learning Classifiers of Prototypes and Reciprocal Points for Universal
Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain-shift and category-shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training a target-adapted known-class classifier and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
- Open-Set Recognition with Gradient-Based Representations [16.80077149399317]
We propose to utilize gradient-based representations to train an unknown detector with instances of known classes only.
We show that our gradient-based approach outperforms state-of-the-art methods by up to 11.6% in open-set classification.
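As a rough sketch of the gradient-as-representation idea (the cited paper's
exact loss and layer choice may differ), one can backpropagate a label-free
confidence loss for a single input and use the flattened gradient of a chosen
layer as the feature fed to an unknown detector:

```python
import torch
import torch.nn.functional as F

def gradient_representation(model, x, layer):
    """Flattened gradient of one layer's weights for a single input, used
    as a feature vector for an unknown detector. Illustrative only: the
    loss here measures divergence from a uniform prediction, which needs
    no label; the cited paper's formulation may differ."""
    logits = model(x.unsqueeze(0))
    uniform = torch.full_like(logits, 1.0 / logits.size(-1))
    loss = F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="batchmean")
    (grad,) = torch.autograd.grad(loss, layer.weight)
    return grad.flatten().detach()
```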
arXiv Detail & Related papers (2022-06-16T14:54:12Z)
- Open-set Recognition via Augmentation-based Similarity Learning [11.706887820422002]
We propose to detect unknowns (or unseen class samples) through learning pairwise similarities.
We call our method OPG (Open set recognition based on Pseudo unseen data Generation).
arXiv Detail & Related papers (2022-03-24T17:49:38Z)
- Learning Placeholders for Open-Set Recognition [38.57786747665563]
We propose PlaceholdeRs for Open-SEt Recognition (Proser) to maintain classification performance on known classes and reject unknowns.
Proser efficiently generates novel classes via manifold mixup and adaptively sets the value of the reserved open-set classifier during training.
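A minimal sketch of the manifold-mixup step mentioned above: intermediate
features of samples from two different known classes are interpolated to
synthesize placeholder "novel" instances. The reserved open-set classifier and
its calibration are not shown, and the hyperparameters are illustrative.

```python
import torch

def manifold_mixup_placeholders(feats, labels, alpha=1.0):
    """Interpolate intermediate features of randomly paired samples and
    keep only cross-class pairs, yielding synthetic 'novel class'
    placeholders (sketch; not the paper's full recipe)."""
    perm = torch.randperm(feats.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample()
    mixed = lam * feats + (1.0 - lam) * feats[perm]
    cross_class = labels != labels[perm]
    return mixed[cross_class]
```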
arXiv Detail & Related papers (2021-03-28T09:18:15Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
- Open-Set Recognition with Gaussian Mixture Variational Autoencoders [91.3247063132127]
At inference time, open-set classification either assigns a sample to one of the known training classes or rejects it as unknown.
We train our model to cooperatively learn reconstruction and perform class-based clustering in the latent space.
Our model achieves more accurate and robust open-set classification results, with an average F1 improvement of 29.5%.
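The "reconstruction plus class-based clustering" objective can be sketched as
below. This is a simplified stand-in under stated assumptions: a squared
distance to per-class latent means replaces the paper's actual clustering term,
and the weighting is illustrative.

```python
import torch
import torch.nn.functional as F

def recon_plus_clustering_loss(x, x_recon, z, class_means, labels, beta=1.0):
    """Joint objective sketch: pixel reconstruction plus pulling each
    latent code toward the mean of its class. Illustrative stand-in for
    the cooperative training described above; exact terms may differ."""
    recon = F.mse_loss(x_recon, x)
    cluster = (z - class_means[labels]).pow(2).sum(dim=1).mean()
    return recon + beta * cluster
```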
arXiv Detail & Related papers (2020-06-03T01:15:19Z)
- Few-Shot Open-Set Recognition using Meta-Learning [72.15940446408824]
The problem of open-set recognition is considered.
A new oPen sEt mEta LEaRning (PEELER) algorithm is introduced.
arXiv Detail & Related papers (2020-05-27T23:49:26Z)
- Hybrid Models for Open Set Recognition [28.62025409781781]
Open set recognition requires a classifier to detect samples not belonging to any of the classes in its training set.
We propose OpenHybrid, which is composed of an encoder to encode the input data into a joint embedding space, a classifier to classify samples into inlier classes, and a flow-based density estimator.
Experiments on standard open set benchmarks reveal that an end-to-end trained OpenHybrid model significantly outperforms state-of-the-art methods and flow-based baselines.
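A hedged sketch of how such a hybrid model can be used at test time: classify
in the shared embedding space, but reject a sample as unknown when the density
estimate of its embedding falls below a threshold. Here `flow_log_prob` is
assumed to be any callable returning per-sample log-density, and `tau` is an
assumed threshold; neither is specified by the abstract above.

```python
import torch

@torch.no_grad()
def hybrid_open_set_predict(x, encoder, classifier, flow_log_prob, tau):
    """Classify embeddings of a batch, then overwrite low-density samples
    with the 'unknown' label (-1). Sketch only: encoder, classifier and
    flow_log_prob are assumed callables, tau an assumed threshold."""
    z = encoder(x)
    preds = classifier(z).argmax(dim=-1)     # (B,) inlier-class predictions
    log_p = flow_log_prob(z)                 # (B,) log-density of embeddings
    preds = preds.clone()
    preds[log_p < tau] = -1                  # reject as unknown
    return preds
```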
arXiv Detail & Related papers (2020-03-27T16:14:27Z)
- Conditional Gaussian Distribution Learning for Open Set Recognition [10.90687687505665]
We propose Conditional Gaussian Distribution Learning (CGDL) for open set recognition.
In addition to detecting unknown samples, this method can also classify known samples by forcing different latent features to approximate different Gaussian models.
Experiments on several standard image datasets reveal that the proposed method significantly outperforms the baseline method and achieves new state-of-the-art results.
arXiv Detail & Related papers (2020-03-19T14:32:08Z)
- Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvements of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)