Generalized Zero-Shot Learning Via Over-Complete Distribution
- URL: http://arxiv.org/abs/2004.00666v1
- Date: Wed, 1 Apr 2020 19:05:28 GMT
- Title: Generalized Zero-Shot Learning Via Over-Complete Distribution
- Authors: Rohit Keshari, Richa Singh, Mayank Vatsa
- Abstract summary: We propose to generate an Over-Complete Distribution (OCD) of both seen and unseen classes using a Conditional Variational Autoencoder (CVAE).
The effectiveness of the framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot Learning protocols.
- Score: 79.5140590952889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A well trained and generalized deep neural network (DNN) should be robust to
both seen and unseen classes. However, the performance of most existing
supervised DNN algorithms degrades for classes that are unseen in the training
set. To learn a discriminative classifier which yields good performance in
Zero-Shot Learning (ZSL) settings, we propose to generate an Over-Complete
Distribution (OCD) of both seen and unseen classes using a Conditional
Variational Autoencoder (CVAE). In order to enforce the separability between classes
and reduce the class scatter, we propose the use of Online Batch Triplet Loss
(OBTL) and Center Loss (CL) on the generated OCD. The effectiveness of the
framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot
Learning protocols on three publicly available benchmark databases, SUN, CUB
and AWA2. The results show that generating over-complete distributions and
forcing the classifier to learn a transform function from overlapping to
non-overlapping distributions can improve the performance on both seen and
unseen classes.
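To make the pipeline concrete, below is a minimal PyTorch-style sketch of the three ingredients named in the abstract: a CVAE that synthesizes class-conditioned features, an over-complete set of generated samples per class, and triplet plus center losses applied to the generated features. The layer sizes, the attribute-jitter scheme in `generate_ocd`, and the batch-hard triplet formulation are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch only: a conditional VAE that synthesizes class features,
# an "over-complete" set of samples per class, and triplet + center losses on
# the generated features. Dimensions and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, ATTR_DIM, LATENT_DIM = 2048, 85, 64  # e.g. CNN features, AWA2 attributes

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, LATENT_DIM), nn.Linear(512, LATENT_DIM)
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM + ATTR_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, FEAT_DIM))

    def forward(self, x, a):
        h = self.enc(torch.cat([x, a], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(torch.cat([z, a], dim=1)), mu, logvar

def generate_ocd(cvae, attrs, per_class=50, attr_noise=0.1):
    """Decode many latent samples per class, jittering the attribute vector so the
    synthesized features over-populate (and partly overlap) each class region."""
    feats, labels = [], []
    for c, a in enumerate(attrs):                      # attrs: (num_classes, ATTR_DIM)
        a_rep = a.unsqueeze(0).repeat(per_class, 1)
        a_rep = a_rep + attr_noise * torch.randn_like(a_rep)
        z = torch.randn(per_class, LATENT_DIM)
        feats.append(cvae.dec(torch.cat([z, a_rep], dim=1)))
        labels.append(torch.full((per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

def center_loss(feats, labels, centers):
    # Pull each generated feature towards its class center.
    return ((feats - centers[labels]) ** 2).sum(dim=1).mean()

def batch_hard_triplet_loss(feats, labels, margin=0.5):
    """Online batch triplet loss: hardest positive / hardest negative per anchor."""
    d = torch.cdist(feats, feats)                      # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (d * same.float()).max(dim=1).values         # farthest same-class sample
    neg = d.masked_fill(same, float("inf")).min(dim=1).values  # closest other class
    return F.relu(pos - neg + margin).mean()

if __name__ == "__main__":
    cvae = CVAE()
    attrs = torch.rand(10, ATTR_DIM)                   # placeholder class attributes
    centers = torch.randn(10, FEAT_DIM, requires_grad=True)
    ocd_feats, ocd_labels = generate_ocd(cvae, attrs, per_class=8)
    loss = batch_hard_triplet_loss(ocd_feats, ocd_labels) \
           + 0.1 * center_loss(ocd_feats, ocd_labels, centers)
    print(float(loss))
```

In the paper's setting, a classifier is subsequently trained on the regularized synthetic features of both seen and unseen classes; the sketch stops at the loss computation.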
Related papers
- Exploring Data Efficiency in Zero-Shot Learning with Diffusion Models [38.36200871549062]
Zero-Shot Learning (ZSL) aims to enable classifiers to identify unseen classes by enhancing data efficiency at the class level.
This is achieved by generating image features from pre-defined semantics of unseen classes.
In this paper, we demonstrate that limited seen examples generally result in deteriorated performance of generative models.
The proposed unified framework incorporates diffusion models to improve data efficiency at both the class and instance levels.
arXiv Detail & Related papers (2024-06-05T04:37:06Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and classifier learning in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
- Contrastive Fine-grained Class Clustering via Generative Adversarial Networks [9.667133604169829]
We introduce C3-GAN, a method that leverages the categorical inference power of InfoGAN by applying contrastive learning.
C3-GAN achieved state-of-the-art clustering performance on four fine-grained benchmark datasets.
arXiv Detail & Related papers (2021-12-30T08:57:11Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks, including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- EC-GAN: Low-Sample Classification using Semi-Supervised Algorithms and GANs [0.0]
Semi-supervised learning has been gaining attention as it allows for performing image analysis tasks such as classification with limited labeled data.
Some popular algorithms using Generative Adversarial Networks (GANs) for semi-supervised classification share a single architecture for classification and discrimination.
This may require a model to converge to a separate data distribution for each task, which may reduce overall performance.
We propose a novel GAN model, External Classifier GAN (EC-GAN), that utilizes GANs and semi-supervised algorithms to improve classification in fully-supervised tasks.
arXiv Detail & Related papers (2020-12-26T05:58:00Z)
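For the training-free GDA baseline cited above ("A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation"), the core idea reduces to fitting class means and a shared covariance on fixed image features and classifying with the resulting linear discriminant. The sketch below uses synthetic features in place of real CLIP embeddings and a simple shrinkage regularizer; it is an illustration under those assumptions, not the paper's exact method.

```python
# Illustrative sketch: Gaussian Discriminant Analysis (shared-covariance, LDA-style)
# used as a training-free classifier head on fixed image features. Real CLIP
# embeddings would replace the synthetic features generated here.
import numpy as np

def fit_gda(feats, labels, num_classes, shrinkage=1e-1):
    """Estimate per-class means, a shared (regularized) covariance, and priors."""
    d = feats.shape[1]
    means = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = feats - means[labels]
    cov = centered.T @ centered / len(feats) + shrinkage * np.eye(d)
    priors = np.bincount(labels, minlength=num_classes) / len(labels)
    return means, np.linalg.inv(cov), priors

def gda_predict(feats, means, prec, priors):
    """Linear discriminant: x^T P mu_c - 0.5 mu_c^T P mu_c + log prior_c."""
    w = means @ prec                                   # (num_classes, d)
    b = -0.5 * np.sum(w * means, axis=1) + np.log(priors)
    return np.argmax(feats @ w.T + b, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes, d, shots = 5, 64, 16
    true_means = rng.normal(size=(num_classes, d))     # stand-ins for class prototypes
    labels = np.repeat(np.arange(num_classes), shots)
    feats = true_means[labels] + 0.3 * rng.normal(size=(len(labels), d))
    means, prec, priors = fit_gda(feats, labels, num_classes)
    test = true_means[labels] + 0.3 * rng.normal(size=(len(labels), d))
    acc = (gda_predict(test, means, prec, priors) == labels).mean()
    print(f"few-shot GDA accuracy on synthetic features: {acc:.2f}")
```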