Two-Level Adversarial Visual-Semantic Coupling for Generalized Zero-shot
Learning
- URL: http://arxiv.org/abs/2007.07757v2
- Date: Mon, 30 Nov 2020 11:00:45 GMT
- Title: Two-Level Adversarial Visual-Semantic Coupling for Generalized Zero-shot
Learning
- Authors: Shivam Chandhok and Vineeth N Balasubramanian
- Abstract summary: We propose a new two-level joint maximization idea that augments the generative network with an inference network during training.
This provides strong cross-modal interaction for effective transfer of knowledge between visual and semantic domains.
We evaluate our approach against several state-of-the-art methods on four benchmark datasets.
- Score: 21.89909688056478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of generative zero-shot methods mainly depends on the quality
of generated features and how well the model facilitates knowledge transfer
between visual and semantic domains. The quality of generated features is a
direct consequence of the model's ability to capture the multiple modes of the
underlying data distribution. To address these issues, we propose a new
two-level joint maximization idea that augments the generative network with an
inference network during training, which helps our model capture the multiple
modes of the data and generate features that better represent the underlying
data distribution. This provides strong cross-modal interaction for effective
transfer of knowledge between visual and semantic domains. Furthermore,
existing methods train the zero-shot classifier either on generated synthetic
image features or on latent embeddings produced by leveraging representation
learning. In this work, we unify these paradigms into a single model which in
addition to synthesizing image features, also utilizes the representation
learning capabilities of the inference network to provide discriminative
features for the final zero-shot recognition task. We evaluate our approach on
four benchmark datasets, i.e., CUB, FLO, AWA1, and AWA2, and compare it against
several state-of-the-art methods. We also perform ablation
studies to analyze and understand our method more carefully for the Generalized
Zero-shot Learning task.
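As a rough illustration of the pipeline described in the abstract, the minimal PyTorch sketch below couples a conditional feature generator with an inference (encoder) network and feeds the final generalized zero-shot classifier with both synthesized visual features and the encoder's latent embeddings. All module names, layer sizes, and dimensions are hypothetical placeholders, and the two-level adversarial training objectives are omitted; this is not the authors' released code.

```python
# Hypothetical sketch of the described pipeline: a conditional feature
# generator plus an inference (encoder) network, with the final GZSL
# classifier consuming both synthetic features and latent embeddings.
# Module names, sizes, and training objectives are assumptions.
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, Z_DIM, LATENT_DIM, N_CLASSES = 2048, 312, 128, 256, 200


class Generator(nn.Module):
    """Synthesizes visual features conditioned on class semantics (attributes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + ATTR_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM), nn.ReLU())

    def forward(self, z, attr):
        return self.net(torch.cat([z, attr], dim=1))


class InferenceNet(nn.Module):
    """Maps visual features to a latent embedding shared with the semantic side."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, LATENT_DIM))

    def forward(self, x):
        return self.net(x)


class Critic(nn.Module):
    """Scores (visual feature, attribute) pairs for the adversarial coupling
    between modalities; its training loop is omitted in this sketch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1))

    def forward(self, x, attr):
        return self.net(torch.cat([x, attr], dim=1))


class GZSLClassifier(nn.Module):
    """Final classifier trained on [visual feature ; latent embedding]."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM + LATENT_DIM, N_CLASSES)

    def forward(self, x, h):
        return self.fc(torch.cat([x, h], dim=1))


def synthesize_training_set(gen, enc, unseen_attrs, per_class=100):
    """Generate features for unseen classes and pair them with latent embeddings."""
    feats, latents, labels = [], [], []
    with torch.no_grad():
        for cls_id, attr in enumerate(unseen_attrs):
            z = torch.randn(per_class, Z_DIM)
            a = attr.unsqueeze(0).expand(per_class, -1)
            x_fake = gen(z, a)
            feats.append(x_fake)
            latents.append(enc(x_fake))
            labels.append(torch.full((per_class,), cls_id, dtype=torch.long))
    return torch.cat(feats), torch.cat(latents), torch.cat(labels)


if __name__ == "__main__":
    gen, enc, clf = Generator(), InferenceNet(), GZSLClassifier()
    unseen_attrs = torch.rand(50, ATTR_DIM)      # stand-in class attribute vectors
    x, h, y = synthesize_training_set(gen, enc, unseen_attrs, per_class=8)
    print(clf(x, h).shape)                       # torch.Size([400, 200])
```

In the actual method the generator, inference network, and critic would first be trained with the two-level joint maximization objective; the sketch only shows how the two feature streams are combined for the final recognition stage.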
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z) - Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives (a rough sketch of this alternation appears after this list).
arXiv Detail & Related papers (2024-03-26T06:04:50Z) - Harnessing Diffusion Models for Visual Perception with Meta Prompts [68.78938846041767]
We propose a simple yet effective scheme to harness a diffusion model for visual perception tasks.
We introduce learnable embeddings (meta prompts) to the pre-trained diffusion models to extract proper features for perception.
Our approach achieves new performance records for depth estimation on NYU Depth V2 and KITTI, and for semantic segmentation on CityScapes.
arXiv Detail & Related papers (2023-12-22T14:40:55Z) - Generative Model-based Feature Knowledge Distillation for Action
Recognition [11.31068233536815]
Our paper introduces an innovative knowledge distillation framework that uses a generative model to train a lightweight student model.
The efficacy of our approach is demonstrated through comprehensive experiments on diverse popular datasets.
arXiv Detail & Related papers (2023-12-14T03:55:29Z) - UniDiff: Advancing Vision-Language Models with Generative and
Discriminative Learning [86.91893533388628]
This paper presents UniDiff, a unified multi-modal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff demonstrates versatility in both multi-modal understanding and generative tasks.
arXiv Detail & Related papers (2023-06-01T15:39:38Z) - Cross-modal Representation Learning for Zero-shot Action Recognition [67.57406812235767]
We present a cross-modal Transformer-based framework, which jointly encodes video data and text labels for zero-shot action recognition (ZSAR).
Our model employs a conceptually new pipeline by which visual representations are learned in conjunction with visual-semantic associations in an end-to-end manner.
Experiment results show our model considerably improves upon the state of the art in ZSAR, reaching encouraging top-1 accuracy on the UCF101, HMDB51, and ActivityNet benchmark datasets (a simplified sketch of this cross-modal matching appears after this list).
arXiv Detail & Related papers (2022-05-03T17:39:27Z) - Multimodal Contrastive Training for Visual Representation Learning [45.94662252627284]
We develop an approach to learning visual representations that embraces multimodal data.
Our method exploits intrinsic data properties within each modality and semantic information from cross-modal correlation simultaneously.
By including multimodal training in a unified framework, our method can learn more powerful and generic visual features.
arXiv Detail & Related papers (2021-04-26T19:23:36Z) - Information Maximization Clustering via Multi-View Self-Labelling [9.947717243638289]
We propose a novel single-phase clustering method that simultaneously learns meaningful representations and assigns the corresponding annotations.
This is achieved by integrating a discrete representation into the self-supervised paradigm through a network.
Our empirical results show that the proposed framework outperforms state-of-the-art techniques, with average accuracies of 89.1% and 49.0% on the two evaluated benchmarks.
arXiv Detail & Related papers (2021-03-12T16:04:41Z) - A Joint Representation Learning and Feature Modeling Approach for
One-class Recognition [15.606362608483316]
We argue that both of these approaches have their own limitations, and that a more effective solution can be obtained by combining the two.
The proposed approach is based on the combination of a generative framework and a one-class classification method.
We test the effectiveness of the proposed method on three one-class classification tasks and obtain state-of-the-art results.
arXiv Detail & Related papers (2021-01-24T19:51:46Z) - Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)
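The Neural Clustering (FEC) entry above describes an alternation between grouping pixel features into clusters to obtain representatives and updating the features with the current representatives. The rough sketch below illustrates that alternation under assumed choices (nearest-representative assignment, mean representatives, and a simple interpolation step standing in for the deep-feature update); it is not the FEC authors' implementation.

```python
# Illustrative (not FEC's actual) alternation between cluster assignment of
# pixel features and an update of the features toward their representatives.
import numpy as np

def fec_style_alternation(pixel_feats, n_clusters=8, n_iters=5, step=0.5, seed=0):
    """pixel_feats: (num_pixels, dim) array of per-pixel deep features."""
    rng = np.random.default_rng(seed)
    reps = pixel_feats[rng.choice(len(pixel_feats), n_clusters, replace=False)].copy()
    feats = pixel_feats.copy()
    for _ in range(n_iters):
        # (1) Grouping: assign each pixel to its nearest representative.
        dists = ((feats[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # (2) Abstract representatives: mean feature of each cluster.
        for k in range(n_clusters):
            members = feats[assign == k]
            if len(members):
                reps[k] = members.mean(axis=0)
        # (3) Feature update: pull pixel features toward their representative
        #     (a stand-in for updating the deep features of the pixels).
        feats = (1 - step) * feats + step * reps[assign]
    return feats, assign, reps

feats, assign, reps = fec_style_alternation(np.random.rand(1024, 64))
print(feats.shape, assign.shape, reps.shape)   # (1024, 64) (1024,) (8, 64)
```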
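The cross-modal ZSAR entry above jointly models video data and text labels and classifies by visual-semantic association. The sketch below is a simplified variant of that idea: frame features are encoded with a small Transformer, projected into a shared space with label embeddings, and a clip is assigned the best-matching label. The dimensions, pooling, and similarity score are assumptions rather than the paper's architecture.

```python
# Simplified cross-modal matching for zero-shot action recognition: encode
# frame features, project video and label embeddings into a shared space,
# and score each clip against every candidate action label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalScorer(nn.Module):
    def __init__(self, vid_dim=512, txt_dim=300, d=256):
        super().__init__()
        self.vid_proj = nn.Linear(vid_dim, d)
        self.txt_proj = nn.Linear(txt_dim, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, frame_feats, label_emb):
        # frame_feats: (B, T, vid_dim) per-frame features of B clips
        # label_emb:   (C, txt_dim) one embedding per candidate action label
        v = self.encoder(self.vid_proj(frame_feats)).mean(dim=1)   # (B, d)
        t = self.txt_proj(label_emb)                               # (C, d)
        return F.normalize(v, dim=-1) @ F.normalize(t, dim=-1).T   # (B, C)

scorer = CrossModalScorer()
scores = scorer(torch.randn(2, 16, 512), torch.randn(10, 300))
print(scores.shape, scores.argmax(dim=1))   # zero-shot prediction over 10 labels
```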