Learning Graph-Based Priors for Generalized Zero-Shot Learning
- URL: http://arxiv.org/abs/2010.11369v1
- Date: Thu, 22 Oct 2020 01:20:46 GMT
- Title: Learning Graph-Based Priors for Generalized Zero-Shot Learning
- Authors: Colin Samplawski, Jannik Wolff, Tassilo Klein, Moin Nabi
- Abstract summary: Zero-shot learning (ZSL) requires correctly predicting the label of samples from classes that were unseen at training time.
Recent approaches to generalized ZSL (GZSL) have shown the value of generative models, which are used to generate samples from unseen classes.
In this work, we incorporate an additional source of side information in the form of a relation graph over labels.
- Score: 21.43100823741393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of zero-shot learning (ZSL) requires correctly predicting the label
of samples from classes which were unseen at training time. This is achieved by
leveraging side information about class labels, such as label attributes or
word embeddings. Recently, attention has shifted to the more realistic task of
generalized ZSL (GZSL) where test sets consist of seen and unseen samples.
Recent approaches to GZSL have shown the value of generative models, which are
used to generate samples from unseen classes. In this work, we incorporate an
additional source of side information in the form of a relation graph over
labels. We leverage this graph in order to learn a set of prior distributions,
which encourage an aligned variational autoencoder (VAE) model to learn
embeddings which respect the graph structure. Using this approach we are able
to achieve improved performance on the CUB and SUN benchmarks over a strong
baseline.
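The core idea above is to learn class-conditional prior distributions from the label relation graph and use them to regularize the latent space of an aligned VAE. Below is a minimal, single-modality sketch of that idea in PyTorch; it is not the authors' implementation, and every name (GraphPriorVAE, adj, feat_dim, the one-step neighbor averaging of prior means) is an illustrative assumption.

```python
# Sketch: class-conditional Gaussian priors whose means are smoothed over a label
# relation graph, used to regularize a VAE encoder so latent embeddings respect
# the graph structure. Illustrative only; shapes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphPriorVAE(nn.Module):
    def __init__(self, feat_dim, latent_dim, num_classes, adj):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 2 * latent_dim)    # outputs [mu, log-variance]
        self.dec = nn.Linear(latent_dim, feat_dim)
        self.prior_mu = nn.Parameter(torch.randn(num_classes, latent_dim))
        adj = adj + torch.eye(num_classes)                 # add self-loops
        self.register_buffer("adj_norm", adj / adj.sum(1, keepdim=True))

    def smoothed_priors(self):
        # One propagation step: each class prior mean is averaged with its neighbors'.
        return self.adj_norm @ self.prior_mu

    def forward(self, x, y):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        recon = self.dec(z)
        prior_mu = self.smoothed_priors()[y]                    # class-conditional prior mean
        # KL( N(mu, sigma^2) || N(prior_mu, I) ): pulls embeddings toward graph-smoothed priors.
        kl = 0.5 * (logvar.exp() + (mu - prior_mu) ** 2 - 1.0 - logvar).sum(-1).mean()
        return F.mse_loss(recon, x) + kl

# Toy usage: 4 classes on a chain-shaped relation graph, random "visual features".
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
model = GraphPriorVAE(feat_dim=32, latent_dim=8, num_classes=4, adj=adj)
loss = model(torch.randn(16, 32), torch.randint(0, 4, (16,)))
loss.backward()
```

In the paper's GZSL setting the model is an aligned VAE over visual features and label side information with cross-modal alignment losses; the sketch keeps only the graph-based prior component.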
Related papers
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- LocalGCL: Local-aware Contrastive Learning for Graphs [17.04219759259025]
We propose Local-aware Graph Contrastive Learning (LocalGCL) as a graph representation learner.
Experiments validate the superiority of LocalGCL against state-of-the-art methods, demonstrating its promise as a comprehensive graph representation learner.
arXiv Detail & Related papers (2024-02-27T09:23:54Z)
- ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs [36.749959232724514]
ZeroG is a new framework tailored to enable cross-dataset generalization.
We address inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer.
We propose a prompt-based subgraph sampling module that enriches the semantic information and structure information of extracted subgraphs.
arXiv Detail & Related papers (2024-02-17T09:52:43Z)
- Graph-based Semi-supervised Learning: A Comprehensive Review [51.26862262550445]
Semi-supervised learning (SSL) has tremendous practical value due to its ability to utilize both labeled and unlabeled data.
An important class of SSL methods represents data naturally as graphs, giving rise to graph-based semi-supervised learning (GSSL) methods.
GSSL methods have demonstrated their advantages in various domains due to their unique structure, universal applicability, and scalability to large-scale data.
arXiv Detail & Related papers (2021-02-26T05:11:09Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both labels and pseudo labels to generate final feature embeddings (a minimal sketch of this teacher-student loop appears after the related-papers list).
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding-based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
- Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- Generative Adversarial Zero-shot Learning via Knowledge Graphs [32.42721467499858]
We introduce a new generative ZSL method named KG-GAN by incorporating rich semantics in a knowledge graph (KG) into GANs.
Specifically, we build upon Graph Neural Networks and encode KG from two views: class view and attribute view.
With well-learned semantic embeddings for each node (representing a visual category), we leverage GANs to synthesize compelling visual features for unseen classes.
arXiv Detail & Related papers (2020-04-07T03:55:26Z)
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels.
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
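The SLADE entry above outlines a generic teacher-student self-training loop: train a teacher on labeled data, pseudo-label the unlabeled data, then train a student on the union. The sketch below illustrates only that loop with toy linear classifiers and random features; it is not the SLADE implementation (which targets distance metric learning and retrieval), and all names and hyperparameters are assumptions.

```python
# Generic teacher-student pseudo-labeling loop (illustrative only; not the SLADE code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_train(labeled_x, labeled_y, unlabeled_x, num_classes, epochs=5):
    # 1) Train a teacher on the labeled data only.
    teacher = nn.Linear(labeled_x.size(1), num_classes)
    opt = torch.optim.Adam(teacher.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(teacher(labeled_x), labeled_y).backward()
        opt.step()
    # 2) Teacher assigns pseudo labels to the unlabeled data.
    with torch.no_grad():
        pseudo_y = teacher(unlabeled_x).argmax(dim=1)
    # 3) Train a student on real labels plus pseudo labels.
    student = nn.Linear(labeled_x.size(1), num_classes)
    opt = torch.optim.Adam(student.parameters(), lr=1e-2)
    all_x = torch.cat([labeled_x, unlabeled_x])
    all_y = torch.cat([labeled_y, pseudo_y])
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(student(all_x), all_y).backward()
        opt.step()
    return student

# Toy usage with random features: 20 labeled and 50 unlabeled samples, 3 classes.
student = self_train(torch.randn(20, 16), torch.randint(0, 3, (20,)),
                     torch.randn(50, 16), num_classes=3)
```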
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.