Learning Graph-Based Priors for Generalized Zero-Shot Learning
- URL: http://arxiv.org/abs/2010.11369v1
- Date: Thu, 22 Oct 2020 01:20:46 GMT
- Title: Learning Graph-Based Priors for Generalized Zero-Shot Learning
- Authors: Colin Samplawski, Jannik Wolff, Tassilo Klein, Moin Nabi
- Abstract summary: zero-shot learning (ZSL) requires correctly predicting the label of samples from classes which were unseen at training time.
Recent approaches to GZSL have shown the value of generative models, which are used to generate samples from unseen classes.
In this work, we incorporate an additional source of side information in the form of a relation graph over labels.
- Score: 21.43100823741393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of zero-shot learning (ZSL) requires correctly predicting the label
of samples from classes which were unseen at training time. This is achieved by
leveraging side information about class labels, such as label attributes or
word embeddings. Recently, attention has shifted to the more realistic task of
generalized ZSL (GZSL) where test sets consist of seen and unseen samples.
Recent approaches to GZSL have shown the value of generative models, which are
used to generate samples from unseen classes. In this work, we incorporate an
additional source of side information in the form of a relation graph over
labels. We leverage this graph in order to learn a set of prior distributions,
which encourage an aligned variational autoencoder (VAE) model to learn
embeddings which respect the graph structure. Using this approach we are able
to achieve improved performance on the CUB and SUN benchmarks over a strong
baseline.
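The abstract describes a concrete recipe: propagate label embeddings over the relation graph to obtain one prior distribution per class, then penalize the KL divergence between each sample's approximate posterior and the prior of its class. Below is a minimal PyTorch sketch of that idea, assuming unit-variance priors and a single propagation step; it is not the authors' code, and all names (GraphPrior, Encoder, kl_to_class_prior) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphPrior(nn.Module):
    """Per-class Gaussian prior means obtained by propagating label
    embeddings over a row-normalized label relation graph."""
    def __init__(self, num_classes, latent_dim, adjacency):
        super().__init__()
        A = adjacency + torch.eye(num_classes)          # add self-loops
        self.register_buffer("A_hat", A / A.sum(dim=1, keepdim=True))
        self.label_emb = nn.Embedding(num_classes, latent_dim)
        self.proj = nn.Linear(latent_dim, latent_dim)

    def forward(self):
        # One propagation step mixes each label's embedding with its
        # neighbors', so related labels get nearby prior means.
        h = self.A_hat @ self.label_emb.weight
        return self.proj(F.relu(h))                     # (num_classes, latent_dim)

class Encoder(nn.Module):
    """Amortized posterior q(z|x) over image features."""
    def __init__(self, feat_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent_dim)
        self.logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, x):
        return self.mu(x), self.logvar(x)

def kl_to_class_prior(mu, logvar, prior_mu):
    """KL( N(mu, diag(exp(logvar))) || N(prior_mu, I) ) per sample."""
    return 0.5 * ((mu - prior_mu).pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)

# Toy usage: 5 classes on a chain-shaped relation graph.
num_classes, feat_dim, latent_dim = 5, 32, 8
A = torch.diag(torch.ones(num_classes - 1), 1)
A = A + A.T                                             # symmetric relations
prior = GraphPrior(num_classes, latent_dim, A)
enc = Encoder(feat_dim, latent_dim)

x = torch.randn(4, feat_dim)                            # a batch of image features
y = torch.tensor([0, 1, 1, 3])                          # their class labels
mu, logvar = enc(x)
loss = kl_to_class_prior(mu, logvar, prior()[y]).mean()
loss.backward()
```

In the full model such a KL term would be one piece of an aligned VAE objective over visual and semantic modalities; only the graph-to-prior path is shown here.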
Related papers
- PyG-SSL: A Graph Self-Supervised Learning Toolkit [71.22547762704602]
Graph Self-Supervised Learning (SSL) has emerged as a pivotal area of research in recent years.
Despite the remarkable achievements of these graph SSL methods, their current implementation poses significant challenges for beginners.
We present a Graph SSL toolkit named PyG-SSL, which is built upon PyTorch and is compatible with various deep learning and scientific computing backends.
arXiv Detail & Related papers (2024-12-30T18:32:05Z)
- Class Balance Matters to Active Class-Incremental Learning [61.11786214164405]
We aim to start from a pool of large-scale unlabeled data and then annotate the most informative samples for incremental learning.
We propose Class-Balanced Selection (CBS) strategy to achieve both class balance and informativeness in chosen samples.
Our CBS is plug-and-play with CIL methods that are based on pretrained models with the prompt tuning technique.
arXiv Detail & Related papers (2024-12-09T16:37:27Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- LocalGCL: Local-aware Contrastive Learning for Graphs [17.04219759259025]
We propose Local-aware Graph Contrastive Learning (LocalGCL) as a graph representation learner.
Experiments validate the superiority of LocalGCL against state-of-the-art methods, demonstrating its promise as a comprehensive graph representation learner.
arXiv Detail & Related papers (2024-02-27T09:23:54Z)
- ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs [36.749959232724514]
ZeroG is a new framework tailored to enable cross-dataset generalization.
We address the inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer.
We propose a prompt-based subgraph sampling module that enriches the semantic and structural information of extracted subgraphs.
arXiv Detail & Related papers (2024-02-17T09:52:43Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both the labeled and pseudo-labeled data to generate final feature embeddings (a toy sketch of this loop follows the list below).
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding-based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- Generative Adversarial Zero-shot Learning via Knowledge Graphs [32.42721467499858]
We introduce a new generative ZSL method named KG-GAN by incorporating rich semantics in a knowledge graph (KG) into GANs.
Specifically, we build upon Graph Neural Networks and encode KG from two views: class view and attribute view.
With well-learned semantic embeddings for each node (representing a visual category), we leverage GANs to synthesize compelling visual features for unseen classes.
arXiv Detail & Related papers (2020-04-07T03:55:26Z)
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels.
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
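Of the entries above, SLADE states its procedure most explicitly (train a teacher on labeled data, pseudo-label the unlabeled pool, train a student on the union), so here is the toy sketch referenced in that entry. It is illustrative only, not the authors' code: it swaps the paper's metric-learning losses for a plain cross-entropy head and assigns pseudo labels by nearest class centroid in the teacher's embedding space; the helper names and toy data are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_embedder(in_dim, emb_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

def train(model, x, y, num_classes, epochs=100):
    # Cross-entropy head as a stand-in for the paper's metric-learning losses.
    head = nn.Linear(model[-1].out_features, num_classes)
    opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(head(model(x)), y).backward()
        opt.step()

def pseudo_label(teacher, x_lab, y_lab, x_unlab, num_classes):
    # Assign each unlabeled sample to the nearest class centroid
    # in the teacher's embedding space.
    with torch.no_grad():
        emb_lab, emb_unlab = teacher(x_lab), teacher(x_unlab)
        centroids = torch.stack(
            [emb_lab[y_lab == c].mean(dim=0) for c in range(num_classes)])
        return torch.cdist(emb_unlab, centroids).argmin(dim=1)

# Toy data: two linearly separable classes, labeled and unlabeled pools.
torch.manual_seed(0)
y_lab = torch.randint(0, 2, (40,))
x_lab = torch.randn(40, 16) + 3.0 * y_lab[:, None].float()
x_unlab = torch.randn(60, 16) + 3.0 * torch.randint(0, 2, (60, 1)).float()

teacher = make_embedder(16, 8)
train(teacher, x_lab, y_lab, num_classes=2)
y_pseudo = pseudo_label(teacher, x_lab, y_lab, x_unlab, num_classes=2)

student = make_embedder(16, 8)
train(student, torch.cat([x_lab, x_unlab]),
      torch.cat([y_lab, y_pseudo]), num_classes=2)
```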
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.