From Anchor Generation to Distribution Alignment: Learning a
Discriminative Embedding Space for Zero-Shot Recognition
- URL: http://arxiv.org/abs/2002.03554v1
- Date: Mon, 10 Feb 2020 05:25:33 GMT
- Title: From Anchor Generation to Distribution Alignment: Learning a
Discriminative Embedding Space for Zero-Shot Recognition
- Authors: Fuzhen Li, Zhenfeng Zhu, Xingxing Zhang, Jian Cheng, Yao Zhao
- Abstract summary: In zero-shot learning (ZSL), the samples to be classified are usually projected into side information templates such as attributes.
We propose a novel framework called Discriminative Anchor Generation and Distribution Alignment Model (DAGDA).
Firstly, in order to rectify the distribution of the original templates, a diffusion-based graph convolutional network, which can explicitly model the interaction between class and side information, is proposed to produce discriminative anchors.
Secondly, to further align the samples with the corresponding anchors in anchor space, which refines the distribution in a fine-grained manner, we introduce a semantic relation regularization.
- Score: 46.47620562161315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In zero-shot learning (ZSL), the samples to be classified are usually
projected into side information templates such as attributes. However, the
irregular distribution of these templates leads to ambiguous classification results. To
alleviate this issue, we propose a novel framework called Discriminative Anchor
Generation and Distribution Alignment Model (DAGDA). Firstly, in order to
rectify the distribution of original templates, a diffusion-based graph
convolutional network, which can explicitly model the interaction between class
and side information, is proposed to produce discriminative anchors. Secondly,
to further align the samples with the corresponding anchors in anchor space,
which aims to refine the distribution in a fine-grained manner, we introduce a
semantic relation regularization in anchor space. Following the inductive
learning paradigm, our approach outperforms existing state-of-the-art
methods on several benchmark datasets under both the conventional and
generalized ZSL settings. Ablation experiments further demonstrate the
effectiveness of each component.
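The anchor-generation step above can be sketched with a generic diffusion-style graph convolution. The summary does not specify DAGDA's exact architecture, so the propagation rule, the matrix names, and the diffusion coefficients `theta` below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def normalized_adjacency(adj):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, a standard GCN preprocessing step."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def diffusion_gcn_anchors(templates, adj, weight, theta):
    """Produce one anchor per class by diffusing side-information templates
    over a class graph: anchors = (sum_k theta_k * A_hat^k) X W.

    templates : (C, d) class templates (e.g. attribute vectors)
    adj       : (C, C) class-relation adjacency matrix
    weight    : (d, m) projection into the anchor space
    theta     : diffusion coefficients, one per propagation hop
    """
    a_hat = normalized_adjacency(adj)
    diffused = np.zeros_like(templates)
    power = np.eye(adj.shape[0])           # A_hat^0
    for t in theta:
        diffused += t * power @ templates  # accumulate k-hop information
        power = power @ a_hat
    return diffused @ weight

# Toy example: 4 classes, 5-dim attribute templates, 3-dim anchor space.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
A = (rng.random((4, 4)) > 0.5).astype(float)
A = (A + A.T) / 2  # make the class graph symmetric
W = rng.normal(size=(5, 3))
anchors = diffusion_gcn_anchors(X, A, W, theta=[0.5, 0.3, 0.2])
print(anchors.shape)  # (4, 3): one anchor per class
```

In the paper's setting, samples would then be classified by nearest-anchor matching in this space; here the weights are random for illustration rather than learned.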
Related papers
- Prototype Fission: Closing Set for Robust Open-set Semi-supervised
Learning [6.645479471664253]
Semi-supervised Learning (SSL) has been proven vulnerable to out-of-distribution (OOD) samples in realistic large-scale unsupervised datasets.
We propose Prototype Fission(PF) to divide class-wise latent spaces into compact sub-spaces by automatic fine-grained latent space mining.
arXiv Detail & Related papers (2023-08-29T19:04:42Z)
- Bi-directional Distribution Alignment for Transductive Zero-Shot
Learning [48.80413182126543]
We propose a novel transductive zero-shot learning (TZSL) model called Bi-VAEGAN.
It largely alleviates the domain shift through a strengthened distribution alignment between the visual and auxiliary spaces.
In benchmark evaluation, Bi-VAEGAN achieves the new state of the art under both the standard and generalized TZSL settings.
arXiv Detail & Related papers (2023-03-15T15:32:59Z)
- Chaos to Order: A Label Propagation Perspective on Source-Free Domain
Adaptation [8.27771856472078]
We present Chaos to Order (CtO), a novel approach for source-free domain adaptation (SFDA).
CtO strives to constrain semantic credibility and propagate label information among target subpopulations.
Empirical evidence demonstrates that CtO outperforms the state of the art on three public benchmarks.
arXiv Detail & Related papers (2023-01-20T03:39:35Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
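The MMD alignment described above can be illustrated with a plain RBF-kernel estimate of squared Maximum Mean Discrepancy. The memory bank, the toy features, and the kernel bandwidth `gamma` below are illustrative assumptions; DaC's exact memory-bank construction and kernel choice are not given in this summary:

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """RBF kernel matrix k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mmd2(source_like, target_specific, gamma=0.05):
    """Biased estimate of the squared Maximum Mean Discrepancy:
    MMD^2 = E[k(s, s')] - 2 E[k(s, t)] + E[k(t, t')]."""
    k_ss = rbf_kernel(source_like, source_like, gamma).mean()
    k_st = rbf_kernel(source_like, target_specific, gamma).mean()
    k_tt = rbf_kernel(target_specific, target_specific, gamma).mean()
    return k_ss - 2.0 * k_st + k_tt

# Toy memory bank of source-like features vs. two target batches.
rng = np.random.default_rng(1)
bank = rng.normal(size=(64, 8))               # stand-in for the memory bank
aligned = rng.normal(size=(32, 8))            # same distribution as the bank
shifted = rng.normal(loc=2.0, size=(32, 8))   # distribution-shifted batch
print(mmd2(bank, aligned), mmd2(bank, shifted))
```

The shifted batch yields a noticeably larger MMD than the aligned one; minimizing this quantity with respect to the target features is what pulls the target-specific samples toward the source-like distribution.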
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Distribution Regularized Self-Supervised Learning for Domain Adaptation
of Semantic Segmentation [3.284878354988896]
This paper proposes a pixel-level distribution regularization scheme (DRSL) for self-supervised domain adaptation of semantic segmentation.
In a typical setting, the classification loss forces the semantic segmentation model to greedily learn the representations that capture inter-class variations.
We capture pixel-level intra-class variations through class-aware multi-modal distribution learning.
arXiv Detail & Related papers (2022-06-20T09:52:49Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- HSVA: Hierarchical Semantic-Visual Adaptation for Zero-Shot Learning [74.76431541169342]
Zero-shot learning (ZSL) tackles the unseen class recognition problem, transferring semantic knowledge from seen classes to unseen ones.
We propose a novel hierarchical semantic-visual adaptation (HSVA) framework to align semantic and visual domains.
Experiments on four benchmark datasets demonstrate HSVA achieves superior performance on both conventional and generalized ZSL.
arXiv Detail & Related papers (2021-09-30T14:27:50Z)
- A Boundary Based Out-of-Distribution Classifier for Generalized
Zero-Shot Learning [83.1490247844899]
Generalized Zero-Shot Learning (GZSL) is a challenging topic that has promising prospects in many realistic scenarios.
We propose a boundary based Out-of-Distribution (OOD) classifier which classifies the unseen and seen domains by only using seen samples for training.
We extensively validate our approach on five popular benchmark datasets including AWA1, AWA2, CUB, FLO and SUN.
arXiv Detail & Related papers (2020-08-09T11:27:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.