Domain-aware Visual Bias Eliminating for Generalized Zero-Shot Learning
- URL: http://arxiv.org/abs/2003.13261v2
- Date: Fri, 10 Apr 2020 07:12:53 GMT
- Title: Domain-aware Visual Bias Eliminating for Generalized Zero-Shot Learning
- Authors: Shaobo Min, Hantao Yao, Hongtao Xie, Chaoqun Wang, Zheng-Jun Zha, and
Yongdong Zhang
- Abstract summary: The Domain-aware Visual Bias Eliminating (DVBE) network constructs two complementary visual representations.
For unseen images, we automatically search for an optimal semantic-visual alignment architecture.
- Score: 150.42959029611657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods focus on learning a unified semantic-aligned visual
representation to transfer knowledge between two domains, while ignoring the
effect of semantic-free visual representation in alleviating the biased
recognition problem. In this paper, we propose a novel Domain-aware Visual Bias
Eliminating (DVBE) network that constructs two complementary visual
representations, i.e., semantic-free and semantic-aligned, to treat seen and
unseen domains separately. Specifically, we explore cross-attentive
second-order visual statistics to compact the semantic-free representation, and
design an adaptive margin Softmax to maximize inter-class divergences. Thus,
the semantic-free representation becomes discriminative enough not only to
predict seen classes accurately but also to filter out unseen images, i.e.,
perform domain detection, based on the predicted class entropy. For unseen
images, we automatically search for an optimal semantic-visual alignment
architecture, rather than relying on manual designs, to predict unseen classes.
With accurate domain detection, the biased recognition problem towards the seen
domain is significantly reduced. Experiments on five benchmarks for
classification and segmentation show that DVBE outperforms existing methods by
an average improvement of 5.7%.
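The entropy-based domain detection described in the abstract can be illustrated with a minimal sketch. The PyTorch snippet below is a hypothetical illustration, not the authors' implementation: the function names, shapes, and the `threshold` hyper-parameter are assumptions. It only shows the idea that a confident (low-entropy) prediction from the seen-class branch keeps a sample in the seen domain, while a high-entropy prediction routes it to the semantic-visual alignment branch for unseen-class prediction.

```python
import torch
import torch.nn.functional as F


def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax distribution, computed per sample."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)


def route_by_entropy(seen_logits: torch.Tensor, threshold: float) -> torch.Tensor:
    """Return a boolean mask: True = sample is treated as unseen-domain.

    Low entropy -> the seen-class classifier is trusted; high entropy ->
    the sample is forwarded to the semantic-visual alignment branch.
    `threshold` is an illustrative hyper-parameter, not a value from the paper.
    """
    return predictive_entropy(seen_logits) > threshold


# Toy usage: 4 test images, 10 seen classes.
logits = torch.randn(4, 10)
unseen_mask = route_by_entropy(logits, threshold=1.5)
print(unseen_mask)
```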
Related papers
- SGDR: Semantic-guided Disentangled Representation for Unsupervised
Cross-modality Medical Image Segmentation [5.090366802287405]
We propose a novel framework, called semantic-guided disentangled representation (SGDR), to extract semantically meaningful features for the segmentation task.
We validated our method on two public datasets, and experimental results show that our approach outperforms state-of-the-art methods on two evaluation metrics by a significant margin.
arXiv Detail & Related papers (2022-03-26T08:31:00Z) - Semantic Distribution-aware Contrastive Adaptation for Semantic
Segmentation [50.621269117524925]
Domain adaptive semantic segmentation refers to making predictions on a certain target domain with only annotations of a specific source domain.
We present a semantic distribution-aware contrastive adaptation algorithm that enables pixel-wise representation alignment.
We evaluate SDCA on multiple benchmarks, achieving considerable improvements over existing algorithms.
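The pixel-wise representation alignment mentioned above can be sketched as a contrastive loss that pulls each pixel embedding toward a per-class prototype. The snippet below is a hedged, simplified illustration assuming an InfoNCE-style formulation with fixed prototypes; the names and shapes are assumptions and this is not the SDCA implementation.

```python
import torch
import torch.nn.functional as F


def pixel_prototype_contrastive_loss(
    feats: torch.Tensor,       # (B, C, H, W) per-pixel embeddings
    labels: torch.Tensor,      # (B, H, W) integer class labels
    prototypes: torch.Tensor,  # (K, C) one representation per class
    temperature: float = 0.1,
) -> torch.Tensor:
    """InfoNCE-style loss pulling each pixel toward its class prototype."""
    b, c, h, w = feats.shape
    flat = F.normalize(feats.permute(0, 2, 3, 1).reshape(-1, c), dim=-1)  # (B*H*W, C)
    protos = F.normalize(prototypes, dim=-1)                              # (K, C)
    logits = flat @ protos.t() / temperature                              # (B*H*W, K)
    return F.cross_entropy(logits, labels.reshape(-1))


# Toy usage: 2 images, 8-dim embeddings, 5 classes, 4x4 resolution.
feats = torch.randn(2, 8, 4, 4)
labels = torch.randint(0, 5, (2, 4, 4))
prototypes = torch.randn(5, 8)
print(pixel_prototype_contrastive_loss(feats, labels, prototypes))
```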
arXiv Detail & Related papers (2021-05-11T13:21:25Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
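One way to read "margin preserving contrastive loss" is as an additive margin applied to the similarity between an embedding and its own semantic prototype, so that correct assignments must win by at least that margin. The sketch below is a simplified, hypothetical variant with a fixed margin and fixed prototypes, not the MPSCL loss with progressively refined prototypes.

```python
import torch
import torch.nn.functional as F


def margin_contrastive_loss(
    embeddings: torch.Tensor,  # (N, C) feature embeddings
    labels: torch.Tensor,      # (N,) class indices
    prototypes: torch.Tensor,  # (K, C) semantic prototypes
    margin: float = 0.2,
    scale: float = 10.0,
) -> torch.Tensor:
    """Contrastive loss with an additive margin on the positive similarity.

    Subtracting `margin` from the cosine similarity to the correct prototype
    forces each embedding to be closer to its own prototype than to any other
    by at least that margin, which keeps inter-class margins in the space.
    """
    emb = F.normalize(embeddings, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    cos = emb @ protos.t()                          # (N, K)
    pos_mask = F.one_hot(labels, cos.size(1)).bool()
    cos = torch.where(pos_mask, cos - margin, cos)
    return F.cross_entropy(scale * cos, labels)


# Toy usage: 6 samples, 8-dim features, 4 classes.
emb = torch.randn(6, 8)
labels = torch.randint(0, 4, (6,))
protos = torch.randn(4, 8)
print(margin_contrastive_loss(emb, labels, protos))
```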
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation which leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z) - Attribute-Induced Bias Eliminating for Transductive Zero-Shot Learning [144.94728981314717]
We propose a novel Attribute-Induced Bias Eliminating (AIBE) module for Transductive ZSL.
For the visual bias between the two domains, a Mean-Teacher module is first leveraged to bridge their visual representation discrepancy.
An attentional graph attribute embedding is proposed to reduce the semantic bias between seen and unseen categories.
Finally, for the semantic-visual bias in the unseen domain, an unseen semantic alignment constraint is designed to align visual and semantic space in an unsupervised manner.
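The Mean-Teacher component mentioned above relies on a teacher network maintained as an exponential moving average (EMA) of the student's weights. Below is a minimal sketch of that update, assuming the standard Mean-Teacher formulation rather than the paper's exact module; the momentum value and the toy backbone are illustrative.

```python
import copy
import torch


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Blend the student's weights into the teacher with an EMA update."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


# Toy usage with a hypothetical backbone.
student = torch.nn.Linear(16, 4)
teacher = copy.deepcopy(student)
ema_update(teacher, student)
```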
arXiv Detail & Related papers (2020-05-31T02:08:01Z) - Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
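The phase-preserving criterion can be made concrete as a loss that compares only the Fourier phase of an image and its translated counterpart, leaving the amplitude (and hence appearance) free to change. The snippet below is a hedged sketch of such a constraint under that reading, not the paper's implementation.

```python
import torch


def phase_consistency_loss(src: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
    """Penalize changes in Fourier phase between an image and its translation.

    src, translated: (B, C, H, W) tensors. Only the phase (angle) of the 2-D
    spectrum is compared, so amplitude/style may change while structure is kept.
    """
    phase_src = torch.angle(torch.fft.fft2(src))
    phase_tr = torch.angle(torch.fft.fft2(translated))
    # Wrap-around-aware distance between the two phase maps.
    return (1.0 - torch.cos(phase_src - phase_tr)).mean()


# Toy usage.
x = torch.rand(1, 3, 32, 32)
y = x + 0.05 * torch.randn_like(x)
print(phase_consistency_loss(x, y))
```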
arXiv Detail & Related papers (2020-04-10T06:58:03Z) - Unsupervised Domain Adaptive Object Detection using Forward-Backward
Cyclic Adaptation [13.163271874039191]
We present a novel approach to perform unsupervised domain adaptation for object detection through forward-backward cyclic (FBC) training.
Recent adversarial training based domain adaptation methods have shown their effectiveness on minimizing domain discrepancy via marginal feature distributions alignment.
We propose Forward-Backward Cyclic Adaptation, which iteratively computes adaptation from source to target via backward hopping and from target to source via forward passing.
arXiv Detail & Related papers (2020-02-03T06:24:58Z)