Open Set Domain Recognition via Attention-Based GCN and Semantic
Matching Optimization
- URL: http://arxiv.org/abs/2105.04967v2
- Date: Wed, 12 May 2021 02:16:33 GMT
- Title: Open Set Domain Recognition via Attention-Based GCN and Semantic
Matching Optimization
- Authors: Xinxing He, Yuan Yuan, Zhiyu Jiang
- Abstract summary: This work presents an end-to-end model based on attention-based GCN and semantic matching optimization.
Experimental results validate that the proposed model is not only superior at recognizing images of known and unknown classes, but can also adapt to varying degrees of openness in the target domain.
- Score: 8.831857715361624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open set domain recognition has attracted increasing attention in recent years. The task
aims to specifically classify each sample in the practical unlabeled target
domain, which consists of all known classes in the manually labeled source
domain and target-specific unknown categories. The absence of annotated
training data or auxiliary attribute information for unknown categories makes
this task especially difficult. Moreover, the existing domain discrepancy in
label space and data distribution further hinders the transfer of knowledge from
known classes to unknown classes. To address these issues, this work presents
an end-to-end model based on attention-based GCN and semantic matching
optimization, which first employs the attention mechanism to enable the central
node to learn more discriminating representations from its neighbors in the
knowledge graph. Moreover, a coarse-to-fine semantic matching optimization
approach is proposed to progressively bridge the domain gap. Experimental
results validate that the proposed model is not only superior at recognizing
images of known and unknown classes, but can also adapt to varying degrees of
openness in the target domain.
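The abstract does not spell out the attention mechanism, but the description (a central node learning discriminative representations from its neighbors in a knowledge graph) matches a GAT-style formulation. Below is a minimal sketch of attention-weighted neighbor aggregation under that assumption; all names, shapes, and the LeakyReLU slope are illustrative, not the authors' implementation.

```python
import numpy as np

def attention_aggregate(h_center, h_neighbors, W, a):
    """GAT-style attention aggregation for one central node.

    h_center:    (d,) feature of the central node
    h_neighbors: (k, d) features of its k neighbors in the knowledge graph
    W:           (d, d_out) shared linear projection
    a:           (2 * d_out,) attention parameter vector
    """
    z_c = h_center @ W                     # project central node
    z_n = h_neighbors @ W                  # project neighbors
    # unnormalized attention score for each (center, neighbor) pair
    scores = np.array([np.concatenate([z_c, z]) @ a for z in z_n])
    scores = np.maximum(0.2 * scores, scores)  # LeakyReLU, slope 0.2
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # softmax over neighbors
    # attention-weighted combination of projected neighbor features
    return alpha @ z_n

rng = np.random.default_rng(0)
d, d_out, k = 8, 4, 5
out = attention_aggregate(
    rng.standard_normal(d),
    rng.standard_normal((k, d)),
    rng.standard_normal((d, d_out)),
    rng.standard_normal(2 * d_out),
)
print(out.shape)  # (4,)
```

The softmax over neighbor scores is what lets the central node weight informative neighbors more heavily, rather than averaging them uniformly as a plain GCN would.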
Related papers
- Learning to Discover Knowledge: A Weakly-Supervised Partial Domain Adaptation Approach [20.899013563493202]
Domain adaptation has shown appealing performance by leveraging knowledge from a source domain with rich annotations.
For a specific target task, it is cumbersome to collect related and high-quality source domains.
In this paper, we propose a simple yet effective domain adaptation approach, termed self-paced transfer classifier learning (SP-TCL).
arXiv Detail & Related papers (2024-06-20T12:54:07Z)
- Unknown Prompt, the only Lacuna: Unveiling CLIP's Potential for Open Domain Generalization [12.126495847808803]
We introduce ODG-CLIP, harnessing the semantic prowess of the vision-language model, CLIP.
We conceptualize ODG as a multi-class classification challenge encompassing both known and novel categories.
We infuse images with class-discriminative knowledge derived from the prompt space to augment the fidelity of CLIP's visual embeddings.
arXiv Detail & Related papers (2024-03-31T15:03:31Z)
- INSURE: An Information Theory Inspired Disentanglement and Purification Model for Domain Generalization [55.86299081580768]
Domain Generalization (DG) aims to learn a generalizable model on the unseen target domain by only training on the multiple observed source domains.
We propose an Information theory iNspired diSentanglement and pURification modEl (INSURE) to explicitly disentangle the latent features.
We conduct experiments on four widely used DG benchmark datasets including PACS, OfficeHome, TerraIncognita, and DomainNet.
arXiv Detail & Related papers (2023-09-08T01:41:35Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Against Adversarial Learning: Naturally Distinguish Known and Unknown in Open Set Domain Adaptation [17.819949636876018]
Open set domain adaptation refers to the scenario in which the target domain contains categories that do not exist in the source domain.
We propose an "against adversarial learning" method that can distinguish unknown target data and known data naturally.
Experimental results show that the proposed method can make significant improvement in performance compared with several state-of-the-art methods.
arXiv Detail & Related papers (2020-11-04T10:30:43Z)
- Domain Adaptation with Auxiliary Target Domain-Oriented Classifier [115.39091109079622]
Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular semi-supervised learning (SSL) techniques is pseudo-labeling, which assigns a pseudo label to each unlabeled sample.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented (ATDOC).
ATDOC alleviates the bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
arXiv Detail & Related papers (2020-07-08T15:01:35Z)
- Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation [138.29273453811945]
We present Self-Ensembling with Category-agnostic Clusters (SE-CC) -- a novel architecture that steers domain adaptation with category-agnostic clusters in the target domain.
Clustering is performed over all the unlabeled target samples to obtain category-agnostic clusters, which reveal the underlying data space structure peculiar to the target domain.
arXiv Detail & Related papers (2020-06-11T16:19:02Z)
- Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID [55.21702895051287]
Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain.
We propose a novel self-paced contrastive learning framework with hybrid memory.
Our method outperforms state-of-the-arts on multiple domain adaptation tasks of object re-ID.
arXiv Detail & Related papers (2020-06-04T09:12:44Z)
- Class Conditional Alignment for Partial Domain Adaptation [10.506584969668792]
Adversarial adaptation models have demonstrated significant progress toward transferring knowledge from a labeled source dataset to an unlabeled target dataset.
Partial domain adaptation (PDA) investigates scenarios in which the source domain is large and diverse, and the target label space is a subset of the source label space.
We propose a multi-class adversarial architecture for PDA.
arXiv Detail & Related papers (2020-03-14T23:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.