Progressively Select and Reject Pseudo-labelled Samples for Open-Set
Domain Adaptation
- URL: http://arxiv.org/abs/2110.12635v1
- Date: Mon, 25 Oct 2021 04:28:55 GMT
- Title: Progressively Select and Reject Pseudo-labelled Samples for Open-Set
Domain Adaptation
- Authors: Qian Wang, Fanlin Meng, Toby P. Breckon
- Abstract summary: Domain adaptation solves image classification problems in the target domain by taking advantage of the labelled source data and unlabelled target data.
Our proposed method learns discriminative common subspaces for the source and target domains using a novel Open-Set Locality Preserving Projection (OSLPP) algorithm.
The common subspace learning and the pseudo-labelled sample selection/rejection facilitate each other in an iterative learning framework.
- Score: 26.889303784575805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation solves image classification problems in the target domain
by taking advantage of the labelled source data and unlabelled target data.
Usually, the source and target domains share the same set of classes. As a
special case, Open-Set Domain Adaptation (OSDA) assumes that additional
classes exist in the target domain that are not present in the source domain. To solve
such a domain adaptation problem, our proposed method learns discriminative
common subspaces for the source and target domains using a novel Open-Set
Locality Preserving Projection (OSLPP) algorithm. The source and target domain
data are aligned in the learned common subspaces in a class-wise manner. To
handle the open-set classification problem, our method progressively selects
target samples to be pseudo-labelled as known classes and rejects outliers
detected as belonging to unknown classes. The common subspace learning algorithm
OSLPP simultaneously aligns the labelled source data and pseudo-labelled target
data from known classes and pushes the rejected target data away from the known
classes. The common subspace learning and the pseudo-labelled sample
selection/rejection facilitate each other in an iterative learning framework
and achieve state-of-the-art performance on the benchmark datasets Office-31 and
Office-Home, with average HOS scores of 87.4% and 67.0%, respectively.
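The progressive selection/rejection idea described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' OSLPP algorithm: it pseudo-labels target samples by distance to source class centroids, then, with a growing budget per iteration, selects the most confident samples as known-class pseudo-labels and rejects the least confident as unknown-class outliers. All function and variable names are assumptions; the subspace re-learning step that OSLPP would perform between iterations is omitted.

```python
import numpy as np

def progressive_select_reject(source_feats, source_labels, target_feats,
                              n_iters=5):
    """Illustrative sketch of progressive pseudo-label selection/rejection
    (NOT the paper's OSLPP algorithm). Returns one entry per target sample:
    a class index if selected as a known class, -2 if rejected as unknown,
    -1 if still undecided."""
    n_classes = int(source_labels.max()) + 1
    # class centroids computed from the labelled source data
    centroids = np.stack([source_feats[source_labels == c].mean(axis=0)
                          for c in range(n_classes)])
    n_t = len(target_feats)
    selected = np.full(n_t, -1)
    for t in range(1, n_iters + 1):
        # distance of every target sample to every class centroid
        d = np.linalg.norm(target_feats[:, None] - centroids[None], axis=2)
        pseudo = d.argmin(axis=1)          # nearest-centroid pseudo-label
        conf = -d.min(axis=1)              # higher = more confident
        order = np.argsort(-conf)          # most confident first
        # selection and rejection budgets grow with each iteration
        k_sel = int(n_t * t / (n_iters + 1))
        k_rej = min(int(n_t * t / (2 * (n_iters + 1))), n_t - k_sel)
        selected[:] = -1
        selected[order[:k_sel]] = pseudo[order[:k_sel]]
        if k_rej > 0:
            selected[order[-k_rej:]] = -2
        # in the paper, the common subspace would be re-learned here with
        # OSLPP using the selected/rejected samples; omitted in this sketch
    return selected
```

In the actual method, the selected and rejected samples feed back into OSLPP, which re-learns the subspace so that known-class target samples move toward their source classes and rejected samples are pushed away; the sketch keeps the features fixed for brevity.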
Related papers
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Unsupervised Domain Adaptation via Distilled Discriminative Clustering
We re-cast the domain adaptation problem as discriminative clustering of target data.
We propose to jointly train the network using parallel, supervised learning objectives over labeled source data.
We conduct careful ablation studies and extensive experiments on five popular benchmark datasets.
arXiv Detail & Related papers (2023-02-23T13:03:48Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tackles the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Open Set Domain Adaptation by Extreme Value Theory [22.826118321715455]
We tackle the open set domain adaptation problem under the assumption that the source and the target label spaces only partially overlap.
We propose an instance-level reweighting strategy for domain adaptation where the weights indicate the likelihood of a sample belonging to known classes.
Experiments on conventional domain adaptation datasets show that the proposed method outperforms the state-of-the-art models.
arXiv Detail & Related papers (2020-12-22T19:31:32Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Domain Adaptation with Auxiliary Target Domain-Oriented Classifier [115.39091109079622]
Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular SSL techniques is pseudo-labeling, which assigns a pseudo label to each unlabeled data point.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC).
ATDOC alleviates classifier bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
arXiv Detail & Related papers (2020-07-08T15:01:35Z)
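The generic pseudo-labeling step mentioned in the ATDOC summary can be sketched as follows. This is a minimal, illustrative confidence-thresholded variant, not ATDOC's auxiliary-classifier scheme; the function name and the 0.9 threshold are assumptions for illustration.

```python
import numpy as np

def assign_pseudo_labels(probs, threshold=0.9):
    # Illustrative generic pseudo-labeling (not ATDOC itself): assign the
    # argmax class to each unlabeled sample whose top predicted probability
    # clears the confidence threshold; leave the rest unlabeled (-1).
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, labels, -1)
```

ATDOC's contribution is to replace the biased source-trained classifier producing `probs` with an auxiliary classifier built from target data only, which improves the quality of these pseudo labels.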
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.