Cross-Domain Structure Preserving Projection for Heterogeneous Domain
Adaptation
- URL: http://arxiv.org/abs/2004.12427v3
- Date: Sat, 9 Oct 2021 03:27:41 GMT
- Title: Cross-Domain Structure Preserving Projection for Heterogeneous Domain
Adaptation
- Authors: Qian Wang, Toby P. Breckon
- Abstract summary: Heterogeneous Domain Adaptation (HDA) addresses the transfer learning problems where data from source and target domains are of different modalities.
Traditional domain adaptation algorithms assume that the representations of source and target samples reside in the same feature space.
We propose a novel Cross-Domain Structure Preserving Projection (CDSPP) algorithm for HDA.
- Score: 23.18781318003242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous Domain Adaptation (HDA) addresses the transfer learning
problems where data from the source and target domains are of different
modalities (e.g., texts and images) or feature dimensions (e.g., features
extracted with different methods). It is useful for multi-modal data analysis.
Traditional domain adaptation algorithms assume that the representations of
source and target samples reside in the same feature space and are therefore
likely to fail on heterogeneous domain adaptation problems. Contemporary
state-of-the-art HDA approaches are usually composed of complex optimization
objectives for favourable performance and are therefore computationally
expensive and less generalizable. To address these issues, we propose a novel
Cross-Domain Structure Preserving Projection (CDSPP) algorithm for HDA. As an
extension of the classic LPP to heterogeneous domains, CDSPP aims to learn
domain-specific projections to map sample features from source and target
domains into a common subspace such that the class consistency is preserved and
data distributions are sufficiently aligned. CDSPP is simple and admits a
deterministic solution obtained by solving a generalized eigenvalue problem. It
is naturally suited to supervised HDA but has also been extended to
semi-supervised HDA, where unlabelled target-domain samples are available.
Extensive experiments have been conducted on commonly used HDA benchmark
datasets (i.e., Office-Caltech, Multilingual Reuters Collection and
NUS-WIDE-ImageNet), as well as on the Office-Home dataset, which we introduce
for HDA for the first time because it contains significantly more classes than
the existing benchmarks (65 vs. 10, 6 and 8). The experimental results of both
supervised and semi-supervised
HDA demonstrate the superior performance of our proposed method against
contemporary state-of-the-art methods.
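As a rough illustration of the eigenvalue machinery CDSPP builds on, the sketch below applies a classic LPP-style projection: a class-affinity graph is built over labelled samples, and a projection is recovered from the smallest generalized eigenvectors. This is a generic single-domain sketch under assumed inputs and names, not the authors' cross-domain formulation, which learns separate projections for the heterogeneous source and target feature spaces.

```python
import numpy as np
from scipy.linalg import eigh

def lpp_projection(X, labels, dim=2, reg=1e-3):
    """Minimal LPP-style projection: samples sharing a class label are
    pulled together in the projected subspace.  Generic sketch of the
    generalized-eigenvalue machinery CDSPP extends, not the paper's
    exact cross-domain objective."""
    # Affinity: 1 if two samples share a class label, else 0
    W = (labels[:, None] == labels[None, :]).astype(float)
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # graph Laplacian
    # Generalized eigenproblem: X^T L X p = lambda X^T D X p
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])  # regularized for stability
    vals, vecs = eigh(A, B)                     # eigenvalues in ascending order
    return vecs[:, :dim]                        # smallest eigenvectors preserve locality

# Toy usage: project 5-D features with two classes down to 2-D
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = np.array([0] * 10 + [1] * 10)
P = lpp_projection(X, y)
Z = X @ P  # projected samples, shape (20, 2)
```

Because the solution is a closed-form eigendecomposition rather than an iterative optimization, the procedure is deterministic, which matches the abstract's claim about CDSPP's simplicity.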
Related papers
- Semi Supervised Heterogeneous Domain Adaptation via Disentanglement and Pseudo-Labelling [4.33404822906643]
Semi-supervised domain adaptation methods leverage information from a source labelled domain to generalize over a scarcely labelled target domain.
Such a setting is denoted as Semi-Supervised Heterogeneous Domain Adaptation (SSHDA).
We introduce SHeDD (Semi-supervised Heterogeneous Domain Adaptation via Disentanglement), an end-to-end neural framework tailored to learning the target domain.
arXiv Detail & Related papers (2024-06-20T08:02:49Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z) - Domain Generalisation for Object Detection under Covariate and Concept Shift [10.32461766065764]
Domain generalisation aims to promote the learning of domain-invariant features while suppressing domain-specific features.
An approach to domain generalisation for object detection is proposed, the first such approach applicable to any object detection architecture.
arXiv Detail & Related papers (2022-03-10T11:14:18Z) - Seeking Similarities over Differences: Similarity-based Domain Alignment
for Adaptive Object Detection [86.98573522894961]
We propose a framework that generalizes the components commonly used by Unsupervised Domain Adaptation (UDA) algorithms for detection.
Specifically, we propose a novel UDA algorithm, ViSGA, that leverages the best design choices and introduces a simple but effective method to aggregate features at instance-level.
We show that both similarity-based grouping and adversarial training allow our model to focus on coarsely aligning feature groups, without being forced to match all instances across loosely aligned domains.
arXiv Detail & Related papers (2021-10-04T13:09:56Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns the domain-invariant and domain-specific representations.
HDAN has exceeded the state of the art on unsupervised DA, multi-source DA and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z) - Simultaneous Semantic Alignment Network for Heterogeneous Domain
Adaptation [67.37606333193357]
We propose a Simultaneous Semantic Alignment Network (SSAN) to simultaneously exploit correlations among categories and align the centroids for each category across domains.
By leveraging target pseudo-labels, a robust triplet-centroid alignment mechanism is explicitly applied to align feature representations for each category.
Experiments on various HDA tasks across text-to-image, image-to-image and text-to-text successfully validate the superiority of our SSAN against state-of-the-art HDA methods.
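The centroid-alignment idea described above can be sketched generically: per-class centroids are computed from labelled source features and pseudo-labelled target features, and the distance between matching centroids forms an alignment loss. All names below are illustrative assumptions, not the authors' SSAN implementation, which additionally uses a triplet formulation.

```python
import numpy as np

def centroid_alignment_loss(src_feat, src_lab, tgt_feat, tgt_pseudo, classes):
    """Mean squared distance between per-class source centroids and target
    centroids built from pseudo-labels.  Generic illustration of
    centroid alignment, not the exact SSAN objective."""
    loss, used = 0.0, 0
    for c in classes:
        s = src_feat[src_lab == c]
        t = tgt_feat[tgt_pseudo == c]
        if len(s) == 0 or len(t) == 0:
            continue  # skip classes absent from either domain
        loss += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
        used += 1
    return loss / max(used, 1)
```

In a pseudo-labelling pipeline, `tgt_pseudo` would come from the current classifier's predictions on unlabelled target samples, so the reliability of this loss depends directly on pseudo-label quality.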
arXiv Detail & Related papers (2020-08-04T16:20:37Z) - Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed to solve domain generalization by learning domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method, which provides a theoretical guarantee: the target error is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.