Implicit Semantic Augmentation for Distance Metric Learning in Domain
Generalization
- URL: http://arxiv.org/abs/2208.02803v1
- Date: Tue, 2 Aug 2022 11:37:23 GMT
- Title: Implicit Semantic Augmentation for Distance Metric Learning in Domain
Generalization
- Authors: Meng Wang, Jianlong Yuan, Qi Qian, Zhibin Wang, Hao Li
- Abstract summary: Domain generalization (DG) aims to learn a model on one or more different but related source domains that could be generalized into an unseen target domain.
Existing DG methods try to promote the diversity of source domains to improve the model's generalization ability.
This work applies the implicit semantic augmentation in feature space to capture the diversity of source domains.
- Score: 25.792285194055797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) aims to learn a model on one or more different but
related source domains that could be generalized into an unseen target domain.
Existing DG methods try to promote the diversity of source domains to improve
the model's generalization ability, but they may have to introduce auxiliary
networks or incur significant computational costs. In contrast, this work applies
the implicit semantic augmentation in feature space to capture the diversity of
source domains. Concretely, an additional loss function of distance metric
learning (DML) is included to optimize the local geometry of data distribution.
Besides, the logits from the cross-entropy loss with infinite augmentations are
adopted as input features for the DML loss in lieu of the deep features. We
also provide a theoretical analysis to show that the logits can approximate the
distances defined on the original features well. Further, we provide an in-depth
analysis of the mechanism and rationale behind our approach, which gives us a
better understanding of why leveraging logits in lieu of features can help domain
generalization. The proposed DML loss with the implicit augmentation is
incorporated into a recent DG method, that is, Fourier Augmented Co-Teacher
framework (FACT). Meanwhile, our method can also be easily plugged into various
DG methods. Extensive experiments on three benchmarks (Digits-DG, PACS and
Office-Home) have demonstrated that the proposed method is able to achieve the
state-of-the-art performance.
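The core mechanism described above, an auxiliary distance metric learning (DML) loss that takes classifier logits rather than deep features as its input, alongside the standard cross-entropy term, can be sketched as follows. This is a minimal illustration using a pairwise contrastive loss with a margin; the specific loss form, the `margin` and `lam` values, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def logit_dml_loss(logits, labels, margin=1.0):
    """Contrastive-style DML loss computed on classifier logits instead of
    deep features. The pairwise contrastive form is an illustrative
    stand-in, not the paper's exact objective."""
    # Pairwise Euclidean distances between logit vectors.
    diff = logits[:, None, :] - logits[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)
    pos = dist[same & ~eye]                      # pull same-class logits together
    neg = np.maximum(margin - dist[~same], 0.0)  # push different classes apart
    return pos.mean() + neg.mean()

def cross_entropy(logits, labels):
    # Standard softmax cross entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def total_loss(logits, labels, lam=0.5):
    # Cross entropy plus the auxiliary DML term on the same logits.
    return cross_entropy(logits, labels) + lam * logit_dml_loss(logits, labels)
```

Because the DML term reuses the logits already produced for cross entropy, no auxiliary network or extra feature extractor is needed, which matches the paper's stated motivation of avoiding additional networks and computational cost.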
Related papers
- Unsupervised Domain Adaptation Using Compact Internal Representations [23.871860648919593]
A technique for tackling unsupervised domain adaptation involves mapping data points from both the source and target domains into a shared embedding space.
We develop an additional technique which makes the internal distribution of the source domain more compact.
We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.
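The margin-enlarging idea in this entry, pushing data representations for different classes apart in the embedding space, can be sketched with a simple hinge penalty on class-centroid distances. The centroid-based formulation and the `margin` value are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def interclass_margin_loss(embeddings, labels, margin=2.0):
    """Hinge penalty on class-centroid distances smaller than `margin`.
    A hypothetical sketch of the margin-enlarging idea; the paper's
    actual objective may differ."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(0) for c in classes])
    diff = centroids[:, None, :] - centroids[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))                 # pairwise centroid distances
    off = ~np.eye(len(classes), dtype=bool)
    return np.maximum(margin - d[off], 0.0).mean()   # zero once margins exceed `margin`
```

The loss vanishes once every pair of class centroids is separated by at least `margin`, so minimizing it alongside a task loss encourages the larger interclass margins that the entry credits for improved UDA performance.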
arXiv Detail & Related papers (2024-01-14T05:53:33Z) - Unified Domain Adaptive Semantic Segmentation [96.74199626935294]
Unsupervised Domain Adaptive Semantic Segmentation (UDA-SS) aims to transfer the supervision from a labeled source domain to an unlabeled target domain.
We propose a Quad-directional Mixup (QuadMix) method, characterized by tackling distinct point attributes and feature inconsistencies.
Our method outperforms the state-of-the-art works by large margins on four challenging UDA-SS benchmarks.
arXiv Detail & Related papers (2023-11-22T09:18:49Z) - MADG: Margin-based Adversarial Learning for Domain Generalization [25.45950080930517]
We propose a novel adversarial learning DG algorithm, MADG, motivated by a margin loss-based discrepancy metric.
The proposed MADG model learns domain-invariant features across all source domains and uses adversarial training to generalize well to the unseen target domain.
We extensively experiment with the MADG model on popular real-world DG datasets.
arXiv Detail & Related papers (2023-11-14T19:53:09Z) - Increasing Model Generalizability for Unsupervised Domain Adaptation [12.013345715187285]
We show that increasing the interclass margins in the embedding space can help to develop a UDA algorithm with improved performance.
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
arXiv Detail & Related papers (2022-09-29T09:08:04Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - Learning Feature Decomposition for Domain Adaptive Monocular Depth
Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z) - Decompose to Adapt: Cross-domain Object Detection via Feature
Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z) - META: Mimicking Embedding via oThers' Aggregation for Generalizable
Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness for unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which works without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Learning Domain-invariant Graph for Adaptive Semi-supervised Domain
Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z) - Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed to solve domain generalization by learning domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.