Disentanglement by Cyclic Reconstruction
- URL: http://arxiv.org/abs/2112.12980v1
- Date: Fri, 24 Dec 2021 07:47:59 GMT
- Title: Disentanglement by Cyclic Reconstruction
- Authors: David Bertoin, Emmanuel Rachelson (DMIA)
- Abstract summary: In supervised learning, information specific to the dataset used for training, but irrelevant to the task at hand, may remain encoded in the extracted representations.
We propose splitting the information into a task-related representation and its complementary context representation.
We then adapt this method to the unsupervised domain adaptation problem, which consists of training a model that performs well on both a source and a target domain.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have demonstrated their ability to automatically extract
meaningful features from data. However, in supervised learning, information
specific to the dataset used for training, but irrelevant to the task at hand,
may remain encoded in the extracted representations. This remaining information
introduces a domain-specific bias, weakening the generalization performance. In
this work, we propose splitting the information into a task-related
representation and its complementary context representation. We propose an
original method, combining adversarial feature predictors and cyclic
reconstruction, to disentangle these two representations in the single-domain
supervised case. We then adapt this method to the unsupervised domain
adaptation problem, consisting of training a model capable of performing on
both a source and a target domain. In particular, our method promotes
disentanglement in the target domain, despite the absence of training labels.
This enables the isolation of task-specific information from both domains and a
projection into a common representation. The task-specific representation
allows efficient transfer of knowledge acquired from the source domain to the
target domain. In the single-domain case, we demonstrate the quality of our
representations on information retrieval tasks and the generalization benefits
induced by sharpened task-specific representations. We then validate the
proposed method on several classical domain adaptation benchmarks and
illustrate the benefits of disentanglement for domain adaptation.
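Based only on the description in the abstract (the authors' actual implementation may differ), the single-domain setup can be sketched as follows: a task encoder and a context encoder, two adversarial feature predictors that try to recover each representation from the other, and reconstruction plus decode/re-encode terms that keep the pair of codes faithful to the input. All module names, dimensions, loss choices, and weights below are illustrative assumptions.

```python
# Hedged sketch of disentanglement via adversarial feature predictors and a
# cyclic (decode/re-encode) reconstruction term, following the abstract above.
# This is NOT the authors' code; every hyperparameter here is an assumption.
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_out))

x_dim, t_dim, c_dim, n_classes = 784, 64, 64, 10   # toy sizes

task_enc = mlp(x_dim, t_dim)            # task-related representation z_t
ctx_enc = mlp(x_dim, c_dim)             # complementary context representation z_c
classifier = mlp(t_dim, n_classes)      # supervised head, reads z_t only
decoder = mlp(t_dim + c_dim, x_dim)     # reconstructs x from (z_t, z_c)
pred_t_from_c = mlp(c_dim, t_dim)       # adversarial predictor: z_c -> z_t
pred_c_from_t = mlp(t_dim, c_dim)       # adversarial predictor: z_t -> z_c

opt_main = torch.optim.Adam(
    list(task_enc.parameters()) + list(ctx_enc.parameters())
    + list(classifier.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(
    list(pred_t_from_c.parameters()) + list(pred_c_from_t.parameters()), lr=1e-4)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def train_step(x, y, lam_rec=1.0, lam_cyc=1.0, lam_adv=0.1):
    # (a) The adversarial predictors learn to recover each code from the other.
    z_t, z_c = task_enc(x).detach(), ctx_enc(x).detach()
    adv_loss = mse(pred_t_from_c(z_c), z_t) + mse(pred_c_from_t(z_t), z_c)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # (b) Encoders, classifier and decoder: classify from z_t, reconstruct x from
    #     both codes, keep the codes stable through a decode/re-encode cycle, and
    #     make the (now frozen) predictors fail, pushing the two codes apart.
    z_t, z_c = task_enc(x), ctx_enc(x)
    x_hat = decoder(torch.cat([z_t, z_c], dim=1))
    cls_loss = ce(classifier(z_t), y)
    rec_loss = mse(x_hat, x)
    cyc_loss = mse(task_enc(x_hat), z_t) + mse(ctx_enc(x_hat), z_c)
    leak = mse(pred_t_from_c(z_c), z_t) + mse(pred_c_from_t(z_t), z_c)
    loss = cls_loss + lam_rec * rec_loss + lam_cyc * cyc_loss - lam_adv * leak
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return cls_loss.item(), rec_loss.item(), leak.item()
```

For the unsupervised domain adaptation setting, the same encoders would additionally process unlabeled target-domain batches: the reconstruction, cycle, and adversarial terms require no labels, so they can promote disentanglement on the target domain while the classification term is applied to source data only, roughly mirroring the behaviour the abstract describes.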
Related papers
- Causally Inspired Regularization Enables Domain General Representations [14.036422506623383]
Given a causal graph representing the data-generating process shared across different domains/distributions, enforcing sufficient graph-implied conditional independencies can identify domain-general (non-spurious) feature representations.
We propose a novel framework with regularizations, which we demonstrate are sufficient for identifying domain-general feature representations without a priori knowledge (or proxies) of the spurious features.
Our proposed method is effective for both (semi) synthetic and real-world data, outperforming other state-of-the-art methods in average and worst-domain transfer accuracy.
arXiv Detail & Related papers (2024-04-25T01:33:55Z)
- Unsupervised Domain Adaptation for Point Cloud Semantic Segmentation via Graph Matching [14.876681993079062]
We propose a graph-based framework to explore the local-level feature alignment between the two domains.
We also formulate a category-guided contrastive loss to guide the segmentation model to learn discriminative features on the target domain.
arXiv Detail & Related papers (2022-08-09T02:30:15Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of transfer experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Self-Supervised Domain Adaptation with Consistency Training [0.2462953128215087]
We consider the problem of unsupervised domain adaptation for image classification.
We create a self-supervised pretext task by augmenting the unlabeled data with a certain type of transformation.
We force the representation of the augmented data to be consistent with that of the original data (see the sketch after this list).
arXiv Detail & Related papers (2020-10-15T06:03:47Z)
- Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue for transferring knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
- Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation [165.61511788237485]
Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain.
We propose a dynamic task-oriented disentangling network (DTDN) to learn disentangled representations in an end-to-end fashion for UDA.
arXiv Detail & Related papers (2020-07-27T01:21:18Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)
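The "Self-Supervised Domain Adaptation with Consistency Training" entry above describes its mechanism concretely enough to sketch. Below is a minimal, hedged rendering of that idea only (the paper's actual pretext task, augmentation, and loss may differ): unlabeled target-domain images are augmented, and the encoder is penalized when the representation of the augmented view drifts from that of the original. The `augment` function and the use of an MSE loss are illustrative assumptions.

```python
# Hedged sketch of consistency training on unlabeled target-domain data.
import torch
import torch.nn.functional as F

def augment(x):
    # Illustrative augmentation: random horizontal flip plus small Gaussian noise.
    if torch.rand(()) < 0.5:
        x = torch.flip(x, dims=[-1])
    return x + 0.05 * torch.randn_like(x)

def consistency_loss(encoder, x_target):
    """x_target: batch of unlabeled target-domain images, shape (N, C, H, W)."""
    with torch.no_grad():
        z_ref = encoder(x_target)       # representation of the original images
    z_aug = encoder(augment(x_target))  # representation of the augmented views
    return F.mse_loss(z_aug, z_ref)     # push the two representations together
```

In a training loop, this term would be added to the usual supervised loss on labeled source data, so the encoder learns target-domain invariances without target labels.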