Domain Transformer: Predicting Samples of Unseen, Future Domains
- URL: http://arxiv.org/abs/2106.06057v1
- Date: Thu, 10 Jun 2021 21:20:00 GMT
- Title: Domain Transformer: Predicting Samples of Unseen, Future Domains
- Authors: Johannes Schneider
- Abstract summary: We learn a domain transformer in an unsupervised manner that allows generating data of unseen domains.
Our approach first matches independently learned latent representations of two given domains obtained from an auto-encoder using a Cycle-GAN.
In turn, a transformation of the original samples can be learned that can be applied iteratively to extrapolate to unseen domains.
- Score: 1.7310589008573272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The data distribution commonly evolves over time leading to problems such as
concept drift that often decrease classifier performance. We seek to predict
unseen data (and their labels) allowing us to tackle challenges due to a
non-constant data distribution in a *proactive* manner rather than
detecting and reacting to changes that may already have led to errors. To this
end, we learn a domain transformer in an unsupervised manner
that allows generating data of unseen domains. Our approach first matches
independently learned latent representations of two given domains obtained from
an auto-encoder using a Cycle-GAN. In turn, a transformation of the original
samples can be learned that can be applied iteratively to extrapolate to unseen
domains. Our evaluation with CNNs on image data confirms the usefulness of the
approach. It also achieves very good results on the well-known problem of
unsupervised domain adaptation, where labels but not samples have to be
predicted.
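For concreteness, the following PyTorch code is a minimal sketch of the pipeline outlined in the abstract, reconstructed from the abstract alone: the names (`DomainAE`, `LatentMapper`, `domain_transform`, `extrapolate`), the layer sizes, and the omission of the adversarial and cycle-consistency training of the latent mapping are all simplifying assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DomainAE(nn.Module):
    """Auto-encoder learned independently on a single domain."""
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class LatentMapper(nn.Module):
    """CycleGAN-style generator mapping latent codes of domain A to domain B.
    A mirror mapper B -> A plus adversarial and cycle-consistency losses would
    be needed for training; the training loop is omitted here."""
    def __init__(self, latent=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, latent))

    def forward(self, z):
        return self.net(z)

@torch.no_grad()
def domain_transform(x, ae_a, ae_b, g_ab):
    """Turn a sample of domain A into a predicted sample of domain B:
    encode with A's encoder, translate the latent code, decode with B's decoder."""
    return ae_b.dec(g_ab(ae_a.enc(x)))

@torch.no_grad()
def extrapolate(x, ae_a, ae_b, g_ab, steps=3):
    """Apply the learned sample transformation repeatedly to extrapolate
    towards unseen, future domains (A -> B -> beyond B)."""
    for _ in range(steps):
        x = domain_transform(x, ae_a, ae_b, g_ab)
    return x

# Usage (with components trained beforehand):
# ae_a, ae_b, g_ab = DomainAE(), DomainAE(), LatentMapper()
# future_batch = extrapolate(torch.randn(16, 784), ae_a, ae_b, g_ab, steps=2)
```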
Related papers
- AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation [10.958821619282748]
We present an unsupervised domain adaptation (UDA) method named AdaptDiff.
It enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities.
Our results demonstrate a significant improvement in segmentation performance across all unseen datasets.
arXiv Detail & Related papers (2024-10-06T23:04:29Z)
- Probabilistic Test-Time Generalization by Variational Neighbor-Labeling [62.158807685159736]
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains.
It introduces probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time.
Variational neighbor labels incorporate the information of neighboring target samples to generate more robust pseudo-labels.
arXiv Detail & Related papers (2023-07-08T18:58:08Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Unsupervised Out-of-Domain Detection via Pre-trained Transformers [56.689635664358256]
Out-of-domain inputs can lead to unpredictable outputs and sometimes catastrophic safety issues.
Our work tackles the problem of detecting out-of-domain samples with only unsupervised in-domain data.
Two domain-specific fine-tuning approaches are further proposed to boost detection accuracy.
arXiv Detail & Related papers (2021-06-02T05:21:25Z)
- A Free Lunch for Unsupervised Domain Adaptive Object Detection without Source Data [69.091485888121]
Unsupervised domain adaptation assumes that source and target domain data are freely available and are usually trained on together to reduce the domain gap.
We propose a source data-free domain adaptive object detection (SFOD) framework via modeling it into a problem of learning with noisy labels.
arXiv Detail & Related papers (2020-12-10T01:42:35Z)
- Semi-supervised Collaborative Filtering by Text-enhanced Domain Adaptation [32.93934837792708]
We consider the problem of recommendation on sparse implicit feedbacks as a semi-supervised learning task.
We focus on the most challenging case -- there is no user or item overlap.
We adopt domain-invariant textual features as the anchor points to align the latent spaces.
arXiv Detail & Related papers (2020-06-28T05:28:05Z)
- Improving Adversarial Robustness via Unlabeled Out-of-Domain Data [30.58040078862511]
We investigate how adversarial robustness can be enhanced by leveraging out-of-domain unlabeled data.
We show settings where we achieve better adversarial robustness when the unlabeled data come from a shifted domain rather than the same domain as the labeled data.
arXiv Detail & Related papers (2020-06-15T15:25:56Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance (a minimal code sketch of this self-training loop is given below).
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
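To make the gradual self-training idea from the last entry concrete, here is a minimal sketch with hard (sharpened) pseudo-labels and a regularized classifier, using scikit-learn. The helper name `gradual_self_train` and the choice of logistic regression are illustrative assumptions, not the cited paper's actual setup.

```python
from sklearn.linear_model import LogisticRegression

def gradual_self_train(clf, unlabeled_domains, C=0.1):
    """Adapt a source-trained classifier along a sequence of unlabeled,
    gradually shifting domains via self-training."""
    for X in unlabeled_domains:
        pseudo = clf.predict(X)  # hard pseudo-labels (label sharpening)
        # Refit a regularized classifier on the pseudo-labelled intermediate domain.
        clf = LogisticRegression(C=C, max_iter=1000).fit(X, pseudo)
    return clf

# Usage: train on labelled source data, then walk through intermediate domains.
# clf = LogisticRegression(C=0.1, max_iter=1000).fit(X_source, y_source)
# clf = gradual_self_train(clf, [X_step1, X_step2, X_target])
```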
This list is automatically generated from the titles and abstracts of the papers on this site.