Feed-Forward Latent Domain Adaptation
- URL: http://arxiv.org/abs/2207.07624v2
- Date: Wed, 31 Jan 2024 19:19:45 GMT
- Title: Feed-Forward Latent Domain Adaptation
- Authors: Ondrej Bohdal, Da Li, Shell Xu Hu, Timothy Hospedales
- Abstract summary: We study a new highly practical problem setting that enables resource-constrained edge devices to adapt a pre-trained model to their local data distributions.
Considering limitations of edge devices, we aim to only use a pre-trained model and adapt it in a feed-forward way, without using back-propagation and without access to the source data.
Our solution is to meta-learn a network capable of embedding the mixed-relevance target dataset and dynamically adapting inference for target examples using cross-attention.
- Score: 17.71179872529747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a new highly practical problem setting that enables
resource-constrained edge devices to adapt a pre-trained model to their local
data distributions. Recognizing that a device's data are likely to come from
multiple latent domains that include a mixture of unlabelled domain-relevant
and domain-irrelevant examples, we focus on the comparatively under-studied
problem of latent domain adaptation. Considering limitations of edge devices,
we aim to only use a pre-trained model and adapt it in a feed-forward way,
without using back-propagation and without access to the source data. Modelling
these realistic constraints brings us to the novel and practically important
problem setting of feed-forward latent domain adaptation. Our solution is to
meta-learn a network capable of embedding the mixed-relevance target dataset
and dynamically adapting inference for target examples using cross-attention.
The resulting framework leads to consistent improvements over strong ERM
baselines. We also show that our framework sometimes even improves on the upper
bound of domain-supervised adaptation, where only domain-relevant instances are
provided for adaptation. This suggests that human annotated domain labels may
not always be optimal, and raises the possibility of doing better through
automated instance selection.
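The adaptation mechanism described above can be illustrated with a minimal sketch: target test examples attend over an embedded, unlabelled target dataset, and the attention weights act as soft instance selection. This is an illustrative NumPy sketch, not the authors' code; all function names, shapes, and the residual combination are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_adapt(query_feats, support_feats):
    """Adapt per-example features by attending over an unlabelled,
    mixed-relevance target dataset (hypothetical sketch of the idea)."""
    d_k = query_feats.shape[-1]
    # Scaled dot-product attention: queries are target test examples,
    # keys/values come from the embedded target dataset.
    scores = query_feats @ support_feats.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # soft, automatic instance selection
    adapted = weights @ support_feats   # feed-forward: no back-propagation
    # Residual combination keeps the original representation in the mix.
    return query_feats + adapted

rng = np.random.default_rng(0)
queries = rng.standard_normal((4, 16))   # 4 target examples
support = rng.standard_normal((32, 16))  # mixed-relevance target dataset
out = cross_attention_adapt(queries, support)
print(out.shape)  # (4, 16)
```

Because domain-irrelevant support examples receive low attention weight, the adapted representation is dominated by relevant instances, which is consistent with the paper's observation that automated instance selection can rival domain-supervised adaptation.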
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
In practice, however, target domains evolve over time, and restoring and adapting to such target data results in escalating computational and resource consumption.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Semi-Supervised Domain Adaptation with Auto-Encoder via Simultaneous Learning [18.601226898819476]
We present a new semi-supervised domain adaptation framework that combines a novel auto-encoder-based domain adaptation model with a simultaneous learning scheme.
Our framework holds strong distribution matching property by training both source and target auto-encoders.
arXiv Detail & Related papers (2022-10-18T00:10:11Z)
- Domain Adaptation from Scratch [24.612696638386623]
We present a new learning setup, "domain adaptation from scratch", which we believe to be crucial for extending the reach of NLP to sensitive domains.
In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain.
Our study compares several approaches for this challenging setup, ranging from data selection and domain adaptation algorithms to active learning paradigms.
arXiv Detail & Related papers (2022-09-02T05:55:09Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- The Norm Must Go On: Dynamic Unsupervised Domain Adaptation by Normalization [10.274423413222763]
Domain adaptation is crucial to adapt a learned model to new scenarios, such as domain shifts or changing data distributions.
Current approaches usually require a large amount of labeled or unlabeled data from the shifted domain.
We propose Dynamic Unsupervised Adaptation (DUA) to overcome this problem.
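Normalization-based adaptation of this kind typically updates a layer's running statistics from unlabelled target batches, with no labels and no gradients. The sketch below is a hypothetical illustration of that general idea, not the DUA implementation; the class name, momentum value, and update schedule are assumptions.

```python
import numpy as np

class AdaptiveNorm:
    """Normalization layer whose running statistics track the target
    domain (illustrative sketch of normalization-based adaptation)."""
    def __init__(self, dim, momentum=0.1):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.momentum = momentum

    def adapt(self, batch):
        # Update running statistics from an unlabelled target batch --
        # no labels and no back-propagation are needed.
        m = self.momentum
        self.mean = (1 - m) * self.mean + m * batch.mean(axis=0)
        self.var = (1 - m) * self.var + m * batch.var(axis=0)

    def __call__(self, x, eps=1e-5):
        return (x - self.mean) / np.sqrt(self.var + eps)

rng = np.random.default_rng(1)
layer = AdaptiveNorm(8)
# Target data shifted away from the source statistics (mean 0, var 1).
target = rng.standard_normal((64, 8)) * 3.0 + 5.0
for _ in range(50):
    layer.adapt(target)
z = layer(target)  # after adaptation, outputs are roughly standardized
```

After enough unlabelled target batches, the layer's statistics converge toward the target distribution, so downstream layers again see approximately zero-mean, unit-variance features despite the domain shift.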
arXiv Detail & Related papers (2021-12-01T12:43:41Z)
- Dynamic Feature Alignment for Semi-supervised Domain Adaptation [23.67093835143]
We propose to use dynamic feature alignment to address both inter- and intra-domain discrepancy.
Our approach, which doesn't require extensive tuning or adversarial training, significantly improves the state of the art for semi-supervised domain adaptation.
arXiv Detail & Related papers (2021-10-18T22:26:27Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.