Latent Domain Learning with Dynamic Residual Adapters
- URL: http://arxiv.org/abs/2006.00996v1
- Date: Mon, 1 Jun 2020 15:00:11 GMT
- Title: Latent Domain Learning with Dynamic Residual Adapters
- Authors: Lucas Deecke, Timothy Hospedales, Hakan Bilen
- Abstract summary: A practical shortcoming of deep neural networks is their specialization to a single task and domain.
Here we focus on a less explored, but more realistic case: learning from data from multiple domains, without access to domain annotations.
We address this limitation via dynamic residual adapters, an adaptive gating mechanism that helps account for latent domains.
- Score: 26.018759356470767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A practical shortcoming of deep neural networks is their specialization to a
single task and domain. While recent techniques in domain adaptation and
multi-domain learning enable the learning of more domain-agnostic features,
their success relies on the presence of domain labels, typically requiring
manual annotation and careful curation of datasets. Here we focus on a less
explored, but more realistic case: learning from data from multiple domains,
without access to domain annotations. In this scenario, standard model training
leads to the overfitting of large domains, while disregarding smaller ones. We
address this limitation via dynamic residual adapters, an adaptive gating
mechanism that helps account for latent domains, coupled with an augmentation
strategy inspired by recent style transfer techniques. Our proposed approach is
examined on image classification tasks containing multiple latent domains, and
we showcase its ability to obtain robust performance across these. Dynamic
residual adapters significantly outperform off-the-shelf networks with much
larger capacity, and can be incorporated seamlessly with existing architectures
in an end-to-end manner.
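The adaptive gating idea described in the abstract can be illustrated with a small sketch: a gating function predicts soft assignments over K latent domains for each input, and the residual correction added to the backbone features is the gate-weighted mixture of K lightweight adapter transforms. The sketch below is a minimal NumPy illustration under our own assumptions (linear adapters, a single-layer softmax gate, and the near-zero initialization are illustrative choices, not the authors' exact design).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class DynamicResidualAdapter:
    """Hypothetical sketch of a dynamic residual adapter layer.

    Each of K lightweight adapters is a linear map; a gating network
    assigns every input a soft distribution over the K latent domains,
    and the gated mixture of adapter outputs is added residually.
    """

    def __init__(self, dim, num_latent_domains, seed=0):
        rng = np.random.default_rng(seed)
        self.K = num_latent_domains
        # K adapter matrices, initialized near zero so the residual
        # path starts close to the identity mapping.
        self.adapters = 0.01 * rng.standard_normal((self.K, dim, dim))
        # Gating network: a single linear layer over the features.
        self.gate_w = 0.01 * rng.standard_normal((dim, self.K))

    def __call__(self, x):
        # x: (batch, dim) backbone features.
        gates = softmax(x @ self.gate_w)                      # (batch, K)
        # Per-adapter corrections: (K, batch, dim).
        corrections = np.einsum('kdo,bd->kbo', self.adapters, x)
        # Gate-weighted mixture, added back residually.
        mixed = np.einsum('bk,kbo->bo', gates, corrections)
        return x + mixed

layer = DynamicResidualAdapter(dim=8, num_latent_domains=3)
features = np.ones((4, 8))
out = layer(features)
print(out.shape)  # (4, 8)
```

Because the gates are a function of the input itself, no domain labels are needed at training or test time; each sample routes itself softly across the latent-domain adapters, which is the property the abstract highlights.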
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study the multi-source Domain Generalization of text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Multi-scale Feature Alignment for Continual Learning of Unlabeled Domains [3.9498537297431167]
Generative feature-driven image replay, in conjunction with a dual-purpose discriminator, enables the generation of images with realistic features for replay.
We present detailed ablation experiments studying the components of our proposed method and demonstrate a possible use case of our continual UDA method on an unsupervised patch-based segmentation task.
arXiv Detail & Related papers (2023-02-02T18:19:01Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Towards Recognizing New Semantic Concepts in New Visual Domains [9.701036831490768]
We argue that it is crucial to design deep architectures that can operate in previously unseen visual domains and recognize novel semantic concepts.
In the first part of the thesis, we describe different solutions to enable deep models to generalize to new visual domains.
In the second part, we show how to extend the knowledge of a pretrained deep model to new semantic concepts, without access to the original training set.
arXiv Detail & Related papers (2020-12-16T16:23:40Z)
- Towards Adaptive Semantic Segmentation by Progressive Feature Refinement [16.40758125170239]
We propose an innovative progressive feature refinement framework, along with domain adversarial learning to boost the transferability of segmentation networks.
As a result, the segmentation models trained with source domain images can be transferred to a target domain without significant performance degradation.
arXiv Detail & Related papers (2020-09-30T04:17:48Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z)
- Learning to adapt class-specific features across domains for semantic segmentation [36.36210909649728]
In this thesis, we present a novel architecture, which learns to adapt features across domains by taking into account per class information.
We adopt the recently introduced StarGAN architecture as the image translation backbone, since it is able to perform translations across multiple domains by means of a single generator network.
arXiv Detail & Related papers (2020-01-22T23:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.