Continual Adaptation of Visual Representations via Domain Randomization
and Meta-learning
- URL: http://arxiv.org/abs/2012.04324v2
- Date: Thu, 8 Apr 2021 15:58:04 GMT
- Title: Continual Adaptation of Visual Representations via Domain Randomization
and Meta-learning
- Authors: Riccardo Volpi, Diane Larlus, Grégory Rogez
- Abstract summary: Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature.
We show that one way to learn models that are inherently more robust against forgetting is domain randomization.
We devise a meta-learning strategy where a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains.
- Score: 21.50683576864347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most standard learning approaches lead to fragile models which are prone to
drift when sequentially trained on samples of a different nature - the
well-known "catastrophic forgetting" issue. In particular, when a model
consecutively learns from different visual domains, it tends to forget the past
domains in favor of the most recent ones. In this context, we show that one way
to learn models that are inherently more robust against forgetting is domain
randomization - for vision tasks, randomizing the current domain's distribution
with heavy image manipulations. Building on this result, we devise a
meta-learning strategy where a regularizer explicitly penalizes any loss
associated with transferring the model from the current domain to different
"auxiliary" meta-domains, while also easing adaptation to them. Such
meta-domains are also generated through randomized image manipulations. We
empirically demonstrate in a variety of experiments - spanning from
classification to semantic segmentation - that our approach results in models
that are less prone to catastrophic forgetting when transferred to new domains.
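The recipe described in the abstract lends itself to a short sketch. Below is a minimal, first-order PyTorch sketch under the following assumptions: `heavy_randomization` is an illustrative stand-in for the paper's "heavy image manipulations", and `meta_regularized_step`, `inner_lr` and `reg_weight` are hypothetical names; this is not the authors' released implementation. The idea: generate an "auxiliary" meta-domain by randomizing the current batch, simulate one adaptation step on the current domain, and penalize the loss the adapted model incurs on the meta-domain.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def heavy_randomization(images: torch.Tensor) -> torch.Tensor:
    """Stand-in for 'heavy image manipulations' (illustrative choices only):
    random channel permutation, brightness/contrast jitter, additive noise."""
    b, c, _, _ = images.shape
    perm = torch.randperm(c, device=images.device)
    out = images[:, perm]                                              # shuffle color channels
    gain = 0.5 + torch.rand(b, 1, 1, 1, device=images.device)          # contrast in [0.5, 1.5)
    bias = 0.2 * (torch.rand(b, 1, 1, 1, device=images.device) - 0.5)  # brightness shift
    out = gain * out + bias + 0.05 * torch.randn_like(out)             # additive pixel noise
    return out.clamp(0.0, 1.0)


def meta_regularized_step(model: nn.Module, images: torch.Tensor, labels: torch.Tensor,
                          inner_lr: float = 0.01, reg_weight: float = 1.0) -> torch.Tensor:
    """Task loss on the current domain plus a meta-regularizer that penalizes the loss
    obtained after a simulated one-step transfer to a randomized meta-domain."""
    # Standard task loss on the current visual domain.
    loss_current = F.cross_entropy(model(images), labels)

    # Generate an "auxiliary" meta-domain by randomizing the same batch.
    meta_images = heavy_randomization(images)

    # Simulate one adaptation step on the current domain (differentiable, MAML-style).
    params = dict(model.named_parameters())
    grads = torch.autograd.grad(loss_current, list(params.values()), create_graph=True)
    adapted = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}

    # Penalize the loss the adapted model suffers on the meta-domain.
    meta_logits = torch.func.functional_call(model, adapted, (meta_images,))
    loss_meta = F.cross_entropy(meta_logits, labels)

    return loss_current + reg_weight * loss_meta
```

In a training loop over the current domain one would call `loss = meta_regularized_step(model, images, labels)`, then `loss.backward()` and an optimizer step; gradients flow through the simulated adaptation because the inner step is taken with `create_graph=True`.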
Related papers
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z) - Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D^3G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z) - Domain-General Crowd Counting in Unseen Scenarios [25.171343652312974]
Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios.
We introduce a dynamic sub-domain division scheme which divides the source domain into multiple sub-domains.
In order to disentangle domain-invariant information from domain-specific information in image features, we design the domain-invariant and -specific crowd memory modules.
arXiv Detail & Related papers (2022-12-05T19:52:28Z) - Multi-Domain Long-Tailed Learning by Augmenting Disentangled
Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built upon a proposed selective balanced sampling strategy, TALLY achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
arXiv Detail & Related papers (2022-10-25T21:54:26Z) - Forget Less, Count Better: A Domain-Incremental Self-Distillation
Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims at alleviating catastrophic forgetting and improving generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z) - A Domain Gap Aware Generative Adversarial Network for Multi-domain Image
Translation [22.47113158859034]
The paper proposes a unified model to translate images across multiple domains with significant domain gaps.
With a single unified generator, the model can maintain consistency over the global shapes as well as the local texture information across multiple domains.
arXiv Detail & Related papers (2021-10-21T00:33:06Z) - Domain Generalization via Gradient Surgery [5.38147998080533]
In real-life applications, machine learning models often face scenarios where there is a change in data distribution between training and test domains.
In this work, we characterize the conflicting gradients emerging in domain shift scenarios and devise novel gradient agreement strategies (a generic agreement rule is sketched after this list).
arXiv Detail & Related papers (2021-08-03T16:49:25Z) - Semi-supervised Meta-learning with Disentanglement for
Domain-generalised Medical Image Segmentation [15.351113774542839]
Generalising models to new data from new centres (termed here domains) remains a challenge.
We propose a novel semi-supervised meta-learning framework with disentanglement.
We show that the proposed method is robust on different segmentation tasks and achieves state-of-the-art generalisation performance on two public benchmarks.
arXiv Detail & Related papers (2021-06-24T19:50:07Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have far fewer annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z) - CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
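The "Domain Generalization via Gradient Surgery" entry above mentions gradient agreement strategies; the sketch below illustrates a generic sign-agreement rule (not that paper's exact algorithm, and `agreement_update` is a hypothetical name), which keeps only the gradient coordinates on which all domains agree.

```python
import torch


def agreement_update(per_domain_grads: list) -> torch.Tensor:
    """Illustrative gradient-agreement rule: average flattened per-domain gradients,
    zeroing out coordinates whose signs conflict across domains."""
    stacked = torch.stack(per_domain_grads)      # (num_domains, num_params)
    signs = torch.sign(stacked)
    agree = (signs == signs[0]).all(dim=0)       # True where every domain shares the sign
    return stacked.mean(dim=0) * agree           # conflicting coordinates are dropped
```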