Learning with Style: Continual Semantic Segmentation Across Tasks and Domains
- URL: http://arxiv.org/abs/2210.07016v1
- Date: Thu, 13 Oct 2022 13:24:34 GMT
- Title: Learning with Style: Continual Semantic Segmentation Across Tasks and Domains
- Authors: Marco Toldo, Umberto Michieli, Pietro Zanuttigh
- Abstract summary: Domain adaptation and class incremental learning deal with domain and task variability separately, whereas their unified solution is still an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
- Score: 25.137859989323537
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning models dealing with image understanding in real-world settings
must be able to adapt to a wide variety of tasks across different domains.
Domain adaptation and class incremental learning deal with domain and task
variability separately, whereas their unified solution is still an open
problem. We tackle both facets of the problem together, taking into account the
semantic shift within both input and label spaces. We start by formally
introducing continual learning under task and domain shift. Then, we address
the proposed setup by using style transfer techniques to extend knowledge
across domains when learning incremental tasks and a robust distillation
framework to effectively recollect task knowledge under incremental domain
shift. The devised framework (LwS, Learning with Style) is able to generalize
incrementally acquired task knowledge across all the domains encountered,
proving to be robust against catastrophic forgetting. Extensive experimental
evaluation on multiple autonomous driving datasets shows how the proposed
method outperforms existing approaches, which prove to be ill-equipped to deal
with continual semantic segmentation under both task and domain shift.
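The abstract names two mechanisms: style transfer to propagate knowledge across domains, and distillation to retain previously learned task knowledge. As a rough, hypothetical illustration only (the paper's actual formulation is not reproduced on this page), a common way to realize these two pieces is AdaIN-style feature-statistics transfer plus an output-level distillation loss; the sketch below assumes PyTorch and generic segmentation logits:

```python
# Hypothetical sketch only: AdaIN-style statistics transfer stands in for
# the paper's (unspecified here) style transfer step, and a temperature-
# scaled KL term stands in for its distillation objective.
import torch
import torch.nn.functional as F

def adain_style_transfer(content_feat, style_feat, eps=1e-5):
    """Re-normalize content features (N, C, H, W) with the channel-wise
    statistics of features from another domain (AdaIN, Huang & Belongie)."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def distillation_loss(new_logits, old_logits, T=2.0):
    """KL divergence between the current model and a frozen copy of the
    previous one, the usual recipe against catastrophic forgetting."""
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T
```

In a setup like this, a frozen copy of the previous model would supply old_logits on stylized inputs, so old task knowledge is rehearsed under the feature statistics of newly encountered domains.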
Related papers
- Learning Good Features to Transfer Across Tasks and Domains [16.05821129333396]
We first show that such knowledge can be shared across tasks by learning a mapping between task-specific deep features in a given domain.
Then, we show that this mapping function, implemented by a neural network, is able to generalize to novel unseen domains.
arXiv Detail & Related papers (2023-01-26T18:49:39Z)
- Learn what matters: cross-domain imitation learning with task-relevant embeddings [77.34726150561087]
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent.
We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge.
arXiv Detail & Related papers (2022-09-24T21:56:58Z)
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness [6.648670454325191]
A typical setting across current meta-learning algorithms assumes a stationary task distribution during meta-training.
We consider realistic scenarios where the task distribution is highly imbalanced and domain labels are inherently unavailable.
We propose a kernel-based method for domain change detection and a difficulty-aware memory management mechanism (a generic sketch of kernel-based change detection appears after this list).
arXiv Detail & Related papers (2021-09-29T00:53:09Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains to a common latent space.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Cross-domain Imitation from Observations [50.669343548588294]
Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior.
In this paper, we study the problem of how to imitate tasks when there exist discrepancies between the expert and agent MDPs.
We present a novel framework to learn correspondences across such domains.
arXiv Detail & Related papers (2021-05-20T21:08:25Z)
- Domain-Robust Visual Imitation Learning with Mutual Information Constraints [0.0]
We introduce a new algorithm called Disentangling Generative Adversarial Imitation Learning (DisentanGAIL).
Our algorithm enables autonomous agents to learn directly from high dimensional observations of an expert performing a task.
arXiv Detail & Related papers (2021-03-08T21:18:58Z)
- Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation [9.203485172547824]
We present a novel approach to learn domain adaptive knowledge in models with limited memory.
We propose a multi-level distillation strategy to effectively distil knowledge at different levels.
We carry out extensive experiments and ablation studies on real-to-real as well as synthetic-to-real scenarios.
arXiv Detail & Related papers (2020-11-03T03:01:09Z)
- Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation [165.61511788237485]
Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain.
We propose a dynamic task-oriented disentangling network (DTDN) to learn disentangled representations in an end-to-end fashion for UDA.
arXiv Detail & Related papers (2020-07-27T01:21:18Z)
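Regarding the kernel-based domain change detection mentioned in the Meta Learning on a Sequence of Imbalanced Domains entry above: a standard kernel statistic for comparing two batches of features is the maximum mean discrepancy (MMD). The sketch below is a generic, hypothetical illustration, not the paper's detector; the RBF bandwidth and the decision threshold are assumed values.

```python
# Hypothetical illustration: maximum mean discrepancy (MMD) with an RBF
# kernel as a generic kernel-based change detector. The bandwidth and
# threshold are assumed values, not taken from the paper.
import torch

def rbf_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two feature batches of shape (n, d)."""
    sq_dist = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dist / (2 * sigma ** 2))

def mmd_sq(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between x and y."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

def domain_changed(recent_feats, incoming_feats, threshold=0.05):
    """Flag a domain boundary when the discrepancy between recent and
    incoming feature batches exceeds the threshold."""
    return mmd_sq(recent_feats, incoming_feats).item() > threshold
```

A detector of this kind would compare a sliding window of recent features against incoming ones and signal a domain boundary when the statistic spikes.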
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.