Domain Adaptive Knowledge Distillation for Driving Scene Semantic
Segmentation
- URL: http://arxiv.org/abs/2011.08007v2
- Date: Thu, 26 Nov 2020 13:02:27 GMT
- Title: Domain Adaptive Knowledge Distillation for Driving Scene Semantic
Segmentation
- Authors: Divya Kothandaraman, Athira Nambiar, Anurag Mittal
- Abstract summary: We present a novel approach to learn domain adaptive knowledge in models with limited memory.
We propose a multi-level distillation strategy to effectively distil knowledge at different levels.
We carry out extensive experiments and ablation studies on real-to-real as well as synthetic-to-real scenarios.
- Score: 9.203485172547824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Practical autonomous driving systems face two crucial challenges: memory
constraints and domain gap issues. In this paper, we present a novel approach
to learn domain adaptive knowledge in models with limited memory, thus
bestowing the model with the ability to deal with these issues in a
comprehensive manner. We term this as "Domain Adaptive Knowledge Distillation"
and address the same in the context of unsupervised domain-adaptive semantic
segmentation by proposing a multi-level distillation strategy to effectively
distil knowledge at different levels. Further, we introduce a novel cross
entropy loss that leverages pseudo labels from the teacher. These pseudo
teacher labels play a multifaceted role: (i) knowledge distillation
from the teacher network to the student network, and (ii) serving as a proxy for
the ground truth for target domain images, where the problem is completely
unsupervised. We introduce four paradigms for distilling domain adaptive
knowledge and carry out extensive experiments and ablation studies on
real-to-real as well as synthetic-to-real scenarios. Our experiments
demonstrate the profound success of our proposed method.
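The cross-entropy loss on teacher pseudo labels described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function names, the (H, W, C) logit shapes, and the use of a hard argmax to form pseudo labels are all assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_label_ce(teacher_logits, student_logits):
    """Cross-entropy between the teacher's hard pseudo labels and the
    student's predictions, averaged over all pixels of one image.

    Both inputs are (H, W, C) per-pixel class logits. The teacher's argmax
    acts as a proxy ground truth for unlabeled target-domain images.
    """
    pseudo = teacher_logits.argmax(axis=-1)        # (H, W) hard pseudo labels
    probs = softmax(student_logits)                # (H, W, C) student posteriors
    h, w, c = probs.shape
    # Pick the student probability assigned to each pseudo label.
    picked = probs.reshape(-1, c)[np.arange(h * w), pseudo.ravel()]
    return float(-np.log(picked + 1e-12).mean())
```

Minimizing this quantity pulls the memory-limited student toward the teacher's decisions on target-domain images, which is exactly the dual role the abstract assigns to the pseudo labels.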
Related papers
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
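One plausible reading of the Fourier-transform adapter is a frequency-domain split of intermediate feature maps. The sketch below is purely illustrative and is not the 4Ds method: it uses a fixed ideal low-pass mask where the paper's adapter is learnable, and the function name and cutoff radius are hypothetical.

```python
import numpy as np

def frequency_split(feature, radius=4):
    """Split a 2-D feature map into low- and high-frequency components
    using an ideal low-pass mask in the Fourier domain.

    The hard circular mask and fixed `radius` stand in for a learnable
    adapter that would decide which frequencies to route where.
    """
    h, w = feature.shape
    spectrum = np.fft.fftshift(np.fft.fft2(feature))
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    high = feature - low                 # residual carries high frequencies
    return low, high
```

Under this reading, one stream (e.g. the smoother, low-frequency component) would be treated as the part worth transferring to the student, while the residual stays domain-specific; how the two streams are actually weighted is what the learnable adapter and fusion-activation mechanism would decide.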
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
- Transfer RL via the Undo Maps Formalism [29.798971172941627]
Transferring knowledge across domains is one of the most fundamental problems in machine learning.
We propose TvD: transfer via distribution matching, a framework to transfer knowledge across interactive domains.
We show this objective leads to a policy update scheme reminiscent of imitation learning, and derive an efficient algorithm to implement it.
arXiv Detail & Related papers (2022-11-26T03:44:28Z)
- Learning with Style: Continual Semantic Segmentation Across Tasks and Domains [25.137859989323537]
Domain adaptation and class incremental learning deal with domain and task variability separately, whereas their unified solution is still an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
arXiv Detail & Related papers (2022-10-13T13:24:34Z)
- Learn what matters: cross-domain imitation learning with task-relevant embeddings [77.34726150561087]
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent.
We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge.
arXiv Detail & Related papers (2022-09-24T21:56:58Z)
- Cross-domain Imitation from Observations [50.669343548588294]
Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior.
In this paper, we study the problem of how to imitate tasks when there exist discrepancies between the expert and agent MDP.
We present a novel framework to learn correspondences across such domains.
arXiv Detail & Related papers (2021-05-20T21:08:25Z)
- Towards Recognizing New Semantic Concepts in New Visual Domains [9.701036831490768]
We argue that it is crucial to design deep architectures that can operate in previously unseen visual domains and recognize novel semantic concepts.
In the first part of the thesis, we describe different solutions to enable deep models to generalize to new visual domains.
In the second part, we show how to extend the knowledge of a pretrained deep model to new semantic concepts, without access to the original training set.
arXiv Detail & Related papers (2020-12-16T16:23:40Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Unsupervised Transfer Learning with Self-Supervised Remedy [60.315835711438936]
Generalising deep networks to novel domains without manual labels is a key challenge for deep learning.
Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains.
In this work, we aim to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related domains.
arXiv Detail & Related papers (2020-06-08T16:42:17Z)
- Continuous Domain Adaptation with Variational Domain-Agnostic Feature Replay [78.7472257594881]
Learning in non-stationary environments is one of the biggest challenges in machine learning.
Non-stationarity can be caused by either task drift or domain drift.
We propose variational domain-agnostic feature replay, an approach that is composed of three components.
arXiv Detail & Related papers (2020-03-09T19:50:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.