Learning to Relate Depth and Semantics for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2105.07830v1
- Date: Mon, 17 May 2021 13:42:09 GMT
- Title: Learning to Relate Depth and Semantics for Unsupervised Domain
Adaptation
- Authors: Suman Saha, Anton Obukhov, Danda Pani Paudel, Menelaos Kanakis, Yuhua
Chen, Stamatios Georgoulis, Luc Van Gool
- Abstract summary: We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
- Score: 87.1188556802942
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present an approach for encoding visual task relationships to improve
model performance in an Unsupervised Domain Adaptation (UDA) setting. Semantic
segmentation and monocular depth estimation are shown to be complementary
tasks; in a multi-task learning setting, a proper encoding of their
relationships can further improve performance on both tasks. Motivated by this
observation, we propose a novel Cross-Task Relation Layer (CTRL), which encodes
task dependencies between the semantic and depth predictions. To capture the
cross-task relationships, we propose a neural network architecture that
contains task-specific and cross-task refinement heads. Furthermore, we propose
an Iterative Self-Learning (ISL) training scheme, which exploits semantic
pseudo-labels to provide extra supervision on the target domain. We
experimentally observe improvements in both tasks' performance because the
complementary information present in these tasks is better captured.
Specifically, we show that: (1) our approach improves performance on all tasks
when they are complementary and mutually dependent; (2) the CTRL helps to
improve the performance of both semantic segmentation and depth estimation in
the challenging UDA setting; (3) the proposed ISL training scheme further
improves the semantic segmentation performance. The implementation is available
at https://github.com/susaha/ctrl-uda.
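The abstract describes two components: a cross-task architecture (CTRL) with task-specific heads for semantics and depth plus a refinement head that consumes both predictions, and an ISL scheme that self-trains on semantic pseudo-labels from the target domain. Below is a minimal PyTorch sketch of these two ideas for orientation only; it is not the authors' implementation (see the linked repository for that), and the layer sizes, the concatenation-based fusion, and the 0.9 confidence threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossTaskRefinementNet(nn.Module):
    """Illustrative multi-task network: task-specific predictions for semantics
    and depth, followed by a cross-task refinement head that fuses both."""

    def __init__(self, feat_dim=256, num_classes=19):
        super().__init__()
        # Shared encoder stand-in (in practice a ResNet/DeepLab-style backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True))
        # Task-specific heads.
        self.sem_head = nn.Conv2d(feat_dim, num_classes, 1)
        self.depth_head = nn.Conv2d(feat_dim, 1, 1)
        # Cross-task refinement head: fuses shared features with both initial
        # predictions to produce refined outputs for each task.
        self.refine = nn.Sequential(
            nn.Conv2d(feat_dim + num_classes + 1, feat_dim, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, num_classes + 1, 1))

    def forward(self, x):
        feats = self.encoder(x)
        sem = self.sem_head(feats)       # initial semantic logits
        depth = self.depth_head(feats)   # initial depth map
        fused = torch.cat([feats, sem, depth], dim=1)
        refined = self.refine(fused)
        sem_ref, depth_ref = refined[:, :-1], refined[:, -1:]
        return sem, depth, sem_ref, depth_ref


def semantic_pseudo_labels(logits, threshold=0.9, ignore_index=255):
    """Confidence-thresholded pseudo-labels on the target domain, in the
    spirit of the Iterative Self-Learning (ISL) scheme."""
    probs = torch.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = ignore_index  # drop low-confidence pixels
    return labels
```

Running the module on a dummy batch, e.g. `CrossTaskRefinementNet()(torch.randn(1, 3, 128, 256))`, returns the initial and refined predictions for both tasks; in an ISL-style loop, `semantic_pseudo_labels` would be applied to target-domain logits between training rounds to provide the extra supervision mentioned above.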
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Composite Learning for Robust and Effective Dense Predictions [81.2055761433725]
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
arXiv Detail & Related papers (2022-10-13T17:59:16Z)
- Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, achieving state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z)
- Counting with Adaptive Auxiliary Learning [23.715818463425503]
This paper proposes an adaptive auxiliary task learning based approach for object counting problems.
We develop an attention-enhanced, adaptively shared backbone network to enable learning of both task-shared and task-tailored features.
Our method outperforms state-of-the-art auxiliary-task-learning-based counting methods.
arXiv Detail & Related papers (2022-03-08T13:10:17Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [84.34227665232281]
Domain adaptation for semantic segmentation aims to improve model performance in the presence of a distribution shift between the source and target domains.
We leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap.
We demonstrate the effectiveness of our proposed approach on the benchmark tasks SYNTHIA-to-Cityscapes and GTA-to-Cityscapes.
arXiv Detail & Related papers (2021-04-28T07:47:36Z)