Composite Learning for Robust and Effective Dense Predictions
- URL: http://arxiv.org/abs/2210.07239v1
- Date: Thu, 13 Oct 2022 17:59:16 GMT
- Title: Composite Learning for Robust and Effective Dense Predictions
- Authors: Menelaos Kanakis, Thomas E. Huang, David Bruggemann, Fisher Yu, Luc
Van Gool
- Abstract summary: Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
- Score: 81.2055761433725
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Multi-task learning promises better model generalization on a target task by
jointly optimizing it with an auxiliary task. However, the current practice
requires additional labeling efforts for the auxiliary task, while not
guaranteeing better model performance. In this paper, we find that jointly
training a dense prediction (target) task with a self-supervised (auxiliary)
task can consistently improve the performance of the target task, while
eliminating the need for labeling auxiliary tasks. We refer to this joint
training as Composite Learning (CompL). Experiments of CompL on monocular depth
estimation, semantic segmentation, and boundary detection show consistent
performance improvements in fully and partially labeled datasets. Further
analysis on depth estimation reveals that joint training with self-supervision
outperforms most labeled auxiliary tasks. We also find that CompL can improve
model robustness when the models are evaluated in new domains. These results
demonstrate the benefits of self-supervision as an auxiliary task, and
establish the design of novel task-specific self-supervised methods as a new
axis of investigation for future multi-task learning research.
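To make the joint objective concrete, below is a minimal PyTorch-style sketch of CompL-style training: a shared encoder feeds a dense target head (depth) and a self-supervised auxiliary head (rotation prediction), and the two losses are combined with a weight. The toy architecture, the choice of rotation as the auxiliary task, and the weight `lambda_aux` are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of joint target + self-supervised training (not the
# authors' exact implementation): a shared encoder with a dense-prediction
# head (depth) and a self-supervised head (rotation prediction).
import torch
import torch.nn as nn

class CompositeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # shared backbone (toy)
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 1)      # dense target task
        self.rot_head = nn.Sequential(             # self-supervised auxiliary
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 4),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.depth_head(feats), self.rot_head(feats)

def composite_loss(model, images, depth_gt, lambda_aux=0.1):
    # Supervised target loss on the original images.
    depth_pred, _ = model(images)
    target_loss = nn.functional.l1_loss(depth_pred, depth_gt)

    # Self-supervised auxiliary loss: classify which of four rotations was
    # applied (assumes square images so rotated shapes match for stacking).
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    _, rot_logits = model(rotated)
    aux_loss = nn.functional.cross_entropy(rot_logits, k)

    return target_loss + lambda_aux * aux_loss     # composite objective
```

Both losses backpropagate through the shared encoder, so the label-free auxiliary signal regularizes the target features; in practice the auxiliary weight would be tuned per task.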
Related papers
- Functional Knowledge Transfer with Self-supervised Representation
Learning [11.566644244783305]
This work investigates the largely unexplored use of self-supervised representation learning for functional knowledge transfer.
In this work, functional knowledge transfer is achieved by joint optimization of self-supervised learning pseudo task and supervised learning task.
arXiv Detail & Related papers (2023-03-12T21:14:59Z)
- ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning [59.08197876733052]
Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks.
Learning multiple tasks simultaneously sometimes results in lower accuracy than learning only the target task, a phenomenon known as negative transfer.
ForkMerge is a novel approach that periodically forks the model into multiple branches and automatically searches over varying task weights; a toy sketch of this fork-and-merge loop follows below.
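A toy sketch of the fork-and-merge idea, under our own simplifying assumptions (uniform branch training, parameter averaging weighted by validation loss); the helpers `train_step` and `val_loss` are hypothetical stand-ins, not the paper's API:

```python
# Toy sketch of a fork-and-merge weight search (our simplified reading,
# not the paper's exact procedure): train forked copies under different
# auxiliary-task weights, then merge them favoring lower validation loss.
import copy
import torch

def fork_train_merge(model, train_step, val_loss,
                     aux_weights=(0.0, 0.5, 1.0), steps=100):
    branches, scores = [], []
    for w in aux_weights:
        branch = copy.deepcopy(model)            # fork the current model
        for _ in range(steps):
            train_step(branch, aux_weight=w)     # branch-specific task weight
        branches.append(branch)
        scores.append(val_loss(branch))          # target-task validation loss

    # Merge: average parameters, weighting branches by softmax(-val loss).
    coef = torch.softmax(-torch.tensor(scores), dim=0)
    merged = copy.deepcopy(model)
    with torch.no_grad():
        for name, p in merged.named_parameters():
            p.copy_(sum(c * dict(b.named_parameters())[name]
                        for c, b in zip(coef, branches)))
    return merged
```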
arXiv Detail & Related papers (2023-01-30T02:27:02Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantic segmentation.
arXiv Detail & Related papers (2022-06-21T17:40:55Z)
- Boosting Supervised Learning Performance with Co-training [15.986635379046602]
We propose a new lightweight self-supervised learning framework that can boost supervised learning performance with minimal additional cost.
Our results show that both self-supervised tasks can improve the accuracy of the supervised task and, at the same time, demonstrate strong domain adaptation capability.
arXiv Detail & Related papers (2021-11-18T17:01:17Z)
- Adaptive Transfer Learning on Graph Neural Networks [4.233435459239147]
Graph neural networks (GNNs) are widely used to learn a powerful representation of graph-structured data.
Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks could further improve graph representation.
We propose a new transfer learning paradigm on GNNs that can effectively leverage self-supervised tasks as auxiliary tasks to help the target task.
arXiv Detail & Related papers (2021-07-19T11:46:28Z)
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, the base model is generalized to allow overlapping features and to differentiate the hard tasks; a rough sketch of the block-diagonal grouping idea follows after this entry.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
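As referenced in the TFCL entry above, here is a rough sketch of a block-diagonal structure penalty under our own assumptions: entries of a task-by-feature weight matrix that fall outside assumed task/feature groups are driven toward zero, encouraging collaborative grouping. The group assignments and the squared penalty form are illustrative, not TFCL's actual regularizer.

```python
# Rough sketch (our assumptions, not TFCL's exact regularizer): penalize
# entries of a task-by-feature weight matrix that fall outside assumed
# task/feature blocks, encouraging a block-diagonal grouping structure.
import torch

def block_diagonal_penalty(W, task_groups, feature_groups):
    """W: (num_tasks, num_features); groups map each row/column to a group id."""
    t = torch.as_tensor(task_groups).unsqueeze(1)      # (T, 1)
    f = torch.as_tensor(feature_groups).unsqueeze(0)   # (1, F)
    off_block = (t != f).float()                       # 1 outside the blocks
    return (W.pow(2) * off_block).sum()                # squared off-block mass

# Example: 4 tasks and 6 features, two groups each. Added to the task
# losses, this term drives cross-group weights toward zero.
W = torch.randn(4, 6, requires_grad=True)
penalty = block_diagonal_penalty(W, task_groups=[0, 0, 1, 1],
                                 feature_groups=[0, 0, 0, 1, 1, 1])
```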