Learning Good Features to Transfer Across Tasks and Domains
- URL: http://arxiv.org/abs/2301.11310v1
- Date: Thu, 26 Jan 2023 18:49:39 GMT
- Title: Learning Good Features to Transfer Across Tasks and Domains
- Authors: Pierluigi Zama Ramirez, Adriano Cardace, Luca De Luigi, Alessio
Tonioni, Samuele Salti, Luigi Di Stefano
- Abstract summary: We first show that such knowledge can be shared across tasks by learning a mapping between task-specific deep features in a given domain.
Then, we show that this mapping function, implemented by a neural network, is able to generalize to novel unseen domains.
- Score: 16.05821129333396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Availability of labelled data is the major obstacle to the deployment of deep
learning algorithms for computer vision tasks in new domains. The fact that
many frameworks adopted to solve different tasks share the same architecture
suggests that there should be a way of reusing the knowledge learned in a
specific setting to solve novel tasks with limited or no additional
supervision. In this work, we first show that such knowledge can be shared
across tasks by learning a mapping between task-specific deep features in a
given domain. Then, we show that this mapping function, implemented by a neural
network, is able to generalize to novel, unseen domains. In addition, we propose
a set of strategies to constrain the learned feature spaces, easing learning and
increasing the generalization capability of the mapping network, thereby
considerably improving the final performance of our framework. Our proposal
obtains compelling results in challenging synthetic-to-real adaptation
scenarios by transferring knowledge between monocular depth estimation and
semantic segmentation tasks.
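The core idea of the abstract, learning a mapping between task-specific feature spaces on a source domain and reusing it on a target domain, can be illustrated in miniature. The sketch below is a hypothetical toy, not the authors' implementation: it fits a linear map G from "depth" features to "segmentation" features using paired source-domain samples, via plain gradient descent on a mean-squared error. All dimensions, data, and names are illustrative assumptions.

```python
import random

random.seed(0)

D = 4  # feature dimensionality (hypothetical)

# Ground-truth relation between the two feature spaces, used only to
# synthesize toy data: seg_feat = A @ depth_feat.
A = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)]

def apply(M, x):
    """Matrix-vector product for D x D list-of-lists matrices."""
    return [sum(M[i][j] * x[j] for j in range(D)) for i in range(D)]

# Source-domain pairs of (depth features, segmentation features).
source = []
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in range(D)]
    source.append((x, apply(A, x)))

def mse(M):
    """Mean squared error of the mapping M over the source pairs."""
    return sum(
        sum((p - t) ** 2 for p, t in zip(apply(M, x), y))
        for x, y in source
    ) / len(source)

# Learnable linear mapping G, trained with gradient descent on MSE.
G = [[0.0] * D for _ in range(D)]
lr = 0.1
loss_before = mse(G)
for _ in range(300):
    grad = [[0.0] * D for _ in range(D)]
    for x, y in source:
        pred = apply(G, x)
        for i in range(D):
            err = 2 * (pred[i] - y[i]) / len(source)
            for j in range(D):
                grad[i][j] += err * x[j]
    for i in range(D):
        for j in range(D):
            G[i][j] -= lr * grad[i][j]
loss_after = mse(G)

# Once trained, G maps a previously unseen depth feature to an
# approximation of the corresponding segmentation feature.
print(loss_before, loss_after)
```

In the paper the mapping is a deep network between CNN feature maps rather than a linear map between vectors, and the interesting result is that it transfers to unseen domains; the toy above only demonstrates the fitting step on one domain.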
Related papers
- Prompt-Based Spatio-Temporal Graph Transfer Learning [22.855189872649376]
We propose a prompt-based framework capable of adapting to multi-diverse tasks in a data-scarce domain.
We employ learnable prompts to achieve domain and task transfer in a two-stage pipeline.
Our experiments demonstrate that STGP outperforms state-of-the-art baselines in three tasks (forecasting, kriging, and extrapolation), achieving an improvement of up to 10.7%.
arXiv Detail & Related papers (2024-05-21T02:06:40Z) - Proto-Value Networks: Scaling Representation Learning with Auxiliary
Tasks [33.98624423578388]
Auxiliary tasks improve representations learned by deep reinforcement learning agents.
We derive a new family of auxiliary tasks based on the successor measure.
We show that proto-value networks produce rich features that may be used to obtain performance comparable to established algorithms.
arXiv Detail & Related papers (2023-04-25T04:25:08Z) - Learning with Style: Continual Semantic Segmentation Across Tasks and
Domains [25.137859989323537]
Domain adaptation and class incremental learning deal with domain and task variability separately, whereas their unified solution is still an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
arXiv Detail & Related papers (2022-10-13T13:24:34Z) - Fast Inference and Transfer of Compositional Task Structures for
Few-shot Task Generalization [101.72755769194677]
We formulate it as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data does not follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Counting with Adaptive Auxiliary Learning [23.715818463425503]
This paper proposes an adaptive auxiliary task learning based approach for object counting problems.
We develop an attention-enhanced adaptively shared backbone network to enable both task-shared and task-tailored features learning.
Our method achieves superior performance to the state-of-the-art auxiliary task learning based counting methods.
arXiv Detail & Related papers (2022-03-08T13:10:17Z) - Learning to Relate Depth and Semantics for Unsupervised Domain
Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z) - Explainability-aided Domain Generalization for Image Classification [0.0]
We show that applying methods and architectures from the explainability literature can achieve state-of-the-art performance for the challenging task of domain generalization.
We develop a set of novel algorithms including DivCAM, an approach where the network receives guidance during training via gradient based class activation maps to focus on a diverse set of discriminative features.
Since these methods offer competitive performance on top of explainability, we argue that the proposed methods can be used as a tool to improve the robustness of deep neural network architectures.
arXiv Detail & Related papers (2021-04-05T02:27:01Z) - Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
arXiv Detail & Related papers (2020-06-22T19:35:07Z) - Dynamic Feature Integration for Simultaneous Detection of Salient
Object, Edge and Skeleton [108.01007935498104]
In this paper, we solve three low-level pixel-wise vision problems: salient object segmentation, edge detection, and skeleton extraction.
We first show some similarities shared by these tasks and then demonstrate how they can be leveraged for developing a unified framework.
arXiv Detail & Related papers (2020-04-18T11:10:11Z) - Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.