Self-Supervised Learning Across Domains
- URL: http://arxiv.org/abs/2007.12368v2
- Date: Wed, 31 Mar 2021 13:51:53 GMT
- Title: Self-Supervised Learning Across Domains
- Authors: Silvia Bucci, Antonio D'Innocente, Yujun Liao, Fabio Maria Carlucci,
Barbara Caputo, Tatiana Tommasi
- Abstract summary: We propose to apply a similar approach to the problem of object recognition across domains.
Our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals on the same images.
This secondary task helps the network to focus on object shapes, learning concepts like spatial orientation and part correlation, while acting as a regularizer for the classification task.
- Score: 33.86614301708017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human adaptability relies crucially on learning and merging knowledge from
both supervised and unsupervised tasks: parents point out a few important
concepts, but the children then fill in the gaps on their own. This is
particularly effective because supervised learning can never be exhaustive, and
learning autonomously thus allows us to discover invariances and regularities
that help to generalize. In this paper we propose to apply a similar approach to the
problem of object recognition across domains: our model learns the semantic
labels in a supervised fashion, and broadens its understanding of the data by
learning from self-supervised signals on the same images. This secondary task
helps the network to focus on object shapes, learning concepts like spatial
orientation and part correlation, while acting as a regularizer for the
classification task over multiple visual domains. Extensive experiments confirm
our intuition and show that our multi-task method combining supervised and
self-supervised knowledge shows competitive results with respect to more
complex domain generalization and adaptation solutions. It also proves its
potential in the novel and challenging predictive and partial domain adaptation
scenarios.
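The multi-task objective described above can be sketched as a weighted sum of a supervised cross-entropy on the semantic labels and a cross-entropy on self-supervised labels (e.g., which rotation or patch permutation was applied to the image). This is a minimal NumPy illustration; the function names and the weight `alpha` are assumptions for exposition, and the paper's exact auxiliary task and weighting scheme are defined in the original work.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy of a single example given raw logits and an integer label."""
    shifted = logits - logits.max()  # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def multi_task_loss(class_logits, class_label, ss_logits, ss_label, alpha=0.7):
    """Supervised objective plus a weighted self-supervised auxiliary objective.

    Both heads would share a feature extractor in the full model; here we only
    combine their losses: L = L_cls + alpha * L_self_supervised.
    """
    supervised = softmax_cross_entropy(class_logits, class_label)
    self_supervised = softmax_cross_entropy(ss_logits, ss_label)
    return supervised + alpha * self_supervised
```

With uniform logits over 2 classes and 4 self-supervised labels, the two terms reduce to ln 2 and ln 4 respectively, so the combined loss is easy to check by hand.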
Related papers
- Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity [61.05259660910437]
We propose a global consistency and complementarity network (CoCoNet) to learn representations from multiple views.
On the global stage, we observe that crucial knowledge is implicitly shared among views, and that enhancing the encoder to capture such knowledge improves the discriminability of the learned representations.
On the local stage, we propose a complementarity factor that joins cross-view discriminative knowledge and guides the encoders to learn not only view-wise discriminability but also cross-view complementary information.
arXiv Detail & Related papers (2022-09-16T09:24:00Z)
- Alignment Attention by Matching Key and Query Distributions [48.93793773929006]
This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head.
It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention.
On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks.
arXiv Detail & Related papers (2021-10-25T00:54:57Z)
- Domain-Robust Visual Imitation Learning with Mutual Information Constraints [0.0]
We introduce a new algorithm called Disentangling Generative Adversarial Imitation Learning (DisentanGAIL).
Our algorithm enables autonomous agents to learn directly from high dimensional observations of an expert performing a task.
arXiv Detail & Related papers (2021-03-08T21:18:58Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results competitive with supervised approaches and, in most cases, closes the remaining gap by fine-tuning the network while learning the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z)
- Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [95.73898853032865]
We present a new domain generalization framework that learns simultaneously from extrinsic and intrinsic supervision to generalize across domains.
We demonstrate the effectiveness of our approach on two standard object recognition benchmarks.
arXiv Detail & Related papers (2020-07-18T03:12:24Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
- Improving out-of-distribution generalization via multi-task self-supervised pretraining [48.29123326140466]
We show that features obtained using self-supervised learning are comparable to, or better than, supervised learning for domain generalization in computer vision.
We introduce a new self-supervised pretext task of predicting responses to Gabor filter banks.
arXiv Detail & Related papers (2020-03-30T14:55:53Z)
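The Gabor-filter pretext task in the last entry can be sketched as follows: build an oriented Gabor kernel, filter the image at several orientations, and pool the responses into regression targets for the network to predict. This NumPy sketch uses illustrative kernel parameters and mean-absolute-response pooling as assumptions; the paper's actual filter bank and target construction differ in detail.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, wavelength=4.0, sigma=2.0):
    """Real part of a Gabor filter: a sinusoidal carrier windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + y_rot**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength)
    return envelope * carrier

def filter_bank_targets(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Pretext-task regression targets: mean absolute response per orientation."""
    targets = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        h, w = image.shape
        kh, kw = k.shape
        # valid cross-correlation via explicit sliding windows (no SciPy dependency)
        resp = np.array([[np.sum(image[i:i + kh, j:j + kw] * k)
                          for j in range(w - kw + 1)]
                         for i in range(h - kh + 1)])
        targets.append(np.abs(resp).mean())
    return np.array(targets)
```

A vertical grating whose wavelength matches the carrier responds most strongly to the theta = 0 filter and only weakly to the orthogonal one, which is the kind of orientation-sensitive signal the pretext task asks the network to predict.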
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.