Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation
- URL: http://arxiv.org/abs/2204.05104v2
- Date: Mon, 15 Jan 2024 10:48:19 GMT
- Title: Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation
- Authors: Jin Yuan, Feng Hou, Yangzhou Du, Zhongchao Shi, Xin Geng, Jianping
Fan, Yong Rui
- Abstract summary: Domain adaptation (DA) tackles scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
- Score: 51.21190751266442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) tackles scenarios in which the test data
does not fully follow the same distribution as the training data, and
multi-source domain adaptation (MSDA) is particularly attractive for
real-world applications. By learning from large-scale unlabeled samples,
self-supervised learning has become a new trend in deep learning. It is worth
noting that self-supervised learning and multi-source domain adaptation share
a similar goal: both aim to leverage unlabeled data to learn more expressive
representations. Unfortunately, traditional multi-task self-supervised
learning faces two challenges: (1) the pretext task may not be strongly
related to the downstream task, making it difficult to transfer useful
knowledge from the pretext task to the target task; (2) when the pretext and
downstream tasks share the same feature extractor and differ only in their
prediction heads, inter-task information exchange and knowledge sharing are
ineffective. To address these issues, we propose a novel Self-Supervised
Graph Neural Network (SSG), in which a graph neural network serves as a
bridge to enable more effective inter-task information exchange and knowledge
sharing. A more expressive representation is learned by adopting a mask-token
strategy that masks out domain information. Extensive experiments demonstrate
that our proposed SSG method achieves state-of-the-art results on four
multi-source domain adaptation datasets, confirming its effectiveness from
different aspects.
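The abstract names two mechanisms: a GNN that bridges the pretext and downstream tasks, and a mask-token strategy applied to domain information. Below is a minimal sketch of one plausible reading of both pieces; all class names, tensor shapes, and the masking rate are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GNNBridge(nn.Module):
    """One round of message passing over a fully connected task graph."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, nodes):            # nodes: (num_tasks, B, D)
        T, B, D = nodes.shape
        agg = torch.stack([              # mean of messages from the other tasks
            self.msg(nodes[[j for j in range(T) if j != i]].mean(0))
            for i in range(T)
        ])
        out = self.upd(agg.reshape(T * B, D), nodes.reshape(T * B, D))
        return out.reshape(T, B, D)

class MaskedDomainEmbedding(nn.Module):
    """Domain embedding whose rows are randomly replaced by a mask token."""
    def __init__(self, num_domains, dim, p_mask=0.3):
        super().__init__()
        self.emb = nn.Embedding(num_domains, dim)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.p_mask = p_mask

    def forward(self, domain_ids):       # domain_ids: (B,)
        e = self.emb(domain_ids)
        if self.training:                # hide domain identity at random
            drop = torch.rand(e.size(0), device=e.device) < self.p_mask
            e = torch.where(drop.unsqueeze(1), self.mask_token.expand_as(e), e)
        return e
```

In this reading, `GNNBridge` is applied to per-task features stacked as graph nodes, so each task's representation is refined with messages from the others, while `MaskedDomainEmbedding` hides which source domain a sample came from, pushing the representation toward domain-agnostic features.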
Related papers
- Prompt-Based Spatio-Temporal Graph Transfer Learning [22.855189872649376]
We propose a prompt-based framework capable of adapting to diverse tasks in a data-scarce domain.
We employ learnable prompts to achieve domain and task transfer in a two-stage pipeline.
Our experiments demonstrate that STGP outperforms state-of-the-art baselines on three tasks (forecasting, kriging, and extrapolation), achieving an improvement of up to 10.7%.
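As a rough illustration of learnable prompts over a frozen backbone (the prompt count, injection point, and token interface are assumptions, not STGP's actual design):

```python
import torch
import torch.nn as nn

class PromptedBackbone(nn.Module):
    """Frozen backbone plus a few learnable prompt tokens."""
    def __init__(self, backbone, feat_dim, num_prompts=4):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)      # only prompts (and a head) are tuned
        self.prompts = nn.Parameter(torch.empty(num_prompts, feat_dim))
        nn.init.normal_(self.prompts, std=0.02)

    def forward(self, x):                # x: (B, N, D) node/patch features
        B = x.size(0)
        p = self.prompts.unsqueeze(0).expand(B, -1, -1)
        # assumes the backbone accepts a variable number of input tokens
        return self.backbone(torch.cat([p, x], dim=1))
```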
arXiv Detail & Related papers (2024-05-21T02:06:40Z)
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
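A minimal sketch of a Fourier-transform-based adapter in this spirit; the sigmoid gate over rfft2 frequency bins is our assumption, not the 4Ds implementation:

```python
import torch
import torch.nn as nn
import torch.fft

class FourierAdapter(nn.Module):
    """Learnable frequency gate splitting features into invariant/specific."""
    def __init__(self, channels, height, width):
        super().__init__()
        # rfft2 keeps W // 2 + 1 bins along the last spatial dimension
        self.gate = nn.Parameter(torch.zeros(channels, height, width // 2 + 1))

    def forward(self, x):                # x: (B, C, H, W) feature map
        freq = torch.fft.rfft2(x, norm="ortho")
        g = torch.sigmoid(self.gate)     # soft frequency mask in (0, 1)
        invariant = torch.fft.irfft2(freq * g, s=x.shape[-2:], norm="ortho")
        specific = x - invariant         # residual, domain-specific part
        return invariant, specific
```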
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can succeed with classification tasks that have few, or even non-overlapping, annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
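One simple form such a distribution-matching loss could take (assuming both heads share a label space and that one task's head teaches the other; the paper's actual objective may differ):

```python
import torch.nn.functional as F

def distribution_matching_loss(logits_a, logits_b):
    """KL between batch-averaged prediction distributions of two task heads."""
    p_a = F.softmax(logits_a, dim=1).mean(0)   # (C,) marginal for task A
    p_b = F.softmax(logits_b, dim=1).mean(0)   # (C,) marginal for task B
    # F.kl_div expects log-probabilities as input, probabilities as target;
    # detaching the target makes task A the teacher in this direction
    return F.kl_div(p_b.log(), p_a.detach(), reduction="sum")
```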
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training that injects task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of k-nearest neighbors.
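A sketch of how a k-nearest-neighbor group could drive pre-training (a contrastive reading we assume for illustration; ULTRA-DP's objective may differ):

```python
import torch
import torch.nn.functional as F

def knn_contrastive_loss(z, k=5, tau=0.2):
    """Treat each node's k nearest neighbours in embedding space as positives."""
    z = F.normalize(z, dim=1)               # (N, D) node embeddings
    sim = z @ z.t() / tau                   # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))       # exclude trivial self-matches
    knn = sim.topk(k, dim=1).indices        # the k-NN "group" of each node
    log_p = F.log_softmax(sim, dim=1)       # contrastive normalisation
    return -log_p.gather(1, knn).mean()     # pull each group together
```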
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- Unsupervised Domain Adaptation on Person Re-Identification via Dual-level Asymmetric Mutual Learning [108.86940401125649]
This paper proposes a Dual-level Asymmetric Mutual Learning method (DAML) to learn discriminative representations from a broader knowledge scope with diverse embedding spaces.
The knowledge transfer between the two networks follows an asymmetric mutual learning scheme.
Experiments on the Market-1501, CUHK-SYSU, and MSMT17 public datasets verify the superiority of DAML over state-of-the-art methods.
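An illustrative form of asymmetric mutual learning between two networks; the specific pairing of a distillation direction with a similarity-structure direction is our assumption:

```python
import torch
import torch.nn.functional as F

def asymmetric_mutual_loss(logits_1, logits_2, feats_1, feats_2):
    """Two networks teach each other with a different loss in each direction."""
    # direction 1 -> 2: soft-label distillation on class predictions
    kd = F.kl_div(F.log_softmax(logits_2, dim=1),
                  F.softmax(logits_1.detach(), dim=1),
                  reduction="batchmean")
    # direction 2 -> 1: match pairwise similarity structure of embeddings
    s1 = F.normalize(feats_1, dim=1) @ F.normalize(feats_1, dim=1).t()
    s2 = F.normalize(feats_2, dim=1) @ F.normalize(feats_2, dim=1).t()
    structure = F.mse_loss(s1, s2.detach())
    return kd + structure
```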
arXiv Detail & Related papers (2023-01-29T12:36:17Z)
- Deep transfer learning for partial differential equations under conditional shift with DeepONet [0.0]
We propose a novel TL framework for task-specific learning under conditional shift with a deep operator network (DeepONet).
Inspired by the conditional embedding operator theory, we measure the statistical distance between the source domain and the target feature domain.
We show that the proposed TL framework enables fast and efficient multi-task operator learning, despite significant differences between the source and target domains.
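As an assumed stand-in for the conditional-embedding construction, a plain RBF-kernel MMD between source and target features shows the general shape of such a domain-distance term:

```python
import torch

def rbf_mmd(source_feats, target_feats, sigma=1.0):
    """Biased MMD estimate with an RBF kernel; inputs are (N, D) and (M, D)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)    # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (kernel(source_feats, source_feats).mean()
            + kernel(target_feats, target_feats).mean()
            - 2 * kernel(source_feats, target_feats).mean())
```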
arXiv Detail & Related papers (2022-04-20T23:23:38Z)
- Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts [20.764378638979704]
We propose a novel solution by leveraging the attention mechanism to adaptively squeeze suitable representations for the tasks.
Our scheme significantly outperforms current popular pretext-matching based methods in gathering knowledge.
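A minimal sketch of attention over representations from several frozen pretext encoders; the single learned query and the shapes are assumptions:

```python
import torch
import torch.nn as nn

class PretextAttentionFusion(nn.Module):
    """Learned query attends over representations from M pretext encoders."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))
        self.key = nn.Linear(dim, dim)

    def forward(self, feats):            # feats: (B, M, D), one row per pretext
        scores = self.key(feats) @ self.query      # (B, M) relevance scores
        weights = scores.softmax(dim=1).unsqueeze(2)
        return (weights * feats).sum(dim=1)        # (B, D) fused representation
```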
arXiv Detail & Related papers (2022-04-11T16:46:50Z)
- Counting with Adaptive Auxiliary Learning [23.715818463425503]
This paper proposes an adaptive auxiliary task learning based approach for object counting problems.
We develop an attention-enhanced adaptively shared backbone network to enable both task-shared and task-tailored features learning.
Our method achieves superior performance to the state-of-the-art auxiliary task learning based counting methods.
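One simple way to realize task-shared plus task-tailored features is a per-task channel gate over a shared feature map; this is an illustrative reading, not the paper's exact attention design:

```python
import torch
import torch.nn as nn

class TaskGatedFeatures(nn.Module):
    """Per-task channel gates over a shared backbone feature map."""
    def __init__(self, channels, num_tasks):
        super().__init__()
        self.gates = nn.Parameter(torch.zeros(num_tasks, channels))

    def forward(self, shared, task_id):  # shared: (B, C, H, W)
        g = torch.sigmoid(self.gates[task_id])     # (C,) gate values in (0, 1)
        return shared * g.view(1, -1, 1, 1)        # task-tailored feature view
```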
arXiv Detail & Related papers (2022-03-08T13:10:17Z)
- Adaptive Transfer Learning on Graph Neural Networks [4.233435459239147]
Graph neural networks (GNNs) are widely used to learn a powerful representation of graph-structured data.
Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks could further improve graph representation.
We propose a new transfer learning paradigm on GNNs which could effectively leverage self-supervised tasks as auxiliary tasks to help the target task.
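A common way to let auxiliary self-supervised tasks help a target task is adaptive loss weighting; the uncertainty-style weighting below is an assumed stand-in for the paper's scheme:

```python
import torch
import torch.nn as nn

class AdaptiveAuxWeighting(nn.Module):
    """Uncertainty-style weighting of auxiliary self-supervised losses."""
    def __init__(self, num_aux):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_aux))

    def forward(self, target_loss, aux_losses):    # aux_losses: list of scalars
        total = target_loss
        for s, loss in zip(self.log_vars, aux_losses):
            # exp(-s) down-weights auxiliary tasks the model finds noisy;
            # the +s term keeps the weights from collapsing to zero
            total = total + torch.exp(-s) * loss + s
        return total
```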
arXiv Detail & Related papers (2021-07-19T11:46:28Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
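A simplified sketch of the dual-head idea, where one branch classifies samples alone and the other aggregates features from similar samples via a one-layer graph convolution; the batch-level affinity graph is our assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualClassifierHead(nn.Module):
    """MLP head classifies samples alone; GCN head aggregates similar samples."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.mlp_head = nn.Linear(dim, num_classes)
        self.gcn_head = nn.Linear(dim, num_classes)

    def forward(self, feats):            # feats: (B, D) from mixed domains
        logits_mlp = self.mlp_head(feats)
        zn = F.normalize(feats, dim=1)
        adj = F.softmax(zn @ zn.t(), dim=1)        # batch affinity graph
        logits_gcn = self.gcn_head(adj @ feats)    # aggregate, then classify
        return logits_mlp, logits_gcn
```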
arXiv Detail & Related papers (2021-04-01T23:41:41Z)