Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer
- URL: http://arxiv.org/abs/2308.09499v1
- Date: Fri, 18 Aug 2023 12:14:51 GMT
- Title: Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer
- Authors: Wendong Bi, Xueqi Cheng, Bingbing Xu, Xiaoqian Sun, Li Xu, Huawei Shen
- Abstract summary: Graph Neural Networks (GNNs) aggregate information from neighboring nodes.
Knowledge Bridge Learning (KBL) learns a knowledge-enhanced posterior distribution for target domains.
Bridged-GNN includes an Adaptive Knowledge Retrieval module to build the Bridged-Graph and a Graph Knowledge Transfer module.
- Score: 65.42096702428347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The data-hungry problem, characterized by insufficient and low-quality
data, poses obstacles for deep learning models. Transfer learning has been a
feasible way to transfer knowledge from high-quality external data of source
domains to the limited data of target domains, following a domain-level
knowledge transfer paradigm that learns a shared posterior distribution. However, such
methods are usually built on strong assumptions, e.g., a domain-invariant posterior
distribution, which is often violated and may introduce noise, resulting
in poor generalization on target domains. Inspired by Graph Neural
Networks (GNNs) that aggregate information from neighboring nodes, we redefine
the paradigm as learning a knowledge-enhanced posterior distribution for target
domains, namely Knowledge Bridge Learning (KBL). KBL first learns the scope of
knowledge transfer by constructing a Bridged-Graph that connects knowledgeable
samples to each target sample and then performs sample-wise knowledge transfer
via GNNs. KBL is free from strong assumptions and is robust to noise in the
source data. Guided by KBL, we propose Bridged-GNN, which includes an Adaptive
Knowledge Retrieval module to build the Bridged-Graph and a Graph Knowledge
Transfer module. Comprehensive experiments on both non-relational and relational
data-hungry scenarios demonstrate the significant improvements of Bridged-GNN
over state-of-the-art methods.
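
To make the two-step KBL pipeline above more concrete, here is a minimal sketch: it retrieves the k most similar source ("knowledgeable") samples for each target sample in a shared embedding space to form a Bridged-Graph, then performs sample-wise knowledge transfer with one weighted-aggregation GNN-style layer. The encoder, cosine-similarity retrieval, choice of k, and single message-passing step are all assumptions made here for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class BridgedGraphSketch(nn.Module):
    """Illustrative sketch of Knowledge Bridge Learning (KBL):
    (1) retrieve knowledgeable source samples per target sample (Bridged-Graph),
    (2) aggregate their representations with a GNN-style message-passing step.
    All design choices (encoder, cosine retrieval, mean aggregation) are
    placeholders, not the paper's exact architecture."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int, k: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.message = nn.Linear(hid_dim, hid_dim)      # transform retrieved source messages
        self.update = nn.Linear(2 * hid_dim, hid_dim)   # combine self + aggregated messages
        self.classifier = nn.Linear(hid_dim, n_classes)
        self.k = k

    def build_bridged_graph(self, h_tgt, h_src):
        # Adaptive knowledge retrieval (simplified): connect each target sample
        # to its k most similar source samples by cosine similarity.
        sim = F.normalize(h_tgt, dim=-1) @ F.normalize(h_src, dim=-1).T  # [n_tgt, n_src]
        topk = sim.topk(self.k, dim=-1)
        return topk.indices, torch.softmax(topk.values, dim=-1)  # neighbors, edge weights

    def forward(self, x_tgt, x_src):
        h_tgt, h_src = self.encoder(x_tgt), self.encoder(x_src)
        nbr_idx, nbr_w = self.build_bridged_graph(h_tgt, h_src)
        # Graph knowledge transfer: weighted aggregation over retrieved source samples.
        msgs = self.message(h_src)[nbr_idx]                # [n_tgt, k, hid_dim]
        agg = (nbr_w.unsqueeze(-1) * msgs).sum(dim=1)      # [n_tgt, hid_dim]
        h = torch.relu(self.update(torch.cat([h_tgt, agg], dim=-1)))
        return self.classifier(h)

# Toy usage: 200 source samples, 32 target samples, 16 input features, 3 classes.
model = BridgedGraphSketch(in_dim=16, hid_dim=32, n_classes=3, k=5)
logits = model(torch.randn(32, 16), torch.randn(200, 16))
print(logits.shape)  # torch.Size([32, 3])
```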
Related papers
- Federated Graph Learning for Cross-Domain Recommendation [33.33321213257222]
Cross-domain recommendation (CDR) offers a promising solution to the data sparsity problem by enabling knowledge transfer across source and target domains.
We propose FedGCDR, a novel graph learning framework that securely and effectively leverages positive knowledge from multiple source domains.
We conduct extensive experiments on 16 popular domains of the Amazon dataset, demonstrating that FedGCDR significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-10-10T12:19:51Z)
- Adapting to Distribution Shift by Visual Domain Prompt Generation [34.19066857066073]
We adapt a model at test time using a few unlabeled samples to address distribution shifts.
We build a knowledge bank to learn the transferable knowledge from source domains.
The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
arXiv Detail & Related papers (2024-05-05T02:44:04Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Learning State-Augmented Policies for Information Routing in Communication Networks [92.59624401684083]
We develop a novel State Augmentation (SA) strategy to maximize the aggregate information at source nodes using graph neural network (GNN) architectures.
We leverage an unsupervised learning procedure to convert the output of the GNN architecture to optimal information routing strategies.
In the experiments, we perform the evaluation on real-time network topologies to validate our algorithms.
arXiv Detail & Related papers (2023-09-30T04:34:25Z)
- Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model generated pseudo labels.
arXiv Detail & Related papers (2022-07-18T18:41:36Z)
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
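
To make the "confident neighborhood discovery" idea from the TraND entry above more concrete, the sketch below connects each unlabeled embedding to its nearest neighbors in the latent space only when cosine similarity exceeds a threshold, treating those pairs as confident. The function name, k, and threshold tau are assumptions for illustration only; this is not TraND's actual training procedure.

```python
import torch
import torch.nn.functional as F

def confident_neighbors(z: torch.Tensor, k: int = 3, tau: float = 0.8):
    """Illustrative neighborhood discovery in a latent space (assumption-based
    sketch, not the TraND implementation): for each unlabeled embedding, keep
    its top-k nearest neighbors whose cosine similarity exceeds threshold tau.
    Returns a list of (anchor, neighbor) index pairs treated as confident."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T
    sim.fill_diagonal_(-1.0)                 # exclude trivial self-matches
    values, indices = sim.topk(k, dim=-1)
    return [(i, j.item())
            for i in range(z.size(0))
            for j, v in zip(indices[i], values[i])
            if v.item() >= tau]

# Toy usage: 10 unlabeled gait embeddings of dimension 8.
pairs = confident_neighbors(torch.randn(10, 8), k=3, tau=0.8)
print(len(pairs), "confident (anchor, neighbor) pairs")
```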