Toward Improved Generalization: Meta Transfer of Self-supervised
Knowledge on Graphs
- URL: http://arxiv.org/abs/2212.08217v1
- Date: Fri, 16 Dec 2022 01:10:49 GMT
- Title: Toward Improved Generalization: Meta Transfer of Self-supervised
Knowledge on Graphs
- Authors: Wenhui Cui, Haleh Akrami, Anand A. Joshi, Richard M. Leahy
- Abstract summary: We propose a novel knowledge transfer strategy by integrating meta-learning with self-supervised learning.
Specifically, we perform a self-supervised task on the source domain and apply meta-learning, which strongly improves the generalizability of the model.
We demonstrate that the proposed strategy significantly improves target task performance by increasing the generalizability and transferability of graph-based knowledge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable success achieved by graph convolutional networks for
functional brain activity analysis, the heterogeneity of functional patterns
and the scarcity of imaging data still pose challenges in many tasks.
Transferring knowledge from a source domain with abundant training data to a
target domain is effective for improving representation learning on scarce
training data. However, traditional transfer learning methods often fail to
generalize the pre-trained knowledge to the target task due to domain
discrepancy. Self-supervised learning on graphs can increase the
generalizability of graph features since self-supervision concentrates on
inherent graph properties that are not limited to a particular supervised task.
We propose a novel knowledge transfer strategy by integrating meta-learning
with self-supervised learning to deal with the heterogeneity and scarcity of
fMRI data. Specifically, we perform a self-supervised task on the source domain
and apply meta-learning, which strongly improves the generalizability of the
model via bi-level optimization, to transfer the self-supervised
knowledge to the target domain. Through experiments on a neurological disorder
classification task, we demonstrate that the proposed strategy significantly
improves target task performance by increasing the generalizability and
transferability of graph-based knowledge.
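To make the strategy concrete, below is a minimal, hedged sketch of what a MAML-style bi-level update over a self-supervised pretext loss could look like. The `Encoder`, the masking-based `ssl_loss`, and the toy batches are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: bi-level (MAML-style) meta-learning over a self-supervised
# pretext loss, so the learned initialization transfers to a data-scarce
# target domain. All names here are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a graph encoder; a real model would use GCN layers."""
    def __init__(self, d_in=16, d_hid=32):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, d_in)

    def forward(self, x, params=None):
        if params is None:
            params = list(self.parameters())
        # Functional forward pass so adapted (fast) weights can be evaluated.
        h = F.relu(F.linear(x, params[0], params[1]))
        return F.linear(h, params[2], params[3])

def ssl_loss(model, x, params=None):
    """Self-supervised pretext task: reconstruct features under random masking."""
    mask = (torch.rand_like(x) > 0.5).float()
    return ((model(x * mask, params) - x) ** 2).mean()

model = Encoder()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

for step in range(100):
    support = torch.randn(8, 16)  # source-domain batch (toy stand-in features)
    query = torch.randn(8, 16)    # held-out batch simulating the target domain
    params = list(model.parameters())
    # Inner loop: adapt to the self-supervised task on the support set.
    grads = torch.autograd.grad(ssl_loss(model, support, params), params,
                                create_graph=True)
    fast = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer loop (the bi-level step): evaluate the adapted weights on the
    # query set and update the shared initialization through the inner
    # gradient, which is what encourages generalizable features.
    meta_opt.zero_grad()
    ssl_loss(model, query, fast).backward()
    meta_opt.step()
```

In the paper the inner and outer losses operate on source-domain and target-like fMRI graphs respectively; the toy tensors above merely stand in for graph features.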
Related papers
- Feature-based Graph Attention Networks Improve Online Continual Learning [19.557518080476566]
We present a novel online continual learning framework based on Graph Attention Networks (GATs).
GATs effectively capture contextual relationships and dynamically update the task-specific representation via learned attention weights.
In addition, we propose the rehearsal memory duplication technique that improves the representation of the previous tasks while maintaining the memory budget.
arXiv Detail & Related papers (2025-02-13T10:18:44Z)
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- MLDGG: Meta-Learning for Domain Generalization on Graphs [9.872254367103057]
Domain generalization on graphs aims to develop models that generalize robustly to unseen target domains.
Our framework, MLDGG, endeavors to achieve adaptable generalization across diverse domains by integrating cross-multi-domain meta-learning.
Our empirical results demonstrate that MLDGG surpasses baseline methods, showcasing its effectiveness in three different distribution shift settings.
arXiv Detail & Related papers (2024-11-19T22:57:38Z)
- Perturbation-based Graph Active Learning for Weakly-Supervised Belief Representation Learning [13.311498341765772]
The objective is to strategically identify valuable messages on social media graphs that are worth labeling within a constrained budget.
This paper proposes a graph data augmentation-inspired active learning strategy (PerbALGraph) that progressively selects messages for labeling.
arXiv Detail & Related papers (2024-10-24T22:11:06Z)
- Core Knowledge Learning Framework for Graph Adaptation and Scalability Learning [7.239264041183283]
Graph classification faces several hurdles, including adapting to diverse prediction tasks, training across multiple target domains, and handling small-sample prediction scenarios.
By incorporating insights from various types of tasks, our method aims to enhance adaptability, scalability, and generalizability in graph classification.
Experimental results demonstrate significant performance enhancements achieved by our method compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-02T02:16:43Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data (a worked form of this estimate is sketched after the list).
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data do not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Self-supervised Auxiliary Learning for Graph Neural Networks via Meta-Learning [16.847149163314462]
We propose a novel self-supervised auxiliary learning framework to effectively learn graph neural networks.
Our method learns to learn a primary task alongside various auxiliary tasks to improve generalization performance.
Our methods can be applied to any graph neural networks in a plug-in manner without manual labeling or additional data.
arXiv Detail & Related papers (2021-03-01T05:52:57Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
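To ground the last entry, here is a minimal, hedged sketch of mutual-information-style training between an encoder's input and output: a bilinear discriminator scores true (feature, embedding) pairs against shuffled negatives. The encoder, discriminator, and data are illustrative assumptions, not the exact GMI estimator from the paper.

```python
# Hedged sketch of maximizing a mutual-information-style score between
# encoder inputs and outputs (in the spirit of GMI, not its exact estimator):
# matched (feature, embedding) pairs are scored above shuffled pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_in, d_emb, n_nodes = 16, 32, 64
encoder = nn.Sequential(nn.Linear(d_in, d_emb), nn.ReLU())  # stand-in for a GNN
W = nn.Parameter(torch.randn(d_emb, d_in) * 0.01)           # bilinear discriminator
opt = torch.optim.Adam(list(encoder.parameters()) + [W], lr=1e-3)

for step in range(100):
    x = torch.randn(n_nodes, d_in)       # node input features (toy stand-in)
    h = encoder(x)                       # node embeddings
    pos = (h @ W * x).sum(dim=1)         # scores for matched pairs (h_i, x_i)
    neg = (h @ W * x[torch.randperm(n_nodes)]).sum(dim=1)  # shuffled negatives
    # Binary (Jensen-Shannon-style) objective: matched high, shuffled low.
    loss = (F.softplus(-pos) + F.softplus(neg)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```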
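For the GIF entry above, the parameter-change estimate it builds on has a compact classical form. The following is the standard influence-function approximation that GIF adapts to graph data, given as a sketch rather than GIF's exact graph-aware estimator:

```latex
% Classical influence-function estimate: removing (epsilon-upweighting) a
% training point z shifts the optimal parameters approximately by
\hat{\theta}_{\epsilon,z} - \hat{\theta}
  \;\approx\; -\,\epsilon\, H_{\hat{\theta}}^{-1}\, \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta}).
```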
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.