Deep Graph Reprogramming
- URL: http://arxiv.org/abs/2304.14593v1
- Date: Fri, 28 Apr 2023 02:04:29 GMT
- Title: Deep Graph Reprogramming
- Authors: Yongcheng Jing, Chongbin Yuan, Li Ju, Yiding Yang, Xinchao Wang,
Dacheng Tao
- Abstract summary: "Deep graph reprogramming" is a model-reusing task tailored for graph neural networks (GNNs).
We propose an innovative Data Reprogramming paradigm alongside a Model Reprogramming paradigm.
- Score: 112.34663053130073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore a novel model-reusing task tailored for graph
neural networks (GNNs), termed "deep graph reprogramming". We strive to
reprogram a pre-trained GNN, without amending raw node features or model
parameters, to handle a range of cross-level downstream tasks in various
domains. To this end, we propose an innovative Data Reprogramming paradigm
alongside a Model Reprogramming paradigm. The former addresses the challenge
of diversified graph feature dimensions for various tasks on the input side,
while the latter alleviates the dilemma of fixed per-task-per-model behavior
on the model side. For data reprogramming, we devise a dedicated
Meta-FeatPadding method to handle heterogeneous input dimensions, and also
develop a transductive Edge-Slimming as well as an inductive Meta-GraPadding
approach for diverse homogeneous samples. Meanwhile, for model reprogramming,
we propose a novel task-adaptive Reprogrammable-Aggregator that endows the
frozen model with greater expressive capacity for handling cross-domain tasks.
Experiments on fourteen datasets across node/graph classification/regression,
3D object recognition, and distributed action recognition demonstrate that the
proposed methods yield results on par with re-training from scratch.
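The abstract's core data-reprogramming idea, padding downstream features so that a task with a different feature dimensionality fits the frozen pre-trained GNN, can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' implementation: the `frozen_gnn` placeholder, the dimensions, and the zero-initialized learnable padding are all assumptions made for exposition.

```python
import torch
import torch.nn as nn

class MetaFeatPadding(nn.Module):
    """Minimal sketch: learn a padding vector so that downstream node
    features of dimension d_task match the input dimension d_pre expected
    by a frozen pre-trained GNN. Only the padding is trainable."""

    def __init__(self, d_task: int, d_pre: int):
        super().__init__()
        assert d_task <= d_pre, "downstream features must fit the pre-trained dim"
        # Learnable padding appended to every node's raw features,
        # which themselves are left untouched.
        self.pad = nn.Parameter(torch.zeros(d_pre - d_task))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, d_task] -> [num_nodes, d_pre]
        return torch.cat([x, self.pad.expand(x.size(0), -1)], dim=-1)

# Usage sketch (the GNN stays frozen; the names below are placeholders):
# for p in frozen_gnn.parameters():
#     p.requires_grad_(False)
# out = frozen_gnn(padder(x_task), edge_index)
padder = MetaFeatPadding(d_task=16, d_pre=64)
print(padder(torch.randn(5, 16)).shape)  # torch.Size([5, 64])
```

Under this scheme only the padding (plus any lightweight task head) receives gradients, consistent with the paper's constraint of amending neither the raw node features nor the pre-trained parameters.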
Related papers
- Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA) model, which provides analytic conditional posteriors to improve inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
arXiv Detail & Related papers (2024-10-13T02:22:14Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks [10.473178462412584]
We propose a novel explanation method, XAIG-R, to interpret graph regression models.
Our method addresses the distribution-shift problem and the challenge of continuously ordered decision boundaries.
We present a self-supervised learning strategy to tackle the continuously ordered labels in regression tasks.
arXiv Detail & Related papers (2023-07-15T16:16:22Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Text Representation Enrichment Utilizing Graph based Approaches: Stock Market Technical Analysis Case Study [0.0]
We propose a transductive hybrid approach composed of an unsupervised node representation learning model followed by a node classification/edge prediction model.
The proposed model is developed to classify stock market technical analysis reports, which to our knowledge is the first work in this domain.
arXiv Detail & Related papers (2022-11-29T11:26:08Z)
- Hierarchical Graph-Convolutional Variational AutoEncoding for Generative Modelling of Human Motion [1.2599533416395767]
Models of human motion commonly focus either on trajectory prediction or action classification but rarely both.
Here we propose a novel architecture based on hierarchical variational autoencoders and deep graph convolutional neural networks for generating a holistic model of action over multiple time-scales.
We show this Hierarchical Graph-convolutional Variational Autoencoder (HG-VAE) to be capable of generating coherent actions, detecting out-of-distribution data, and imputing missing data by gradient ascent on the model's posterior.
arXiv Detail & Related papers (2021-11-24T16:21:07Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Gradients as Features for Deep Representation Learning [26.996104074384263]
We address the problem of deep representation learning: the efficient adaptation of a pre-trained deep network to different tasks.
Our key innovation is the design of a linear model that incorporates both gradient and activation of the pre-trained network.
We present an efficient algorithm for the training and inference of our model without computing the actual gradient (a generic gradients-as-features sketch follows this list).
arXiv Detail & Related papers (2020-04-12T02:57:28Z)
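For the "Gradients as Features" entry above, a generic sketch of the underlying idea, building per-example features from both activations and last-layer gradients and fitting a linear model on top, is given below. The two-layer backbone, the explicit autograd loop, and the feature layout are illustrative assumptions; the paper itself derives a linear model that avoids computing the actual gradient.

```python
import torch
import torch.nn as nn

# Hypothetical frozen backbone standing in for any pre-trained network;
# the sizes and architecture are assumptions for illustration only.
torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
for p in backbone.parameters():
    p.requires_grad_(False)

def gradient_features(x: torch.Tensor) -> torch.Tensor:
    """Per-example feature = [penultimate activation, d(output)/d(last-layer W)].
    Gradients are taken with autograd for clarity; the per-example loop is
    deliberately explicit at the cost of efficiency."""
    last = backbone[-1]  # linear head whose weight gradient is used as a feature
    feats = []
    for xi in x:
        last.weight.requires_grad_(True)
        act = backbone[:-1](xi.unsqueeze(0))   # penultimate activation, [1, 16]
        out = last(act).sum()                  # scalar network output
        (g,) = torch.autograd.grad(out, last.weight)
        last.weight.requires_grad_(False)
        feats.append(torch.cat([act.flatten(), g.flatten()]).detach())
    return torch.stack(feats)

x = torch.randn(32, 8)
phi = gradient_features(x)        # [32, 16 + 16]
head = nn.Linear(phi.size(1), 2)  # linear model trained on the fixed features
logits = head(phi)
```

For a linear head, the weight gradient is just the outer product of the output gradient and the penultimate activation, which suggests why a closed form without explicit autograd is feasible.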
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.