On Generalization of Graph Autoencoders with Adversarial Training
- URL: http://arxiv.org/abs/2107.02658v1
- Date: Tue, 6 Jul 2021 14:53:19 GMT
- Title: On Generalization of Graph Autoencoders with Adversarial Training
- Authors: Tianjin Huang, Yulong Pei, Vlado Menkovski and Mykola Pechenizkiy
- Abstract summary: Adversarial training is an approach for increasing a model's resilience against adversarial perturbations.
We formulate L2 and L1 versions of adversarial training in two powerful node embedding methods.
We demonstrate that both L2 and L1 adversarial training boost the generalization of GAE and VGAE.
- Score: 8.608288231153304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training is an approach for increasing a model's resilience
against adversarial perturbations. Such approaches have been demonstrated to result in
models with feature representations that generalize better. However, little
work has been done on adversarial training of models on graph data. In this
paper, we raise the question: does adversarial training improve the
generalization of graph representations? We formulate L2 and L1 versions of
adversarial training in two powerful node embedding methods: graph autoencoder
(GAE) and variational graph autoencoder (VGAE). We conduct extensive
experiments on three main applications of GAE and VGAE, i.e., link prediction,
node clustering, and graph anomaly detection, and demonstrate that both L2 and L1
adversarial training boost the generalization of GAE and VGAE.
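To make the setup concrete, here is a minimal sketch of the L2 variant for a plain GAE: a feature perturbation is chosen to maximize the reconstruction loss under a per-node L2 budget, and the model is then trained on the perturbed input. The one-layer encoder, the single-step (FGM-style) attack, and the toy data are illustrative assumptions, not the paper's exact formulation; an L1 version would swap the L2 projection for an L1 one.

```python
# Minimal sketch of L2 adversarial training for a graph autoencoder.
# The one-layer encoder, single-step attack, and toy data are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GAE(nn.Module):
    """One-layer GCN encoder with an inner-product decoder."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, emb_dim, bias=False)

    def forward(self, x, a_norm):
        z = torch.relu(a_norm @ self.lin(x))  # GCN propagation
        return torch.sigmoid(z @ z.t())       # reconstructed adjacency

def recon_loss(a_hat, a):
    return F.binary_cross_entropy(a_hat, a)

def l2_perturbation(model, x, a_norm, a, eps=0.1):
    """Single-step feature perturbation maximizing the loss, L2-bounded per node."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = recon_loss(model(x + delta, a_norm), a)
    grad, = torch.autograd.grad(loss, delta)
    norm = grad.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
    return eps * grad / norm                  # ascent step on the L2 ball

# Toy graph: random symmetric adjacency with self-loop normalization.
n, d = 50, 16
a = (torch.rand(n, n) < 0.1).float()
a = ((a + a.t()) > 0).float()
a.fill_diagonal_(0.0)
a_tilde = a + torch.eye(n)
d_inv_sqrt = a_tilde.sum(1).pow(-0.5)
a_norm = d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]
x = torch.randn(n, d)

model = GAE(d, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(100):
    delta = l2_perturbation(model, x, a_norm, a)
    loss = recon_loss(model(x + delta.detach(), a_norm), a)
    opt.zero_grad()
    loss.backward()
    opt.step()
```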
Related papers
- Uncovering Capabilities of Model Pruning in Graph Contrastive Learning [0.0]
We reformulate the problem of graph contrastive learning via contrasting different model versions rather than augmented views.
We extensively validate our method on various benchmarks regarding graph classification via unsupervised and transfer learning.
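As an illustration of contrasting model versions instead of augmented views, a hedged sketch: a magnitude-pruned copy of the encoder provides the second "view", and the two embeddings of the same batch are aligned with an InfoNCE loss. The pruning scheme and the loss are assumptions, not the paper's exact design.

```python
# Hedged sketch: contrast embeddings from a model and its pruned copy.
import copy
import torch
import torch.nn.functional as F

def magnitude_prune(model, ratio=0.3):
    """Return a copy of `model` with the smallest-magnitude weights zeroed."""
    pruned = copy.deepcopy(model)
    for p in pruned.parameters():
        k = int(p.numel() * ratio)
        if k == 0:
            continue
        thresh = p.abs().flatten().kthvalue(k).values
        p.data[p.abs() <= thresh] = 0.0
    return pruned

def info_nce(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau         # cosine similarities
    labels = torch.arange(z1.size(0))  # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Usage: embeddings of the same graphs under the full and pruned encoder
# are treated as two "views"; no data augmentation is needed, e.g.
# loss = info_nce(encoder(batch), magnitude_prune(encoder)(batch))
```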
arXiv Detail & Related papers (2024-10-27T07:09:31Z)
- Do We Really Need Graph Convolution During Training? Light Post-Training Graph-ODE for Efficient Recommendation [34.93725892725111]
The efficiency and scalability of graph convolution networks (GCNs) in training recommender systems (RecSys) have been persistent concerns.
This paper presents a critical examination of the necessity of graph convolutions during the training phase.
We introduce an innovative alternative: the Light Post-Training Graph Ordinary-Differential-Equation (LightGODE)
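One way to read "post-training graph ODE" is sketched below: embeddings are learned without any graph convolution, and a diffusion ODE over the normalized adjacency is integrated once after training. The Euler scheme, step count, and diffusion dynamics are assumptions, not LightGODE's exact design.

```python
# Hedged sketch of a post-training graph ODE: train embeddings without
# graph convolution, then diffuse them over the normalized adjacency.
import torch

def post_training_graph_ode(emb, a_norm, t=1.0, steps=10):
    """Integrate dE/dt = (A_norm - I) E with forward Euler."""
    h = t / steps
    e = emb.clone()
    for _ in range(steps):
        e = e + h * (a_norm @ e - e)
    return e

# Usage: `emb` would be embeddings learned with a convolution-free
# objective (e.g., matrix factorization); the diffusion runs once,
# after training, so it adds no per-epoch cost.
```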
arXiv Detail & Related papers (2024-07-26T17:59:32Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models must estimate an initial graph beforehand in order to apply a GCN.
The Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
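A common prerequisite for such models is the initial graph estimate itself. A hedged sketch of one standard choice, a cosine-similarity kNN graph (the similarity measure and symmetrization are assumptions):

```python
# Hedged sketch: estimate an initial kNN graph from raw features so a
# GCN can be applied to non-graph data.
import torch
import torch.nn.functional as F

def knn_graph(x, k=10):
    x = F.normalize(x, dim=1)            # cosine similarity via dot product
    sim = x @ x.t()
    sim.fill_diagonal_(-float('inf'))    # exclude self-edges
    idx = sim.topk(k, dim=1).indices     # k nearest neighbors per sample
    a = torch.zeros_like(sim)
    a.scatter_(1, idx, 1.0)
    return ((a + a.t()) > 0).float()     # symmetrize
```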
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
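A hedged sketch of the reweighting idea: sample weights are learned to shrink correlations between feature dimensions and then reweight the task loss. The linear decorrelation penalty below is a simplification; the paper's method is nonlinear.

```python
# Hedged sketch of decorrelation-based sample reweighting.
import torch

def decorrelation_loss(feats, log_w):
    """Penalty on off-diagonal entries of the weighted feature covariance."""
    w = torch.softmax(log_w, dim=0)               # positive weights, sum to 1
    mean = (w[:, None] * feats).sum(0)
    centered = feats - mean
    cov = (w[:, None] * centered).t() @ centered  # weighted covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

# Usage: alternate between optimizing `log_w` against this penalty and
# training the GNN with a loss reweighted by softmax(log_w).
```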
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Contributions to Representation Learning with Graph Autoencoders and Applications to Music Recommendation [1.2691047660244335]
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) emerged as powerful groups of unsupervised node embedding methods.
At the beginning of this Ph.D. project, GAE and VGAE models suffered from key limitations that prevented them from being adopted in industry.
We present several contributions to improve these models, with the general aim of facilitating their use to address industrial-level problems involving graph representations.
arXiv Detail & Related papers (2022-05-29T13:14:53Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Dynamic Graph Representation Learning via Graph Transformer Networks [41.570839291138114]
We propose a Transformer-based dynamic graph learning method named Dynamic Graph Transformer (DGT)
DGT has spatial-temporal encoding to effectively learn graph topology and capture implicit links.
We show that DGT presents superior performance compared with several state-of-the-art baselines.
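A hedged sketch of the spatial-temporal encoding idea: event timestamps are embedded and added to node states before standard multi-head attention. The dimensions and the bucketed time embedding are assumptions, not DGT's exact encoding.

```python
# Hedged sketch of transformer attention over a dynamic graph.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, dim, heads=4, num_time_buckets=64):
        super().__init__()
        self.time_emb = nn.Embedding(num_time_buckets, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, time_bucket):
        # x: (batch, nodes, dim); time_bucket: (batch, nodes) int64
        h = x + self.time_emb(time_bucket)  # spatial-temporal encoding
        out, _ = self.attn(h, h, h)         # self-attention over nodes
        return out
```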
arXiv Detail & Related papers (2021-11-19T21:44:23Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
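A hedged sketch of generator-based negative sampling: an MLP maps noise to fake neighbor embeddings that act as hard negatives in a margin ranking loss. The architecture and loss are assumptions, not AGE's exact formulation.

```python
# Hedged sketch: a generator produces fake neighbor embeddings as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborGenerator(nn.Module):
    def __init__(self, noise_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim))

    def forward(self, n):
        # Sample n fake neighbor embeddings from noise.
        return self.net(torch.randn(n, self.net[0].in_features))

def ranking_loss(anchor, pos, fake_neg, margin=1.0):
    """Pull real neighbors closer than generated fakes by a margin."""
    d_pos = (anchor - pos).pow(2).sum(1)
    d_neg = (anchor - fake_neg).pow(2).sum(1)
    return F.relu(d_pos - d_neg + margin).mean()
```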
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD)
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
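A hedged sketch of the teacher-student loop: the teacher is an exponential moving average (EMA) of the student, and the student matches the teacher's embedding of an augmented view. The EMA rate, the cosine loss, and the `augment` routine are assumptions.

```python
# Hedged sketch of teacher-student self-distillation with an EMA teacher.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    """Move teacher parameters toward the student's (exponential moving average)."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

def distill_loss(student_z, teacher_z):
    """Cosine-based matching of student and (stop-gradient) teacher embeddings."""
    return 2 - 2 * F.cosine_similarity(student_z, teacher_z.detach(), dim=1).mean()

# Usage per step (teacher initialized as a deep copy of the student;
# `augment` is a hypothetical graph-augmentation routine):
# z_s, z_t = student(augment(g)), teacher(augment(g))
# loss = distill_loss(z_s, z_t); loss.backward(); opt.step()
# ema_update(teacher, student)
```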
arXiv Detail & Related papers (2020-10-23T18:37:06Z)