HAT-GAE: Self-Supervised Graph Auto-encoders with Hierarchical Adaptive Masking and Trainable Corruption
- URL: http://arxiv.org/abs/2301.12063v1
- Date: Sat, 28 Jan 2023 02:43:54 GMT
- Title: HAT-GAE: Self-Supervised Graph Auto-encoders with Hierarchical Adaptive Masking and Trainable Corruption
- Authors: Chengyu Sun
- Abstract summary: We propose a novel auto-encoder model for graph representation learning.
Our model incorporates a hierarchical adaptive masking mechanism to incrementally increase the difficulty of training.
We demonstrate the superiority of our proposed method over state-of-the-art graph representation learning models.
- Score: 0.76146285961466
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Self-supervised auto-encoders have emerged as a successful framework for representation learning in computer vision and natural language processing in recent years. However, their application to graph data has yielded limited performance, owing to the non-Euclidean and complex structure of graphs compared with images or text, as well as the limitations of conventional auto-encoder architectures. In this paper, we investigate the factors impacting the performance of auto-encoders on graph data and propose a novel auto-encoder model for graph representation learning. Our model incorporates a hierarchical adaptive masking mechanism that incrementally increases the difficulty of training, mimicking the process of human cognitive learning, and a trainable corruption scheme that enhances the robustness of the learned representations. Through extensive experiments on ten benchmark datasets, we demonstrate the superiority of our proposed method over state-of-the-art graph representation learning models.
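To make the abstract's two mechanisms concrete, here is a minimal PyTorch sketch of a masked graph auto-encoder with a masking rate that ramps up over training (a curriculum-style stand-in for hierarchical adaptive masking) and a corruption vector learned by backpropagation. Everything below, including the class name, the linear schedule, and the one-layer encoder, is an illustrative assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class MaskedGraphAutoEncoder(nn.Module):
    """Toy masked graph auto-encoder: curriculum masking rate plus a
    trainable corruption vector (names are illustrative, not the paper's)."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)  # stand-in for a GNN encoder
        self.decoder = nn.Linear(hid_dim, in_dim)
        # Trainable corruption: masked node features are replaced by this
        # learned vector, so the injected noise is optimized end to end.
        self.corruption = nn.Parameter(torch.zeros(in_dim))

    @staticmethod
    def mask_rate(step: int, total: int, lo: float = 0.1, hi: float = 0.7) -> float:
        # Difficulty ramps up as training progresses (assumed linear schedule).
        return lo + (hi - lo) * min(step / max(total, 1), 1.0)

    def forward(self, x: torch.Tensor, adj: torch.Tensor, step: int, total: int):
        rate = self.mask_rate(step, total)
        mask = torch.rand(x.size(0), device=x.device) < rate  # nodes to hide
        x_corrupt = x.clone()
        x_corrupt[mask] = self.corruption                      # learned corruption
        h = torch.relu(adj @ self.encoder(x_corrupt))          # one-hop propagation
        x_rec = self.decoder(h)
        # Reconstruction loss is computed only on the masked nodes.
        if mask.any():
            return ((x_rec[mask] - x[mask]) ** 2).mean()
        return x_rec.new_zeros(())


# Usage on dummy data:
model = MaskedGraphAutoEncoder(in_dim=16, hid_dim=32)
x, adj = torch.randn(100, 16), torch.eye(100)
loss = model(x, adj, step=10, total=100)
loss.backward()
```

The design point mirrored here is that the corruption applied to masked nodes is a parameter optimized jointly with the encoder, rather than fixed random noise.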
Related papers
- Incremental Learning with Concept Drift Detection and Prototype-based Embeddings for Graph Stream Classification [11.811637154674939]
This work introduces a novel method for graph stream classification.
It operates under the general setting where a data generating process produces graphs with varying nodes and edges over time.
It incorporates a loss-based concept drift detection mechanism to recalculate graph prototypes when drift is detected.
arXiv Detail & Related papers (2024-04-03T08:47:32Z)
- Variational Graph Auto-Encoder Based Inductive Learning Method for Semi-Supervised Classification [10.497590357666114]
We propose the Self-Label Augmented VGAE model for inductive graph representation learning.
To leverage the label information for training, our model takes node labels as one-hot encoded inputs and then performs label reconstruction in model training.
Our proposed model achieves promising results on node classification, with particular superiority under semi-supervised learning settings.
arXiv Detail & Related papers (2024-03-26T08:59:37Z)
- CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations [61.21923643289266]
Chain of Manipulations is a mechanism that enables Vision-Language Models to solve problems step-by-step with evidence.
After training, models can solve various visual problems by actively eliciting intrinsic manipulations (e.g., grounding, zooming in) without involving external tools.
Our trained model, CogCoM, achieves state-of-the-art performance across 9 benchmarks from 4 categories.
arXiv Detail & Related papers (2024-02-06T18:43:48Z)
- Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z)
- Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
arXiv Detail & Related papers (2022-11-23T19:27:48Z)
- Graph-based Neural Modules to Inspect Attention-based Architectures: A Position Paper [0.0]
Encoder-decoder models offer an exciting opportunity for humans to visualize and edit the knowledge implicitly represented in model weights.
In this work, we explore ways to create an abstraction for segments of the network as a two-way graph-based representation.
Such a two-way graph representation enables new neuro-symbolic systems by leveraging the pattern-recognition capabilities of the encoder-decoder along with symbolic reasoning carried out on the graphs.
arXiv Detail & Related papers (2022-10-13T15:52:12Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates the issues of generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a minimal sketch of such an objective appears after this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Adaptive Graph Auto-Encoder for General Data Clustering [90.8576971748142]
Graph-based clustering plays an important role in the clustering area.
Recent studies on graph convolutional neural networks have achieved impressive success on graph-structured data.
We propose a graph auto-encoder for general data clustering that constructs the graph adaptively according to the generative perspective of graphs.
arXiv Detail & Related papers (2020-02-20T10:11:28Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
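The "Towards Unsupervised Deep Graph Structure Learning" entry above describes maximizing agreement between embeddings computed on a fixed anchor view and on the learned topology. Below is a minimal sketch of one common way to instantiate such an objective, an InfoNCE/NT-Xent contrastive loss over node embeddings; the function name and the exact loss form are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F


def contrastive_agreement(z_anchor: torch.Tensor,
                          z_learned: torch.Tensor,
                          temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style agreement: node i embedded on the anchor graph should
    match node i embedded on the learned graph and repel all other nodes."""
    za = F.normalize(z_anchor, dim=1)
    zl = F.normalize(z_learned, dim=1)
    logits = za @ zl.t() / temperature            # [N, N] cosine similarities
    targets = torch.arange(za.size(0), device=za.device)
    # Symmetrize the loss so both views act as queries.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Minimizing this loss pulls matching node pairs across the two views together while pushing all mismatched pairs apart, which is the "maximize agreement" step described in that entry.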