Siamese Attribute-missing Graph Auto-encoder
- URL: http://arxiv.org/abs/2112.04842v1
- Date: Thu, 9 Dec 2021 11:21:31 GMT
- Title: Siamese Attribute-missing Graph Auto-encoder
- Authors: Wenxuan Tu, Sihang Zhou, Yue Liu, Xinwang Liu
- Abstract summary: We propose Siamese Attribute-missing Graph Auto-encoder (SAGA)
First, we entangle the attribute embedding and structure embedding by introducing a siamese network structure to share the parameters learned by both processes.
Second, we introduce a K-nearest neighbor (KNN) and structural constraint enhanced learning mechanism to improve the quality of latent features of the missing attributes.
- Score: 35.79233150253881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning (GRL) on attribute-missing graphs, which is a
common yet challenging problem, has recently attracted considerable attention.
We observe that existing literature: 1) isolates the learning of attribute and
structure embeddings, thus failing to take full advantage of the two types of
information; 2) imposes overly strict distribution assumptions on the latent
variables, leading to less discriminative feature representations. In this
paper, based on the idea of introducing intimate information interaction
between the two information sources, we propose our Siamese Attribute-missing
Graph Auto-encoder (SAGA). Specifically, three strategies are adopted.
First, we entangle the attribute embedding and structure embedding by
introducing a siamese network structure to share the parameters learned by both
processes, which allows the network training to benefit from more abundant and
diverse information. Second, we introduce a K-nearest neighbor (KNN) and
structural constraint enhanced learning mechanism to improve the quality of
latent features of the missing attributes by filtering unreliable connections.
Third, we manually mask the connections on multiple adjacency matrices and force
the structural information embedding sub-network to recover the true adjacency
matrix, thus driving the resulting network to selectively exploit
more high-order discriminative features for data completion. Extensive
experiments on six benchmark datasets demonstrate the superiority of our SAGA
against the state-of-the-art methods.
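The abstract names three strategies but gives no concrete form. Below is a minimal PyTorch sketch of the general pattern as described, not the authors' implementation: one encoder whose weights are shared between the attribute view and the structure view (strategy one), a KNN filter over latent features standing in for the KNN and structural-constraint mechanism (strategy two), and recovery of the true adjacency matrix from a deliberately corrupted input graph (strategy three). All names, shapes, and simplifications are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj):
    """Symmetric normalization: D^(-1/2) (A + I) D^(-1/2)."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

def mask_edges(adj, drop_prob=0.2):
    """Strategy three (simplified): randomly hide edges in the input graph."""
    keep = (torch.rand_like(adj) > drop_prob).float()
    keep = torch.maximum(keep, keep.t())  # keep the corruption symmetric
    return adj * keep

def knn_filter(z, k=10):
    """Strategy two (simplified): keep only each node's k nearest latent
    neighbors, a crude stand-in for the paper's KNN plus
    structural-constraint filtering of unreliable connections."""
    zn = F.normalize(z, dim=1)
    sim = zn @ zn.t()
    topk = sim.topk(k + 1, dim=1).indices[:, 1:]      # drop self-similarity
    mask = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    return sim * mask

class SiameseGraphAE(nn.Module):
    """Strategy one: a GCN-style encoder whose weights are shared between
    the attribute view and the structure view (siamese sharing)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim, bias=False)  # shared weights

    def encode(self, x, adj_norm):
        return F.relu(adj_norm @ self.lin(x))

    def forward(self, x_attr, x_struct, adj_norm):
        return self.encode(x_attr, adj_norm), self.encode(x_struct, adj_norm)

# One hypothetical training step.
n, d, h = 100, 32, 16
adj = (torch.rand(n, n) < 0.05).float().triu(1)
adj = adj + adj.t()                   # symmetric 0/1 adjacency, no self-loops
x_attr = torch.randn(n, d)            # zero-filled where attributes are missing
x_struct = torch.randn(n, d)          # stand-in for a structure-derived view
model = SiameseGraphAE(d, h)

z_attr, z_struct = model(x_attr, x_struct, normalize_adj(mask_edges(adj)))
affinity = knn_filter(z_attr, k=10)   # reliable neighbors for imputing missing rows
adj_pred = torch.sigmoid(z_struct @ z_struct.t())
loss = F.binary_cross_entropy(adj_pred, adj)  # recover the *true* adjacency
loss.backward()
```

In the paper itself the imputation pipeline and structural constraints are more elaborate; the sketch only shows how parameter sharing, KNN filtering, and masked-adjacency recovery fit together in a single training step.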
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
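The summary says GSPT samples node contexts through random walks. A generic sketch of that sampling step follows (plain Python; the toy graph and walk parameters are illustrative, not GSPT's actual settings):

```python
import random

def random_walk_contexts(adj_list, walk_len=8, walks_per_node=4, seed=0):
    """Sample fixed-length random walks from every node; each walk is a
    node-context sequence a Transformer could consume as a token sequence."""
    rng = random.Random(seed)
    walks = []
    for start in adj_list:
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = adj_list[walk[-1]]
                if not nbrs:  # dangling node: end the walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy usage: a 4-node path graph 0-1-2-3.
adj_list = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(random_walk_contexts(adj_list, walk_len=5, walks_per_node=2)[:3])
```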
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Node Classification via Semantic-Structural Attention-Enhanced Graph Convolutional Networks [0.9463895540925061]
We introduce the semantic-structural attention-enhanced graph convolutional network (SSA-GCN)
It not only models the graph structure but also extracts generalized unsupervised features to enhance classification performance.
Our experiments on the Cora and CiteSeer datasets demonstrate the performance improvements achieved by our proposed method.
arXiv Detail & Related papers (2024-03-24T06:28:54Z)
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective jointly with feature reconstruction to capture holistic graph information.
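The "adaptive feature mask generator" accounts for per-node significance; the toy function below ties the mask probability to node degree purely to illustrate the shape of such a component. The degree heuristic is invented here; UGMAE learns its generator.

```python
import torch

def adaptive_feature_mask(x, adj, base_p=0.5):
    """Mask each node's features with a node-dependent probability: here,
    higher-degree nodes are masked more aggressively (an invented heuristic;
    UGMAE learns its mask generator rather than using degree)."""
    deg = adj.sum(1)
    p = base_p * deg / deg.max().clamp(min=1)         # per-node mask probability
    keep = (torch.rand(x.size(0)) >= p).float().unsqueeze(1)
    return x * keep, keep                             # masked features, keep mask

x_masked, keep = adaptive_feature_mask(torch.randn(6, 8), (torch.rand(6, 6) < 0.4).float())
```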
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results conducted on several graph benchmark datasets verify DGNN's superiority in node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Cell Attention Networks [25.72671436731666]
We introduce Cell Attention Networks (CANs), a neural architecture operating on data defined over the vertices of a graph.
CANs exploit the lower and upper neighborhoods, as encoded in the cell complex, to design two independent masked self-attention mechanisms.
The experimental results show that CAN is a low-complexity strategy that compares favorably with state-of-the-art results on graph-based learning tasks.
arXiv Detail & Related papers (2022-09-16T21:57:39Z)
- Adaptive Attribute and Structure Subspace Clustering Network [49.040136530379094]
We propose a novel self-expressiveness-based subspace clustering network.
We first consider an auto-encoder to represent input data samples.
Then, we construct a mixed signed and symmetric structure matrix to capture the local geometric structure underlying data.
We perform self-expressiveness on the constructed attribute and structure matrices to learn their affinity graphs.
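Self-expressiveness means reconstructing each latent sample as a linear combination of the other samples; the learned coefficient matrix then acts as an affinity graph. A minimal PyTorch version of that layer (hypothetical, not the paper's network):

```python
import torch
import torch.nn as nn

class SelfExpressive(nn.Module):
    """Learn C with zero diagonal such that Z ~ C Z; a symmetrized |C|
    then serves as the affinity matrix for spectral clustering."""
    def __init__(self, n):
        super().__init__()
        self.C = nn.Parameter(1e-4 * torch.randn(n, n))

    def forward(self, z, lam=1.0):
        c = self.C - torch.diag(torch.diag(self.C))  # forbid trivial self-expression
        loss = ((z - c @ z) ** 2).sum() + lam * (c ** 2).sum()
        return loss, c
```

After training, 0.5 * (|C| + |C|^T) is the usual choice of affinity matrix for the final spectral clustering step.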
arXiv Detail & Related papers (2021-09-28T14:00:57Z)
- Deep Attributed Network Representation Learning via Attribute Enhanced Neighborhood [10.954489956418191]
Attributed network representation learning aims at learning node embeddings by integrating network structure and attribute information.
It is a challenge to fully capture the microscopic structure and the attribute semantics simultaneously.
We propose a deep attributed network representation learning via attribute enhanced neighborhood (DANRL-ANE) model to improve the robustness and effectiveness of node representations.
arXiv Detail & Related papers (2021-04-12T07:03:16Z)
- GAGE: Geometry Preserving Attributed Graph Embeddings [34.25102483600248]
This paper presents a novel approach for node embedding in attributed networks.
It preserves the distances of both the connections and the attributes.
An effective and lightweight algorithm is developed to tackle the learning task.
arXiv Detail & Related papers (2020-11-03T02:07:02Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), the Graph Information Bottleneck (GIB) aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
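For reference, the generic IB template that GIB inherits, stated over graph data D = (A, X) with labels Y and representation Z (beta trades predictiveness against compression; the paper's variational bounds and local-dependence assumptions are not reproduced here):

```latex
\max_{p(Z \mid \mathcal{D})} \; I(Z; Y) - \beta \, I(Z; \mathcal{D}),
\qquad \mathcal{D} = (A, X)
```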
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)