Handling Missing Data via Max-Entropy Regularized Graph Autoencoder
- URL: http://arxiv.org/abs/2211.16771v1
- Date: Wed, 30 Nov 2022 06:22:40 GMT
- Title: Handling Missing Data via Max-Entropy Regularized Graph Autoencoder
- Authors: Ziqi Gao, Yifan Niu, Jiashun Cheng, Jianheng Tang, Tingyang Xu, Peilin Zhao, Lanqing Li, Fugee Tsung, Jia Li
- Abstract summary: MEGAE is a regularized graph autoencoder for graph attribute imputation.
It aims at mitigating the spectral concentration problem by maximizing the graph spectral entropy.
It outperforms other state-of-the-art imputation methods on a variety of benchmark datasets.
- Score: 37.8103274049137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are popular tools for modeling relational
data. Existing GNNs are not designed for attribute-incomplete graphs, which makes
missing-attribute imputation a pressing issue. Recently, many works have observed
that GNNs suffer from spectral concentration: the spectrum obtained by GNNs
concentrates on a narrow band of the spectral domain, e.g., the low-frequency
band, due to the oversmoothing issue. As a consequence, GNNs can be seriously
flawed at reconstructing graph attributes, since spectral concentration tends to
lower imputation precision. In this work, we present a regularized graph
autoencoder for graph attribute imputation, named MEGAE, which mitigates the
spectral concentration problem by maximizing the graph spectral entropy. Notably,
we first present a method for estimating graph spectral entropy without
eigendecomposition of the Laplacian matrix and provide a theoretical upper bound
on the estimation error. A maximum entropy regularization then acts in the latent
space, directly increasing the graph spectral entropy. Extensive experiments show
that MEGAE outperforms state-of-the-art imputation methods on a variety of
benchmark datasets.
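For concreteness, the sketch below (an illustration on our part, not code from the paper) computes the quantity MEGAE maximizes: the Shannon entropy of a graph signal's normalized spectral energy distribution under the symmetric normalized Laplacian. It uses a full eigendecomposition for clarity; the paper's actual contribution is an estimator that avoids this eigendecomposition, which the sketch does not reproduce. The function name, toy graph, and the weight `lam` in the schematic objective are hypothetical.

```python
import numpy as np

def graph_spectral_entropy(adj: np.ndarray, x: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of the spectral energy distribution of signal x (n x d).

    Reference computation via full eigendecomposition of the symmetric
    normalized Laplacian; MEGAE instead estimates this quantity without
    the eigendecomposition, which this sketch does not reproduce.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)            # columns form the graph Fourier basis
    x_hat = eigvecs.T @ x                       # graph Fourier transform of the signal
    energy = (x_hat ** 2).sum(axis=1)           # energy carried by each frequency
    p = energy / (energy.sum() + eps)           # normalized spectral energy distribution
    return float(-(p * np.log(p + eps)).sum())  # maximal when energy spreads across frequencies

# Toy usage: a 4-node path graph with random 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(4, 3))
print(graph_spectral_entropy(adj, x))

# Schematic training objective (per the abstract, the regularizer acts on the
# latent node embeddings z of the autoencoder; lam > 0 is a hypothetical weight):
#   loss = reconstruction_loss(decoder(z), x_observed) - lam * graph_spectral_entropy(adj, z)
```

Note that the entropy is largest when spectral energy is spread uniformly across frequencies, which is why maximizing it counteracts the spectral concentration described above.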
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with graph size on a logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- Graph Distillation with Eigenbasis Matching [43.59076214528843]
We propose Graph Distillation with Eigenbasis Matching (GDEM) to replace the real large graph.
GDEM aligns the eigenbasis and node features of real and synthetic graphs.
It directly replicates the spectrum of the real graph, thus avoiding the influence of the GNNs used during distillation.
arXiv Detail & Related papers (2023-10-13T15:48:12Z) - Advective Diffusion Transformers for Topological Generalization in Graph
Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- A Spectral Analysis of Graph Neural Networks on Dense and Sparse Graphs [13.954735096637298]
We analyze how sparsity affects the graph spectra, and thus the performance of graph neural networks (GNNs) in node classification on dense and sparse graphs.
We show that GNNs can outperform spectral methods on sparse graphs, and illustrate these results with numerical examples on both synthetic and real graphs.
arXiv Detail & Related papers (2022-11-06T22:38:13Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- Beyond Low-Pass Filters: Adaptive Feature Propagation on Graphs [6.018995094882323]
Graph neural networks (GNNs) have been extensively studied for prediction tasks on graphs.
Most GNNs assume local homophily, i.e., strong similarities in local neighborhoods.
We propose a flexible GNN model that is capable of handling any graph without being restricted by its underlying homophily.
arXiv Detail & Related papers (2021-03-26T00:35:36Z)
- Graph Networks with Spectral Message Passing [1.0742675209112622]
We introduce the Spectral Graph Network, which applies message passing to both the spatial and spectral domains.
Our results show that the Spectral GN promotes efficient training, reaching high performance with fewer training iterations despite having more parameters.
arXiv Detail & Related papers (2020-12-31T21:33:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.