Temporal Generalization Estimation in Evolving Graphs
- URL: http://arxiv.org/abs/2404.04969v1
- Date: Sun, 7 Apr 2024 14:19:22 GMT
- Title: Temporal Generalization Estimation in Evolving Graphs
- Authors: Bin Lu, Tingyan Ma, Xiaoying Gan, Xinbing Wang, Yunqiang Zhu, Chenghu Zhou, Shiyu Liang
- Abstract summary: Graph Neural Networks (GNNs) are widely deployed in vast fields, but they often struggle to maintain accurate representations as graphs evolve.
We theoretically establish a lower bound, proving that under mild conditions, representation distortion inevitably occurs over time.
We introduce Smart, a straightforward and effective baseline enhanced by an adaptive feature extractor through self-supervised graph reconstruction.
- Score: 35.859484484574956
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) are widely deployed in vast fields, but they often struggle to maintain accurate representations as graphs evolve. We theoretically establish a lower bound, proving that under mild conditions, representation distortion inevitably occurs over time. To estimate the temporal distortion without human annotation after deployment, one naive approach is to pre-train a recurrent model (e.g., RNN) before deployment and use this model afterwards, but the estimation is far from satisfactory. In this paper, we analyze the representation distortion from an information theory perspective, and attribute it primarily to inaccurate feature extraction during evolution. Consequently, we introduce Smart, a straightforward and effective baseline enhanced by an adaptive feature extractor through self-supervised graph reconstruction. In synthetic random graphs, we further refine the former lower bound to show the inevitable distortion over time and empirically observe that Smart achieves good estimation performance. Moreover, we observe that Smart consistently shows outstanding generalization estimation on four real-world evolving graphs. The ablation studies underscore the necessity of graph reconstruction. For example, on OGB-arXiv dataset, the estimation metric MAPE deteriorates from 2.19% to 8.00% without reconstruction.
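As a rough illustration of the idea in the abstract, here is a minimal PyTorch sketch of adapting a feature extractor with a self-supervised graph-reconstruction loss after deployment. It is not the authors' implementation: the dense GCN encoder, the inner-product decoder, and names such as `adapt_step` are illustrative assumptions.
```python
# Minimal sketch of adapting a feature extractor via self-supervised
# graph reconstruction (the core idea the abstract attributes to Smart).
# Dense toy implementation; all names are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).clamp(min=1.0).rsqrt()
        a_norm = d[:, None] * a * d[None, :]
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)

def reconstruction_loss(z, adj):
    # Inner-product decoder: predict edges from node embeddings.
    logits = z @ z.t()
    return F.binary_cross_entropy_with_logits(logits, adj)

def adapt_step(encoder, opt, x, adj):
    # One self-supervised update on a newly observed graph snapshot;
    # requires no labels after deployment.
    opt.zero_grad()
    loss = reconstruction_loss(encoder(x, adj), adj)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on a random snapshot.
n, d = 50, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()
enc = DenseGCNEncoder(d, 32)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
print(adapt_step(enc, opt, x, adj))
```
The point of the reconstruction objective is that it needs no human annotation, so the extractor can keep tracking the evolving graph after deployment.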
Related papers
- Detecting Anomalies in Dynamic Graphs via Memory enhanced Normality [39.476378833827184]
Anomaly detection in dynamic graphs presents a significant challenge due to the temporal evolution of graph structures and attributes.
We introduce a novel spatial-temporal memory-enhanced graph autoencoder (STRIPE).
STRIPE significantly outperforms existing methods, with a 5.8% improvement in AUC scores and 4.62x faster training.
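For intuition, a generic memory-enhanced autoencoder for anomaly scoring might look like the sketch below; the memory attention and all names are illustrative stand-ins, not STRIPE's actual spatial-temporal design.
```python
# Minimal sketch of a memory-enhanced autoencoder for anomaly scoring:
# the latent code is re-expressed as a soft combination of learned
# "normality" memory prototypes before decoding, so anomalous inputs
# reconstruct poorly. A generic stand-in, not STRIPE itself.
import torch
import torch.nn as nn

class MemoryAutoencoder(nn.Module):
    def __init__(self, in_dim=16, lat_dim=8, n_mem=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, lat_dim)
        self.dec = nn.Linear(lat_dim, in_dim)
        self.memory = nn.Parameter(torch.randn(n_mem, lat_dim))

    def forward(self, x):
        z = self.enc(x)
        # Retrieve from memory: attention over normality prototypes.
        attn = torch.softmax(z @ self.memory.t(), dim=1)
        return self.dec(attn @ self.memory)

model = MemoryAutoencoder()
x = torch.randn(10, 16)
score = ((model(x) - x) ** 2).mean(dim=1)  # higher = more anomalous
print(score)
```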
arXiv Detail & Related papers (2024-03-14T02:26:10Z)
- Investigating the Histogram Loss in Regression [16.83443393563771]
Histogram Loss is a regression approach to learning the conditional distribution of a target variable.
We show that the benefits of learning distributions in this setup come from improvements in optimization rather than modelling extra information.
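For concreteness, here is a minimal sketch of a histogram loss: the scalar target is smoothed into a distribution over fixed bins and the model is trained with cross-entropy against it (HL-Gaussian style). The bin range, `sigma`, and all names are illustrative.
```python
# Minimal sketch of a Histogram Loss for regression: replace the scalar
# target with a smoothed distribution over fixed bins and train with
# cross-entropy against that distribution (HL-Gaussian style).
import torch
import torch.nn.functional as F

def histogram_loss(logits, y, centers, sigma=0.1):
    # logits: (batch, n_bins); y: (batch,); centers: (n_bins,)
    # Target distribution: Gaussian around y, renormalized over bins.
    target = torch.exp(-0.5 * ((centers[None, :] - y[:, None]) / sigma) ** 2)
    target = target / target.sum(dim=1, keepdim=True)
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

centers = torch.linspace(-1.0, 1.0, steps=51)
logits = torch.randn(8, 51, requires_grad=True)
y = torch.rand(8) * 2 - 1
loss = histogram_loss(logits, y, centers)
loss.backward()
print(loss.item())
```
At test time a point prediction can be read out as the expectation `(softmax(logits) * centers).sum(dim=1)`.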
arXiv Detail & Related papers (2024-02-20T23:29:41Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yield up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
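A toy sketch of the coordination described above: an environment estimator produces soft per-node environment weights that gate a mixture-of-expert predictor. The mean-aggregation layer and all names are simplified stand-ins for the paper's GNN.
```python
# Minimal sketch: an environment estimator yields soft environment
# assignments per node, and a mixture-of-experts predictor is weighted
# by them. Dense toy layers stand in for the paper's architecture.
import torch
import torch.nn as nn

class MoEPredictor(nn.Module):
    def __init__(self, in_dim, n_classes, n_envs=3):
        super().__init__()
        self.env_estimator = nn.Linear(in_dim, n_envs)
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, n_classes) for _ in range(n_envs)]
        )

    def forward(self, x, adj):
        # One round of mean aggregation as a stand-in for a GNN layer.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = (adj @ x) / deg
        w = torch.softmax(self.env_estimator(h), dim=1)   # (n, n_envs)
        outs = torch.stack([e(h) for e in self.experts])  # (n_envs, n, c)
        return (w.t()[:, :, None] * outs).sum(dim=0)      # (n, c)

n, d = 20, 8
x, adj = torch.randn(n, d), (torch.rand(n, n) < 0.2).float()
print(MoEPredictor(d, 4)(x, adj).shape)  # torch.Size([20, 4])
```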
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
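The two-stage protocol can be sketched as follows; random feature masking stands in for the paper's learning-free anomaly-denoised augmentation, and a plain feature autoencoder stands in for its graph autoencoders.
```python
# Schematic of the two-stage protocol: pre-train an autoencoder on an
# augmented, anomaly-reduced view, then retrain only the decoder on the
# original data and score nodes by reconstruction error. Feature masking
# is a stand-in for ADA-GAD's augmentation; all names are illustrative.
import torch
import torch.nn as nn

def train(model, x, epochs=100, freeze_encoder=False):
    # When freeze_encoder is set, only the decoder (model[1]) is stepped.
    params = model[1].parameters() if freeze_encoder else model.parameters()
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(x) - x) ** 2).mean()
        loss.backward()
        opt.step()

x = torch.randn(100, 16)
model = nn.Sequential(nn.Linear(16, 4), nn.Linear(4, 16))  # encoder, decoder

# Stage 1: pre-train on a "denoised" view (here: random entries masked out).
x_denoised = x * (torch.rand_like(x) > 0.1).float()
train(model, x_denoised)

# Stage 2: retrain only the decoder on the original data, then score.
train(model, x, freeze_encoder=True)
anomaly_score = ((model(x) - x) ** 2).mean(dim=1)
print(anomaly_score.topk(5).indices)  # most anomalous nodes
```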
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- Temporal Graph Neural Networks for Irregular Data [14.653008985229615]
The TGNN4I model is designed to handle both irregular time steps and partial observations of the graph.
Its time-continuous dynamics enable the model to make predictions at arbitrary time steps.
Experiments on simulated data and real-world data from traffic and climate modeling validate the usefulness of both the graph structure and time-continuous dynamics.
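A minimal sketch of the time-continuous idea: decay the recurrent state between irregular observations so it can be queried at arbitrary times. The exponential decay below is a GRU-D-style stand-in, not TGNN4I's exact parameterization.
```python
# Minimal sketch of time-continuous latent dynamics for irregular time
# steps: the recurrent state decays continuously between observations,
# so predictions can be read out at arbitrary query times.
import torch
import torch.nn as nn

class ContinuousGRU(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.log_decay = nn.Parameter(torch.zeros(hid_dim))

    def evolve(self, h, dt):
        # Continuous-time evolution: exponential decay toward zero.
        return h * torch.exp(-torch.exp(self.log_decay) * dt)

    def forward(self, xs, ts):
        # xs: list of (batch, in_dim) observations at increasing times ts.
        h = torch.zeros(xs[0].size(0), self.log_decay.numel())
        t_prev = 0.0
        for x, t in zip(xs, ts):
            h = self.cell(x, self.evolve(h, t - t_prev))
            t_prev = t
        return h, t_prev

model = ContinuousGRU(4, 8)
xs = [torch.randn(2, 4) for _ in range(3)]
h, t_last = model(xs, [0.0, 0.7, 1.9])
h_future = model.evolve(h, 3.0 - t_last)  # query state at arbitrary t=3.0
print(h_future.shape)
```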
arXiv Detail & Related papers (2023-02-16T16:47:55Z)
- Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
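A minimal sketch of the attention idea: query points attend directly to the sparse set of observed tokens, so no error-propagating recurrence or bidirectional encoding is needed. The encodings and all names are illustrative.
```python
# Minimal sketch: queries for missing space/time points cross-attend to
# sparse observed tokens (value + space/time encoding). Illustrative
# stand-in, not the paper's architecture.
import torch
import torch.nn as nn

class SparseAttentionImputer(nn.Module):
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.enc_obs = nn.Linear(3, dim)    # (value, space_id, time) token
        self.enc_query = nn.Linear(2, dim)  # (space_id, time) query
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, obs, queries):
        k = self.enc_obs(obs)        # (batch, n_obs, dim)
        q = self.enc_query(queries)  # (batch, n_query, dim)
        h, _ = self.attn(q, k, k)
        return self.out(h).squeeze(-1)

obs = torch.randn(1, 40, 3)      # 40 sparse observations
queries = torch.randn(1, 10, 2)  # 10 missing points to reconstruct
print(SparseAttentionImputer()(obs, queries).shape)  # (1, 10)
```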
arXiv Detail & Related papers (2022-05-26T16:40:48Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
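Schematically, the bi-level loop alternates between fitting model weights on training nodes (inner level) and updating a relaxed adjacency on validation nodes (outer level). The single-step alternation and dense parameterization below are simplifications of the paper's scheme.
```python
# Schematic bi-level loop for learning graph structure: the inner problem
# fits the model weights on the training split; the outer problem updates
# a learnable relaxed adjacency against the validation loss.
import torch
import torch.nn.functional as F

n, d, c = 30, 8, 3
x = torch.randn(n, d)
y = torch.randint(0, c, (n,))
train_mask, val_mask = torch.arange(n) < 20, torch.arange(n) >= 20

# A low-rank factorization, adj_logits = u @ v.t(), would cut complexity
# in the spirit of the low-rank approximation mentioned above.
adj_logits = torch.randn(n, n, requires_grad=True)  # learnable structure
w = torch.randn(d, c, requires_grad=True)           # toy one-layer GNN

opt_inner = torch.optim.Adam([w], lr=1e-2)
opt_outer = torch.optim.Adam([adj_logits], lr=1e-2)

def predict():
    a = torch.sigmoid(adj_logits)  # dense relaxed adjacency
    return (a @ x) @ w             # one propagation step

for _ in range(100):
    # Inner level: fit model weights on the training split.
    opt_inner.zero_grad()
    F.cross_entropy(predict()[train_mask], y[train_mask]).backward()
    opt_inner.step()
    # Outer level: update the structure on the validation split.
    opt_outer.zero_grad()
    F.cross_entropy(predict()[val_mask], y[val_mask]).backward()
    opt_outer.step()
```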
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
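A much-simplified sketch of the idea: a policy gates per-sample gradient contributions and is trained with REINFORCE, using the change in validation loss as reward. This is a stand-in for illustration, not the paper's IGSGD implementation.
```python
# Schematic: a policy modulates which samples' gradients are applied,
# conditioned on each sample's missingness mask, and is trained with
# REINFORCE where the reward is the drop in validation loss.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
policy = nn.Linear(8, 1)  # missingness mask -> inclusion logit
opt_m = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)

x = torch.randn(64, 8)
mask = (torch.rand_like(x) > 0.3).float()  # 1 = observed, 0 = missing
x = x * mask                               # zeros instead of imputation
y = torch.randn(64, 1)
x_val, y_val = torch.randn(32, 8), torch.randn(32, 1)

def val_loss():
    with torch.no_grad():
        return ((model(x_val) - y_val) ** 2).mean()

for _ in range(50):
    # Sample which samples' gradients to keep, given their missingness.
    probs = torch.sigmoid(policy(mask))
    keep = torch.bernoulli(probs.detach())
    before = val_loss()
    opt_m.zero_grad()
    per_sample = (model(x) - y) ** 2
    (per_sample * keep).mean().backward()
    opt_m.step()
    # REINFORCE: reward the policy if the update reduced validation loss.
    reward = before - val_loss()
    log_prob = (keep * probs.clamp(min=1e-6).log()
                + (1 - keep) * (1 - probs).clamp(min=1e-6).log()).mean()
    opt_p.zero_grad()
    (-reward * log_prob).backward()
    opt_p.step()
```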
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Confidence Adaptive Anytime Pixel-Level Recognition [86.75784498879354]
Anytime inference requires a model to make a progression of predictions which might be halted at any time.
We propose the first unified and end-to-end approach for anytime pixel-level recognition.
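A minimal sketch of anytime inference with intermediate heads: the model can emit a prediction after every stage and halt once confidence is high enough (or the time budget expires). The architecture and threshold are illustrative, not the paper's design.
```python
# Minimal sketch of anytime inference: intermediate prediction heads let
# the model emit progressively refined outputs and halt early whenever
# the per-pixel confidence passes a threshold.
import torch
import torch.nn as nn

class AnytimeSegmenter(nn.Module):
    def __init__(self, ch=16, n_classes=5, n_stages=4):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        self.stages = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=1) for _ in range(n_stages)]
        )
        self.heads = nn.ModuleList(
            [nn.Conv2d(ch, n_classes, 1) for _ in range(n_stages)]
        )

    @torch.no_grad()
    def forward(self, img, confidence_threshold=0.9):
        h = self.stem(img)
        for stage, head in zip(self.stages, self.heads):
            h = torch.relu(stage(h))
            probs = torch.softmax(head(h), dim=1)
            pred = probs.argmax(dim=1)
            # Halt once mean per-pixel confidence is high enough.
            if probs.max(dim=1).values.mean() > confidence_threshold:
                break
        return pred

print(AnytimeSegmenter()(torch.randn(1, 3, 32, 32)).shape)  # (1, 32, 32)
```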
arXiv Detail & Related papers (2021-04-01T20:01:57Z)
- Residual Correlation in Graph Neural Network Regression [39.54530450932135]
We show that the conditional independence assumption severely limits predictive power.
We address this problem with an interpretable and efficient framework.
Our framework achieves substantially higher accuracy than competing baselines.
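To see why modeling residual correlation helps, here is a toy sketch that propagates observed training residuals over the graph to correct test predictions. This simple propagation is a stand-in for the paper's residual model, and ridge regression stands in for a GNN.
```python
# Minimal sketch of exploiting residual correlation: fit a base regressor,
# then propagate the training residuals over the graph to correct the
# predictions at correlated (neighboring) test nodes.
import torch

n, d = 100, 8
x, adj = torch.randn(n, d), (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
y = x @ torch.randn(d, 1) + 0.5 * torch.randn(n, 1)
train = torch.arange(n) < 70

# Base predictions (closed-form ridge regression stands in for a GNN).
w = torch.linalg.solve(
    x[train].t() @ x[train] + 0.1 * torch.eye(d), x[train].t() @ y[train]
)
pred = x @ w

# Propagate training residuals to neighboring test nodes.
resid = torch.zeros(n, 1)
resid[train] = y[train] - pred[train]
deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
for _ in range(10):
    resid = 0.5 * resid + 0.5 * (adj @ resid) / deg
    resid[train] = y[train] - pred[train]  # clamp known residuals

corrected = pred + resid
print(((corrected[~train] - y[~train]) ** 2).mean().item())  # test MSE
```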
arXiv Detail & Related papers (2020-02-19T16:32:54Z)