Control the GNN: Utilizing Neural Controller with Lyapunov Stability for Test-Time Feature Reconstruction
- URL: http://arxiv.org/abs/2410.09708v1
- Date: Sun, 13 Oct 2024 03:34:19 GMT
- Title: Control the GNN: Utilizing Neural Controller with Lyapunov Stability for Test-Time Feature Reconstruction
- Authors: Jielong Yang, Rui Ding, Feng Ji, Hongbin Wang, Linbo Xie
- Abstract summary: The performance of graph neural networks (GNNs) is susceptible to discrepancies between training and testing sample distributions.
We propose a novel node feature reconstruction method grounded in Lyapunov stability theory.
We validate the effectiveness of our approach through extensive experiments across multiple datasets, demonstrating significant performance improvements.
- Score: 15.066912209426542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of graph neural networks (GNNs) is susceptible to discrepancies between training and testing sample distributions. Prior studies have attempted to enhance GNN performance by reconstructing node features during the testing phase without modifying the model parameters. However, these approaches lack theoretical analysis of the proximity between predictions and ground truth at test time. In this paper, we propose a novel node feature reconstruction method grounded in Lyapunov stability theory. Specifically, we model the GNN as a control system during the testing phase, considering node features as control variables. A neural controller that adheres to the Lyapunov stability criterion is then employed to reconstruct these node features, ensuring that the predictions progressively approach the ground truth at test time. We validate the effectiveness of our approach through extensive experiments across multiple datasets, demonstrating significant performance improvements.
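The control-theoretic recipe in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration that treats the node features of a frozen, pre-trained GNN as control variables and descends a surrogate energy V (mean prediction entropy) with a crude monotone-decrease check; the paper instead learns a neural controller subject to the Lyapunov stability criterion, so the `gnn(x, edge_index)` signature, the entropy surrogate, and the plain gradient update here are illustrative assumptions rather than the authors' method.

```python
import torch

def reconstruct_features(gnn, x, edge_index, steps=20, lr=1e-2):
    """Hedged sketch of test-time feature reconstruction: node features
    are the control variables; model parameters stay frozen. V is a
    stand-in energy (mean prediction entropy), not the paper's learned
    Lyapunov function."""
    gnn.eval()
    for p in gnn.parameters():
        p.requires_grad_(False)                  # never touch model weights
    u = x.detach().clone().requires_grad_(True)  # u(0) = original features
    opt = torch.optim.SGD([u], lr=lr)
    prev_v = float("inf")
    for _ in range(steps):
        probs = gnn(u, edge_index).softmax(dim=-1)
        v = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean()
        if v.item() >= prev_v:                   # crude decrease (stability) check
            break
        prev_v = v.item()
        opt.zero_grad()
        v.backward()                             # dV/du drives the feature update
        opt.step()
    return u.detach()                            # reconstructed features for inference
```

The early exit when V stops decreasing is a rough proxy for the Lyapunov condition V̇ < 0; the paper enforces stability through the controller's construction rather than a runtime check.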
Related papers
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- SimCalib: Graph Neural Network Calibration based on Similarity between Nodes [60.92081159963772]
Graph neural networks (GNNs) have exhibited impressive performance in modeling graph data as exemplified in various applications.
We shed light on the relationship between GNN calibration and nodewise similarity via theoretical analysis.
A novel calibration framework, named SimCalib, is accordingly proposed to consider similarity between nodes at global and local levels.
arXiv Detail & Related papers (2023-12-19T04:58:37Z)
- An LSTM-Based Predictive Monitoring Method for Data with Time-varying Variability [3.5246670856011035]
This paper explores the ability of the recurrent neural network structure to monitor processes.
It proposes a control chart based on long short-term memory (LSTM) prediction intervals for data with time-varying variability.
The proposed method is also applied to time series sensor data, which confirms that the proposed method is an effective technique for detecting abnormalities.
arXiv Detail & Related papers (2023-09-05T06:13:09Z)
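A hedged sketch of the control-chart idea in the LSTM entry above: given any trained one-step predictor (e.g., an LSTM), compute residuals on a sliding window and alarm when a new residual falls outside empirical quantile limits, so the limits adapt to time-varying variability. The window length, quantile rule, and `predict_next` interface are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def control_chart(series, predict_next, window=50, alpha=0.01):
    """Flag indices where the one-step-ahead residual leaves an
    adaptive prediction interval built from recent residuals."""
    residuals, alarms = [], []
    for t in range(window, len(series)):
        r = series[t] - predict_next(series[:t])  # one-step-ahead residual
        if len(residuals) >= window:
            lo, hi = np.quantile(residuals[-window:],
                                 [alpha / 2, 1 - alpha / 2])
            if not lo <= r <= hi:
                alarms.append(t)                  # out-of-interval: signal
        residuals.append(r)
    return alarms
```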
- FRGNN: Mitigating the Impact of Distribution Shift on Graph Neural Networks via Test-Time Feature Reconstruction [13.21683198528012]
A distribution shift can adversely affect the test performance of Graph Neural Networks (GNNs).
We propose FR-GNN, a general framework for GNNs to conduct feature reconstruction.
Notably, the reconstructed node features can be directly utilized for testing the well-trained model.
arXiv Detail & Related papers (2023-08-18T02:34:37Z)
- A New PHO-rmula for Improved Performance of Semi-Structured Networks [0.0]
We show that techniques to properly identify the contributions of the different model components in SSNs lead to suboptimal network estimation.
We propose a non-invasive post-hoc orthogonalization (PHO) that guarantees identifiability of the model components and provides better estimation and prediction quality.
Our theoretical findings are supported by numerical experiments, a benchmark comparison as well as a real-world application to COVID-19 infections.
arXiv Detail & Related papers (2023-06-01T10:23:28Z)
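On one reading of the PHO entry above, where PHO stands for post-hoc orthogonalization, the fix is a projection applied after training: the part of the network output lying in the span of the structured design matrix is re-credited to the interpretable coefficients, and only the orthogonal remainder is kept as the deep component. The NumPy sketch below encodes that reading; `Z` (structured design) and `u` (network fitted values) are hypothetical names, and the paper's exact procedure may differ.

```python
import numpy as np

def post_hoc_orthogonalize(Z, u):
    """Split network fitted values u into a part explainable by the
    structured design Z (re-credited to its coefficients) and an
    orthogonal, identifiable deep remainder. Illustrative assumption
    about what the PHO step computes."""
    beta_shift = np.linalg.pinv(Z) @ u   # correction to structured coefficients
    u_orth = u - Z @ beta_shift          # deep part, orthogonal to span(Z)
    return beta_shift, u_orth
```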
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Neural network optimal feedback control with enhanced closed loop stability [3.0981875303080795]
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems.
But the behavior of these neural network (NN) controllers is still not well understood.
In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system.
arXiv Detail & Related papers (2021-09-15T17:59:20Z)
- Stochastic Deep Model Reference Adaptive Control [9.594432031144715]
We present a Deep Neural Network-based Model Reference Adaptive Control architecture.
Deep Model Reference Adaptive Control uses a Lyapunov-based method to adapt the output-layer weights of the DNN model in real time.
A data-driven supervised learning algorithm is used to update the inner-layer parameters.
arXiv Detail & Related papers (2021-08-04T14:05:09Z)
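The two-timescale scheme in the adaptive-control entry above might be sketched as follows: a Lyapunov-motivated MRAC-style law adapts only the output-layer weights online from the tracking error, while the inner layers are refit separately by supervised learning on logged data. The specific outer-product update and gains below are textbook MRAC ingredients used for illustration, not the paper's exact adaptation laws.

```python
import numpy as np

def adapt_output_layer(W_out, phi, err, gamma=0.1, dt=0.01):
    """One Euler step of an MRAC-style Lyapunov adaptation law:
    W_dot = gamma * err phi^T, with err = x - x_ref the tracking error
    and phi the last-hidden-layer features of the DNN. Gains and
    structure are illustrative assumptions."""
    return W_out + dt * gamma * np.outer(err, phi)

# The inner-layer weights would be updated on a slower timescale by
# ordinary supervised learning on recorded (state, feature) pairs.
```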
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
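A minimal version of connection sampling for the Bayesian-GNN entry above: independently drop edges during both training and several test-time forward passes, then average the predictions to obtain a mean and an uncertainty estimate. The fixed Bernoulli drop rate and the 2×E `edge_index` layout (PyTorch Geometric convention) are simplifying assumptions; the paper learns the sampling rates adaptively.

```python
import torch

def sample_connections(edge_index, drop_rate=0.2):
    """Keep each edge independently with probability 1 - drop_rate
    (fixed rate here; the paper's rates are learned adaptively)."""
    keep = torch.rand(edge_index.size(1)) >= drop_rate
    return edge_index[:, keep]

@torch.no_grad()
def mc_predict(gnn, x, edge_index, n_samples=20):
    """Monte Carlo over sampled connection masks: the spread of the
    sampled predictions serves as an uncertainty estimate."""
    preds = torch.stack([gnn(x, sample_connections(edge_index)).softmax(-1)
                         for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)
```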
- Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
However, no existing method enjoys all of the desired properties.
We propose an intuitively-simple yet theoretically-sound method based on SIGNSGD to bridge the gap.
arXiv Detail & Related papers (2020-02-25T15:12:15Z)
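The compression underlying the SIGNSGD-style entry above can be sketched as a stochastic one-bit quantizer: each gradient coordinate maps to ±1 with probability proportional to its magnitude, making the quantizer unbiased after rescaling. This randomization and the choice of scale B are my reading of such schemes, not necessarily the paper's exact quantizer.

```python
import torch

def stochastic_sign(g, B=None):
    """One-bit stochastic quantizer: emit +1 with probability
    (1 + g_i / B) / 2, else -1, so E[output] = g / B (unbiased after
    rescaling by B). Illustrative stand-in for the paper's scheme."""
    if B is None:
        B = g.abs().max().clamp_min(1e-12)   # ensures |g_i| / B <= 1
    p = (1 + g / B) / 2
    signs = torch.where(torch.rand_like(g) < p,
                        torch.ones_like(g), -torch.ones_like(g))
    return signs, B                          # transmit 1 bit/coord plus B
```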
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.