Generalized energy and gradient flow via graph framelets
- URL: http://arxiv.org/abs/2210.04124v1
- Date: Sat, 8 Oct 2022 23:40:45 GMT
- Title: Generalized energy and gradient flow via graph framelets
- Authors: Andi Han, Dai Shi, Zhiqi Shao, Junbin Gao
- Abstract summary: We provide a theoretical understanding of framelet-based graph neural networks through the perspective of energy gradient flow.
By viewing the framelet-based models as discretized gradient flows of some energy, we show that they can induce both low-frequency- and high-frequency-dominated dynamics.
- Score: 27.71018932795014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we provide a theoretical understanding of framelet-based graph neural networks through the perspective of energy gradient flow. By viewing the framelet-based models as discretized gradient flows of some energy, we show that they can induce both low-frequency- and high-frequency-dominated dynamics via separate weight matrices for the different frequency components. This substantiates their good empirical performance on both homophilic and heterophilic graphs. We then propose a generalized energy via framelet decomposition and show that its gradient flow leads to a novel graph neural network, which includes many existing models as special cases. We then explain how the proposed model generally leads to more flexible dynamics, thus potentially enhancing the representation power of graph neural networks.
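To make the gradient-flow reading concrete, here is a minimal NumPy sketch (not the authors' code) of one explicit-Euler step in which the low- and high-frequency bands are propagated with separate weight matrices. The median-eigenvalue spectral split stands in for a true framelet decomposition, and `W_low`/`W_high` are hand-picked illustrations; with a negative-definite `W_high`, the high-frequency band is amplified rather than smoothed.

```python
import numpy as np

# Hedged sketch (illustrative, not the paper's implementation): one explicit
# Euler step X <- X - tau * (L P_low X W_low + L P_high X W_high), where each
# frequency band carries its own weight matrix. The median-eigenvalue spectral
# split below stands in for the framelet decomposition used in the paper.

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def framelet_flow_step(X, A, W_low, W_high, tau=0.1):
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)
    U_low = U[:, lam <= np.median(lam)]
    P_low = U_low @ U_low.T                 # low-frequency projector
    P_high = np.eye(len(A)) - P_low         # high-frequency projector
    grad = L @ (P_low @ X @ W_low) + L @ (P_high @ X @ W_high)
    return X - tau * grad

# Toy usage: a 4-node path graph with 2-dimensional features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.default_rng(0).normal(size=(4, 2))
W_low = np.eye(2)           # attractive channel: diffusion-like smoothing
W_high = -0.5 * np.eye(2)   # repulsive channel: amplifies high frequencies
X_next = framelet_flow_step(X, A, W_low, W_high)
```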
Related papers
- Neural Message Passing Induced by Energy-Constrained Diffusion [79.9193447649011]
We propose an energy-constrained diffusion model as a principled, interpretable framework for understanding the mechanism of MPNNs.
We show that the new model can yield promising performance for cases where the data structures are observed (as a graph), partially observed or completely unobserved.
arXiv Detail & Related papers (2024-09-13T17:54:41Z)
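A minimal reading of the diffusion framework above, restricted to the simplest case where the structure is fully observed as a graph: one explicit Euler step of heat diffusion dX/dt = -LX. The attention-like couplings the paper derives from the energy constraint are not reproduced here.

```python
import numpy as np

# Hedged sketch: message passing as one explicit Euler step of graph heat
# diffusion dX/dt = -L X, the fully-observed-structure special case only.
# The paper's energy-constrained attention couplings are not modeled.

def diffusion_step(X, A, tau=0.2):
    """X: node features (n, d); A: adjacency (n, n); tau: step size."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return X - tau * (L @ X)   # each node moves toward its neighbours

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
X = diffusion_step(X, A)
```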
- Injecting Hamiltonian Architectural Bias into Deep Graph Networks for Long-Range Propagation [55.227976642410766]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning.
Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks.
We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors.
arXiv Detail & Related papers (2024-05-27T13:36:50Z)
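As a generic illustration of non-dissipative propagation (an assumed form, not the paper's architecture): symplectic-Euler integration of graph-coupled Hamiltonian dynamics approximately conserves the energy H, so signals keep travelling across the graph instead of decaying.

```python
import numpy as np

# Generic sketch (assumed form, not the paper's model): node states carry
# positions q and momenta p, evolved by symplectic Euler under
# H(q, p) = 0.5*||p||^2 + 0.25 * sum_ij A_ij ||q_i - q_j||^2.
# Symplectic integration approximately conserves H, which is one way to get
# the "non-dissipative" long-range behaviour described above.

def hamiltonian_step(q, p, A, dt=0.05):
    L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian
    grad_q = L @ q                      # dH/dq for the quadratic coupling
    p = p - dt * grad_q                 # momentum update (kick)
    q = q + dt * p                      # position update (drift)
    return q, p

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
q, p = rng.normal(size=(3, 2)), np.zeros((3, 2))
for _ in range(100):
    q, p = hamiltonian_step(q, p, A)   # energy stays near its initial value
```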
- Dirichlet Energy Enhancement of Graph Neural Networks by Framelet Augmentation [19.56268823452656]
We introduce a framelet system into the analysis of Dirichlet energy and take a multi-scale perspective to leverage the Dirichlet energy.
Based on that, we design the Energy Enhanced Convolution (EEConv), which is an effective and practical operation.
Experiments show that deep GNNs with EEConv achieve state-of-the-art performance over various node classification datasets.
arXiv Detail & Related papers (2023-11-09T22:22:18Z)
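To pin down the quantity being enhanced, a small sketch computing the Dirichlet energy of node features and its exact split across a low/high frequency decomposition; the spectral projectors are a crude stand-in for the paper's multi-scale framelet system.

```python
import numpy as np

# Sketch of the quantity at stake: Dirichlet energy E(X) = 0.5 * tr(X^T L X)
# with the symmetric normalized Laplacian. Over-smoothed features have E near
# zero; "energy enhancement" keeps it bounded away from zero in deep stacks.
# The low/high split uses spectral projectors as a stand-in for framelets.

def dirichlet_energy(X, L):
    return 0.5 * np.trace(X.T @ L @ X)

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))    # normalized Laplacian

lam, U = np.linalg.eigh(L)
low = U[:, lam <= np.median(lam)]
X = np.random.default_rng(2).normal(size=(4, 3))
X_low = low @ low.T @ X                        # smooth component
X_high = X - X_low                             # oscillatory component
print(dirichlet_energy(X, L), dirichlet_energy(X_low, L),
      dirichlet_energy(X_high, L))             # E(X) = E(X_low) + E(X_high)
```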
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
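A generic advection-diffusion step on a graph, illustrating the equation family named in the title; the transport matrix P and all coefficients below are assumptions for illustration, not ADiT's learned components.

```python
import numpy as np

# Generic advection-diffusion step (illustrative, not ADiT itself):
#   dX/dt = -kappa * L X        (diffusion: topology-dependent smoothing)
#           + beta * (P - I) X  (advection: directed transport along P).
# In the paper the advective term is tied to latent, topology-independent
# relations; here P is just a fixed row-stochastic matrix.

def advective_diffusion_step(X, A, P, kappa=0.5, beta=0.5, tau=0.1):
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    return X + tau * (-kappa * (L @ X) + beta * ((P - np.eye(len(A))) @ X))

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
P = np.array([[0.0, 1.0, 0.0],       # node 0 receives from node 1, etc.
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
X = np.random.default_rng(3).normal(size=(3, 2))
X = advective_diffusion_step(X, A, P)
```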
- SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases [66.61789780666727]
We show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z)
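One way to read second-order continuity: keep an explicit velocity state and integrate a learned acceleration instead of updating positions directly. The radial pairwise force below is a hypothetical equivariant surrogate for SEGNO's learned message function.

```python
import numpy as np

# Sketch of second-order (position + velocity) integration in a GNN-style
# update. The radial pairwise "force" is a hypothetical equivariant
# surrogate for a learned message function: rotating all positions rotates
# the accelerations identically, so the update stays E(n)-equivariant.

def radial_force(x, A):
    diff = x[:, None, :] - x[None, :, :]          # (n, n, dim) displacements
    dist2 = (diff ** 2).sum(-1, keepdims=True) + 1e-6
    return (A[..., None] * diff / dist2).sum(axis=1)

def second_order_step(x, v, A, dt=0.01):
    a = radial_force(x, A)       # learned in SEGNO; fixed surrogate here
    v = v + dt * a               # velocity carries state across layers
    x = x + dt * v               # position follows the velocity
    return x, v

A = np.array([[0, 1], [1, 0]], float)
x = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.zeros_like(x)
x, v = second_order_step(x, v, A)
```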
- A Fractional Graph Laplacian Approach to Oversmoothing [15.795926248847026]
We generalize the concept of oversmoothing from undirected to directed graphs.
We propose fractional graph Laplacian neural ODEs, which describe non-local dynamics.
Our method is more flexible with respect to the convergence of the graph's Dirichlet energy, thereby mitigating oversmoothing.
arXiv Detail & Related papers (2023-05-22T14:52:33Z)
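The fractional Laplacian has a direct spectral construction that makes the non-local dynamics concrete: for non-integer alpha, L^alpha is dense, so each update mixes information between non-adjacent nodes. A minimal sketch (the exponent and step size are illustrative):

```python
import numpy as np

# Sketch: fractional graph Laplacian L^alpha built spectrally, then used in
# an Euler-discretized linear ODE dX/dt = -L^alpha X. For non-integer alpha,
# L^alpha has nonzero entries between non-adjacent nodes, so every step is
# non-local rather than restricted to graph neighbours.

def fractional_laplacian(A, alpha):
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    lam, U = np.linalg.eigh(L)                 # eigenvalues lie in [0, 2]
    return U @ np.diag(np.maximum(lam, 0.0) ** alpha) @ U.T  # guard roundoff

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L_half = fractional_laplacian(A, alpha=0.5)
print(np.round(L_half, 3))                     # dense: non-adjacent entries

X = np.random.default_rng(4).normal(size=(4, 2))
for _ in range(10):                            # Euler steps of dX/dt = -L^a X
    X = X - 0.1 * (L_half @ X)
```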
- Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z)
- Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems [5.787429262238507]
We evaluate the performance of graph neural networks (GNNs) and their variants with explicit constraints and different architectures.
Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance.
All the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
arXiv Detail & Related papers (2022-11-10T12:29:30Z)
- Understanding convolution on graphs via energies [23.18124653469668]
Graph Neural Networks (GNNs) typically operate by message-passing, where the state of a node is updated based on the information received from its neighbours.
Most message-passing models act as graph convolutions, where features are mixed by a shared, linear transformation before being propagated over the edges.
On node-classification tasks, graph convolutions have been shown to suffer from two limitations: poor performance on heterophilic graphs, and over-smoothing.
arXiv Detail & Related papers (2022-06-22T11:45:36Z)
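The shared-linear-transformation view above can be written in a few lines. Following the energy perspective, a symmetric channel-mixing matrix W turns the update into a gradient flow whose positive eigenvalues act attractively (smoothing) and negative ones repulsively (sharpening); the W below is hand-picked for illustration.

```python
import numpy as np

# Sketch of graph convolution as residual gradient flow: X <- X + tau*Â X W
# with Â = D^{-1/2} A D^{-1/2} and W symmetric. The eigenvalues of W set the
# behaviour: positive -> attraction along edges (low-pass, can over-smooth),
# negative -> repulsion (high-pass, useful on heterophilic graphs).

def conv_flow_step(X, A, W, tau=0.1):
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))
    return X + tau * (A_hat @ X @ W)

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
W = np.array([[1.0, 0.0],            # symmetric: one attractive channel ...
              [0.0, -1.0]])          # ... and one repulsive channel
X = np.random.default_rng(5).normal(size=(3, 2))
X = conv_flow_step(X, A, W)
```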
- Graph-Coupled Oscillator Networks [23.597444325599835]
Graph-Coupled Oscillator Networks (GraphCON) is a novel framework for deep learning on graphs.
We show that our framework offers competitive performance with respect to the state-of-the-art on a variety of graph-based learning tasks.
arXiv Detail & Related papers (2022-02-04T18:29:49Z)
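A sketch of the oscillator dynamics behind GraphCON, with an illustrative coupling and coefficients rather than the published architecture: a second-order ODE X'' = tanh(Â X) - gamma*X - alpha*X', discretized one layer per step.

```python
import numpy as np

# Sketch of graph-coupled oscillator dynamics (illustrative coefficients):
#   X'' = tanh(Â X) - gamma * X - alpha * X'
# discretized by updating the velocity Y first, then the state X. The
# oscillatory second-order dynamics are what lets such networks stay stable
# at depth without node features collapsing to a constant.

def oscillator_step(X, Y, A, dt=0.1, gamma=1.0, alpha=0.1):
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))
    Y = Y + dt * (np.tanh(A_hat @ X) - gamma * X - alpha * Y)
    X = X + dt * Y
    return X, Y

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
X = np.random.default_rng(6).normal(size=(3, 2))
Y = np.zeros_like(X)
for _ in range(50):          # fifty "layers" of oscillator updates
    X, Y = oscillator_step(X, Y, A)
```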
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations; however, stacking many such layers degrades performance.
Several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
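Adaptive incorporation of large receptive fields can be sketched as decoupled propagation: propagate K hops, then gate each hop per node. The gate projection s below is a hypothetical fixed parameter standing in for a trained one.

```python
import numpy as np

# Sketch of decoupled, depth-adaptive aggregation: compute Â^k X for
# k = 0..K, then combine the hops with per-node sigmoid gates. In a trained
# model the gate projection s is learned; here it is fixed for illustration.

def adaptive_deep_aggregation(X, A, K=4, seed=7):
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))
    hops, H = [X], X
    for _ in range(K):
        H = A_hat @ H                       # one more hop of propagation
        hops.append(H)
    s = np.random.default_rng(seed).normal(size=(X.shape[1], 1))
    out = np.zeros_like(X)
    for H in hops:
        gate = 1.0 / (1.0 + np.exp(-(H @ s)))   # per-node scalar gate (n, 1)
        out += gate * H                     # deeper hops count only where useful
    return out

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.default_rng(8).normal(size=(4, 3))
Z = adaptive_deep_aggregation(X, A)
```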