Interpretable Stability Bounds for Spectral Graph Filters
- URL: http://arxiv.org/abs/2102.09587v1
- Date: Thu, 18 Feb 2021 19:25:52 GMT
- Title: Interpretable Stability Bounds for Spectral Graph Filters
- Authors: Henry Kenlay, Dorina Thanou, Xiaowen Dong
- Abstract summary: We study filter stability and provide a novel and interpretable upper bound on the change of filter output.
This upper bound allows us to reason, in terms of structural properties of the graph, when a spectral graph filter will be stable.
- Score: 12.590415345079991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-structured data arise in a variety of real-world contexts, ranging from
sensor and transportation to biological and social networks. As a ubiquitous
tool to process graph-structured data, spectral graph filters have been used to
solve common tasks such as denoising and anomaly detection, as well as design
deep learning architectures such as graph neural networks. Despite being an
important tool, there is a lack of theoretical understanding of the stability
properties of spectral graph filters, which are important for designing robust
machine learning models. In this paper, we study filter stability and provide a
novel and interpretable upper bound on the change of filter output, where the
bound is expressed in terms of the endpoint degrees of the deleted and newly
added edges, as well as the spatial proximity of those edges. This upper bound
allows us to reason, in terms of structural properties of the graph, when a
spectral graph filter will be stable. We further perform extensive experiments
to verify intuition that can be gained from the bound.
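To make the stability question concrete, here is a minimal numpy sketch (an illustration, not the paper's bound or code): it applies a heat-kernel low-pass filter g(λ) = exp(−τλ) of the normalized Laplacian to a signal on a small ring graph, deletes one edge, and measures how much the filter output changes. The graph, the filter choice, and the deleted edge are all assumptions made for the example.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def heat_filter(A, x, tau=1.0):
    """Apply the spectral low-pass filter g(lambda) = exp(-tau * lambda)."""
    lam, U = np.linalg.eigh(normalized_laplacian(A))
    return U @ (np.exp(-tau * lam) * (U.T @ x))

# Ring graph on 8 nodes.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Perturb the graph by deleting one edge.
A_pert = A.copy()
A_pert[0, 1] = A_pert[1, 0] = 0.0

# The size of this change is what a filter stability bound controls,
# here in terms of the degrees and location of the deleted edge.
change = np.linalg.norm(heat_filter(A, x) - heat_filter(A_pert, x))
print(change)
```

In this spirit, the paper's bound relates `change` to the endpoint degrees of the deleted edge and to how close perturbed edges are to each other, without requiring an eigendecomposition of the perturbed graph.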
Related papers
- Online Graph Filtering Over Expanding Graphs [14.594691605523005]
We propose an online graph filtering framework by relying on online learning principles.
We design filters for scenarios where the topology is both known and unknown, including a learner adaptive to such evolution.
We conduct a regret analysis to highlight the role played by the different components such as the online algorithm, the filter order, and the growing graph model.
arXiv Detail & Related papers (2024-09-11T11:50:16Z)
- HoloNets: Spectral Convolutions do extend to Directed Graphs [59.851175771106625]
Conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs.
Here we show this traditional reliance on the graph Fourier transform to be superfluous.
We provide a frequency-response interpretation of newly developed filters, investigate the influence of the basis used to express filters and discuss the interplay with characteristic operators on which networks are based.
arXiv Detail & Related papers (2023-10-03T17:42:09Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- Specformer: Spectral Graph Neural Networks Meet Transformers [51.644312964537356]
Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions.
We introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain.
By stacking multiple Specformer layers, one can build a powerful spectral GNN.
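The Specformer mechanism can be caricatured in a few lines of numpy. This is a toy with untrained random weights, not the published architecture: it embeds each Laplacian eigenvalue, runs one head of self-attention over the set of eigenvalues, decodes a scalar spectral response per eigenvalue, and applies the resulting filter to a signal. All dimensions, embeddings, and initializations are assumptions for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def eigen_attention_filter(A, x, d_model=8, seed=0):
    """Toy Specformer-style layer: embed eigenvalues, let them attend
    to each other, decode a new spectral response, apply it to x."""
    rng = np.random.default_rng(seed)
    deg = A.sum(axis=1)
    L = np.diag(deg) - A
    lam, U = np.linalg.eigh(L)

    # Sinusoidal embedding of each eigenvalue (scalars -> vectors).
    freqs = 2.0 ** np.arange(d_model // 2)
    E = np.concatenate([np.sin(np.outer(lam, freqs)),
                        np.cos(np.outer(lam, freqs))], axis=1)

    # Single-head self-attention over the set of eigenvalues.
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_model), axis=-1) @ V

    # Decode one filter response value g(lambda_i) per eigenvalue.
    w_out = rng.standard_normal(d_model) / np.sqrt(d_model)
    g = attn @ w_out

    return U @ (g * (U.T @ x))

# 5-node star graph.
n = 5
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
x = np.arange(n, dtype=float)
y = eigen_attention_filter(A, x)
```

Because the attended eigenvalues only parameterize the spectral response, the layer remains a linear operator in the input signal, like a classical spectral filter.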
arXiv Detail & Related papers (2023-03-02T07:36:23Z)
- Graph Filters for Signal Processing and Machine Learning on Graphs [83.29608206147515]
We provide a comprehensive overview of graph filters, including the different filtering categories, design strategies for each type, and trade-offs between different types of graph filters.
We discuss how to extend graph filters into filter banks and graph neural networks to enhance the representational power.
Our aim is for this article to provide a unifying framework and a common understanding for both beginner and experienced researchers.
arXiv Detail & Related papers (2022-11-16T11:56:45Z)
- Stability to Deformations of Manifold Filters and Manifold Neural Networks [89.53585099149973]
The paper defines and studies manifold convolutional filters and manifold neural networks (MNNs).
The main technical contribution of the paper is to analyze the stability of manifold filters and MNNs to smooth deformations of the manifold.
arXiv Detail & Related papers (2021-06-07T15:41:03Z)
- An Experimental Study of the Transferability of Spectral Graph Networks [5.736353542430439]
Spectral graph convolutional networks are generalizations of standard convolutional networks for graph-structured data using the Laplacian operator.
Recent works have proved the stability of spectral filters under graph perturbations.
arXiv Detail & Related papers (2020-12-18T14:15:07Z)
- FiGLearn: Filter and Graph Learning using Optimal Transport [49.428169585114496]
We introduce a novel graph signal processing framework for learning the graph and its generating filter from signal observations.
We show how this framework can be used to infer missing values if only very little information is available.
arXiv Detail & Related papers (2020-10-29T10:00:42Z)
- A User Guide to Low-Pass Graph Signal Processing and its Applications [31.90359683602266]
We show how to leverage properties of low-pass graph filters to learn the graph topology or identify its community structure.
We illustrate how to represent graph data through sampling, recover missing measurements, and de-noise graph data.
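As a sketch of the de-noising use case (a standard Tikhonov graph-smoothing filter, not necessarily the guide's exact formulation): solving (I + αL)x = y applies the low-pass spectral response h(λ) = 1/(1 + αλ), attenuating high graph frequencies. The graph, signal, noise level, and α below are assumptions for the example.

```python
import numpy as np

def tikhonov_denoise(A, y, alpha=2.0):
    """Low-pass denoising: solve (I + alpha*L) x = y,
    i.e. apply h(lambda) = 1 / (1 + alpha*lambda)."""
    L = np.diag(A.sum(axis=1)) - A  # combinatorial Laplacian
    return np.linalg.solve(np.eye(len(A)) + alpha * L, y)

# Path graph carrying a smooth signal plus noise.
n = 30
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

rng = np.random.default_rng(1)
smooth = np.sin(np.linspace(0, np.pi, n))   # low graph-frequency signal
noisy = smooth + 0.3 * rng.standard_normal(n)
denoised = tikhonov_denoise(A, noisy)

err_noisy = np.linalg.norm(noisy - smooth)
err_denoised = np.linalg.norm(denoised - smooth)
print(err_noisy, err_denoised)
```

The filter keeps the slowly varying component (low Laplacian eigenvalues) nearly intact while shrinking the noise that lives in the high frequencies, so the reconstruction error drops.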
arXiv Detail & Related papers (2020-08-04T03:27:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.