Towards Inductive Robustness: Distilling and Fostering Wave-induced
Resonance in Transductive GCNs Against Graph Adversarial Attacks
- URL: http://arxiv.org/abs/2312.08651v1
- Date: Thu, 14 Dec 2023 04:25:50 GMT
- Authors: Ao Liu, Wenshan Li, Tao Li, Beibei Li, Hanyuan Huang, Pan Zhou
- Abstract summary: Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks, where slight perturbations in the graph structure can lead to erroneous predictions.
Here, we discover that transductive GCNs inherently possess a distillable robustness, achieved through a wave-induced resonance process.
We present Graph Resonance-fostering Network (GRN) to foster this resonance via learning node representations.
- Score: 56.56052273318443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have recently been shown to be vulnerable to
adversarial attacks, where slight perturbations in the graph structure can lead
to erroneous predictions. However, current robust models for defending against
such attacks inherit the transductive limitations of graph convolutional
networks (GCNs). As a result, they are constrained by fixed structures and do
not naturally generalize to unseen nodes. Here, we discover that transductive
GCNs inherently possess a distillable robustness, achieved through a
wave-induced resonance process. Based on this, we foster this resonance to
facilitate inductive and robust learning. Specifically, we first prove that the
signal formed by GCN-driven message passing (MP) is equivalent to the
edge-based Laplacian wave, where, within a wave system, resonance can naturally
emerge between the signal and its transmitting medium. This resonance provides
inherent resistance to malicious perturbations inflicted on the signal system.
We then prove that merely three MP iterations within GCNs can induce signal
resonance between nodes and edges, manifesting as a coupling between nodes and
their distillable surrounding local subgraph. Consequently, we present Graph
Resonance-fostering Network (GRN) to foster this resonance via learning node
representations from their distilled resonating subgraphs. By capturing the
edge-transmitted signals within this subgraph and integrating them with the
node signal, GRN embeds these combined signals into the central node's
representation. This node-wise embedding approach allows for generalization to
unseen nodes. We validate our theoretical findings with experiments, and
demonstrate that GRN generalizes robustness to unseen nodes, whilst maintaining
state-of-the-art classification accuracy on perturbed graphs.
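The abstract's claim that three message-passing (MP) iterations couple a node with its surrounding local subgraph can be illustrated with standard symmetric-normalized GCN propagation. This is a minimal sketch of the receptive-field argument only (the graph, features, and variable names are illustrative; the wave/resonance machinery itself is developed in the paper): after k propagation steps, a node's representation depends exactly on its k-hop neighborhood.

```python
import numpy as np

# Toy path graph 0-1-2-3-4.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1.0

A_hat = A + np.eye(5)                     # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} (A + I) D^{-1/2}

X = np.eye(5)                             # one-hot features expose the receptive field
H = X
for _ in range(3):                        # three MP iterations, as in the abstract
    H = A_norm @ H

# Node 0's representation now draws on nodes 0..3 (its 3-hop ball),
# but not on node 4, which is 4 hops away.
print(np.nonzero(H[0])[0])
```

Running this prints `[0 1 2 3]`: the signal reaching node 0 after three iterations is exactly the one transmitted through its 3-hop subgraph, which is the subgraph GRN would distill and embed for that node.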
Related papers
- Bundle Neural Networks for message diffusion on graphs [10.018379001231356]
We prove that Bundle Neural Networks (BuNNs) can approximate any feature transformation over nodes on any family of graphs given injective positional encodings, resulting in universal node-level expressivity.
arXiv Detail & Related papers (2024-05-24T13:28:48Z)
- Robust Node Representation Learning via Graph Variational Diffusion Networks [7.335425547621226]
In recent years, compelling evidence has shown that GNN-based node representation learning can be substantially degraded by perturbations to the graph structure.
To learn robust node representation in the presence of perturbations, various works have been proposed to safeguard GNNs.
We propose the Graph Variational Diffusion Network (GVDN), a new node encoder that effectively manipulates Gaussian noise to safeguard robustness on perturbed graphs.
arXiv Detail & Related papers (2023-12-18T03:18:53Z)
- Distributional Signals for Node Classification in Graph Neural Networks [36.30743671968087]
In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP).
In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals.
We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks.
arXiv Detail & Related papers (2023-04-07T06:54:42Z)
- Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- Can one hear the position of nodes? [5.634825161148484]
The sound emitted by vibrations of individual nodes reflects the structure of the overall network topology.
A sound recognition neural network is trained to infer centrality measures from the nodes' wave-forms.
Auralization of the network topology may open new directions in arts, competing with network visualization.
arXiv Detail & Related papers (2022-11-10T16:00:53Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some works show that decoupled propagation outperforms coupled propagation and better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)
- Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks [0.0]
Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features.
Here, we propose to augment conventional GCNs with geometric scattering transforms and residual convolutions.
The former enables band-pass filtering of graph signals, thus alleviating the so-called oversmoothing often encountered in GCNs.
arXiv Detail & Related papers (2020-03-18T18:03:08Z)
- Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.