An Adversarial Robustness Perspective on the Topology of Neural Networks
- URL: http://arxiv.org/abs/2211.02675v1
- Date: Fri, 4 Nov 2022 18:00:53 GMT
- Title: An Adversarial Robustness Perspective on the Topology of Neural Networks
- Authors: Morgane Goibert, Thomas Ricatte, Elvis Dohmatob
- Abstract summary: We study the impact of neural networks (NNs) topology on adversarial robustness.
We find that graphs from clean inputs are more centralized around highway edges, whereas those from adversaries are more diffuse.
- Score: 12.416690940269772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we investigate the impact of neural networks (NNs) topology on
adversarial robustness. Specifically, we study the graph produced when an input
traverses all the layers of a NN, and show that such graphs are different for
clean and adversarial inputs. We find that graphs from clean inputs are more
centralized around highway edges, whereas those from adversaries are more
diffuse, leveraging under-optimized edges. Through experiments on a variety of
datasets and architectures, we show that these under-optimized edges are a
source of adversarial vulnerability and that they can be used to detect
adversarial inputs.
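To make the idea concrete, here is a minimal sketch of one way such an induced graph and a detection statistic could be computed. It assumes a toy ReLU multilayer perceptron, weights each edge (i, j) by the contribution magnitude |W[j, i] * a_i| observed during the forward pass, and scores an input by how much of the total edge mass is concentrated on the top few percent of edges. The contribution formula, the toy network, and the top-k concentration statistic are illustrative assumptions, not the paper's exact construction of highway and under-optimized edges.

```python
import numpy as np


def induced_edge_weights(weights, biases, x):
    """Per-layer edge contributions |W[j, i] * a_i| for input x (toy ReLU MLP)."""
    contributions = []
    a = x
    for idx, (W, b) in enumerate(zip(weights, biases)):
        # Edge (i, j) of this layer is weighted by the magnitude of its contribution.
        contributions.append(np.abs(W * a[None, :]))
        z = W @ a + b
        a = np.maximum(z, 0.0) if idx < len(weights) - 1 else z  # ReLU except last layer
    return contributions


def concentration_score(contributions, top_frac=0.05):
    """Fraction of total edge mass carried by the top `top_frac` of edges."""
    flat = np.concatenate([c.ravel() for c in contributions])
    k = max(1, int(top_frac * flat.size))
    return np.sort(flat)[-k:].sum() / (flat.sum() + 1e-12)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 16 -> 32 -> 10 network with random weights, for illustration only.
    weights = [rng.normal(size=(32, 16)), rng.normal(size=(10, 32))]
    biases = [np.zeros(32), np.zeros(10)]
    x_clean = rng.normal(size=16)
    x_perturbed = x_clean + 0.3 * rng.normal(size=16)  # stand-in for an adversarial input

    for name, x in [("clean", x_clean), ("perturbed", x_perturbed)]:
        score = concentration_score(induced_edge_weights(weights, biases, x))
        print(f"{name}: top-edge mass fraction = {score:.3f}")
```

Under these assumptions, a detector would flag inputs whose concentration score falls below a threshold calibrated on clean data, since a more diffuse induced graph is the behavior the abstract associates with adversarial inputs.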
Related papers
- Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation [1.6385815610837167]
We investigate the impact of test-time adversarial attacks through edge perturbations, which involve both edge insertions and deletions.
A novel explainability-based method is proposed to identify important nodes in the graph and perform edge perturbation between these nodes.
Results suggest that introducing edges between nodes of different classes has higher impact as compared to removing edges among nodes within the same class.
arXiv Detail & Related papers (2023-12-28T17:41:30Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE)
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to $17.0\%$ AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - The Devil is in the Conflict: Disentangled Information Graph Neural Networks for Fraud Detection [17.254383007779616]
We argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
We propose a simple and effective method that uses the attention mechanism to adaptively fuse two views.
Our model can significantly outperform state-of-the-art baselines on real-world fraud detection datasets.
arXiv Detail & Related papers (2022-10-22T08:21:49Z) - Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
arXiv Detail & Related papers (2022-01-23T21:18:17Z) - Revisiting Edge Detection in Convolutional Neural Networks [3.5281112495479245]
We show that edges cannot be represented properly in the first convolutional layer of a neural network.
We propose edge-detection units and show that they reduce performance loss and generate qualitatively different representations.
arXiv Detail & Related papers (2020-12-25T13:53:04Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs)
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)