Exact Certification of (Graph) Neural Networks Against Label Poisoning
- URL: http://arxiv.org/abs/2412.00537v1
- Date: Sat, 30 Nov 2024 17:05:12 GMT
- Title: Exact Certification of (Graph) Neural Networks Against Label Poisoning
- Authors: Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
- Abstract summary: Machine learning models are vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance.
We introduce an exact certification method, deriving both sample-wise and collective certificates.
Our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
- Score: 50.87615167799367
- Abstract: Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance. Thus, deriving robustness certificates is important to guarantee that test predictions remain unaffected and to understand worst-case robustness behavior. However, for Graph Neural Networks (GNNs), the problem of certifying label flipping has so far been unsolved. We change this by introducing an exact certification method, deriving both sample-wise and collective certificates. Our method leverages the Neural Tangent Kernel (NTK) to capture the training dynamics of wide networks, enabling us to reformulate the bilevel optimization problem representing label flipping into a Mixed-Integer Linear Program (MILP). We apply our method to certify a broad range of GNN architectures in node classification tasks. Thereby, concerning the worst-case robustness to label flipping: $(i)$ we establish hierarchies of GNNs on different benchmark graphs; $(ii)$ we quantify the effect of architectural choices such as activations, depth, and skip-connections; and, surprisingly, $(iii)$ we uncover a novel phenomenon of robustness plateauing for intermediate perturbation budgets across all investigated datasets and architectures. While we focus on GNNs, our certificates are applicable to sufficiently wide NNs in general through their NTK. Thus, our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
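To make the mechanics concrete, here is a minimal, hypothetical sketch of a sample-wise certificate. It is not the authors' implementation: an RBF kernel stands in for the NTK, plain kernel ridge regression stands in for the wide-network training dynamics, and SciPy's MILP solver plays the role of the MILP reformulation. All variable names, the kernel choice, and the budget are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a sample-wise label-flipping certificate.
# NOT the paper's implementation: an RBF kernel stands in for the NTK, and
# plain kernel (ridge) regression stands in for the wide-network limit.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))            # training inputs (toy data)
y = np.sign(rng.normal(size=20))        # training labels in {-1, +1}
x_test = rng.normal(size=(1, 5))        # one test point to certify

def kernel(A, B, gamma=0.5):            # RBF kernel as an NTK stand-in
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = kernel(X, X) + 1e-3 * np.eye(len(X))           # regularized kernel matrix
w = np.linalg.solve(K, kernel(X, x_test)).ravel()  # f(x_test) = w @ y, linear in y

budget = 3                              # adversary may flip at most 3 labels
margin = w @ y
sign = np.sign(margin)                  # clean prediction's sign

# Flipping label i changes f(x_test) by -2 * w[i] * y[i]. With binary flip
# indicators b, the worst case within the budget is exactly the MILP:
#   min_b  sign * sum_i (-2 * w[i] * y[i]) * b[i]   s.t.  sum_i b[i] <= budget
c = sign * (-2.0 * w * y)
res = milp(c,
           constraints=LinearConstraint(np.ones((1, len(X))), 0, budget),
           integrality=np.ones(len(X)),
           bounds=Bounds(0, 1))

worst_margin = sign * margin + res.fun  # signed margin under the worst flips
print("certified" if worst_margin > 0 else "not certifiable at this budget")
```

Because the prediction is linear in the labels, the MILP optimum equals the true worst case over all flip patterns within the budget, which is what makes such a certificate exact rather than merely a bound.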
Related papers
- Bridging Domain Adaptation and Graph Neural Networks: A Tensor-Based Framework for Effective Label Propagation [23.79865440689265]
Graph Neural Networks (GNNs) have recently become the predominant tools for studying graph data.
Despite state-of-the-art performance on graph classification tasks, GNNs are overwhelmingly trained in a single domain under supervision.
We propose the Label-Propagation Tensor Graph Neural Network (LP-TGNN) framework to bridge the gap between graph data and traditional domain adaptation methods.
arXiv Detail & Related papers (2025-02-12T15:36:38Z)
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.5438758943381854]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING achieves better trade-offs between utility and fairness while limiting the privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers can easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks [95.89825298412016]
We propose novel gray-box certificates for Graph Neural Networks (GNNs).
We randomly intercept messages and analyze the probability that messages from adversarially controlled nodes reach their target nodes.
Our certificates provide stronger guarantees for attacks at larger distances; a toy sketch of this distance effect follows this entry.
arXiv Detail & Related papers (2023-01-05T12:21:48Z)
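As a toy illustration of that distance effect (assumed numbers, not taken from the paper): if each transmitted message is intercepted independently with probability p, a message from an adversarial node d hops away survives any single path with probability (1 - p)^d, which decays quickly in d.

```python
# Toy illustration (assumed numbers, not from the paper): probability that a
# message from an adversarial node d hops away survives a single path when
# each hop is independently intercepted with probability p.
p = 0.3
for d in (1, 2, 4, 8):
    print(f"d = {d}: single-path survival probability (1 - p)^d = {(1 - p) ** d:.4f}")
```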
- Robust Training of Graph Neural Networks via Noise Governance [27.767913371777247]
Graph Neural Networks (GNNs) have become widely used models for semi-supervised learning.
In this paper, we consider an important yet challenging scenario where labels on nodes of graphs are not only noisy but also scarce.
We propose a novel RTGNN framework that achieves better robustness by learning to explicitly govern label noise.
arXiv Detail & Related papers (2022-11-12T09:25:32Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- Efficient Exact Verification of Binarized Neural Networks [15.639601066641099]
Binarized Neural Networks (BNNs) provide comparable robustness while allowing exact and significantly more efficient verification.
We present a new system, EEV, for efficient and exact verification of BNNs.
We demonstrate the effectiveness of EEV by presenting the first exact verification results for $L_\infty$-bounded adversarial robustness of nontrivial convolutional BNNs.
arXiv Detail & Related papers (2020-05-07T16:34:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.