Collective Robustness Certificates: Exploiting Interdependence in Graph
Neural Networks
- URL: http://arxiv.org/abs/2302.02829v1
- Date: Mon, 6 Feb 2023 14:46:51 GMT
- Title: Collective Robustness Certificates: Exploiting Interdependence in Graph
Neural Networks
- Authors: Jan Schuchardt, Aleksandar Bojchevski, Johannes Gasteiger, Stephan
G\"unnemann
- Abstract summary: In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions.
Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks.
We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation.
- Score: 71.78900818931847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In tasks like node classification, image segmentation, and named-entity
recognition we have a classifier that simultaneously outputs multiple
predictions (a vector of labels) based on a single input, i.e. a single graph,
image, or document respectively. Existing adversarial robustness certificates
consider each prediction independently and are thus overly pessimistic for such
tasks. They implicitly assume that an adversary can use different perturbed
inputs to attack different predictions, ignoring the fact that we have a single
shared input. We propose the first collective robustness certificate which
computes the number of predictions that are simultaneously guaranteed to remain
stable under perturbation, i.e. cannot be attacked. We focus on Graph Neural
Networks and leverage their locality property - perturbations only affect the
predictions in a close neighborhood - to fuse multiple single-node certificates
into a drastically stronger collective certificate. For example, on the
Citeseer dataset our collective certificate for node classification increases
the average number of certifiable feature perturbations from $7$ to $351$.
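For intuition, here is a minimal brute-force sketch of the collective counting idea. It rests on simplifying assumptions that are not the paper's actual setup: the adversary perturbs the attributes of at most `budget` nodes, each prediction comes with a precomputed single-node certificate (how many perturbed nodes inside its receptive field it provably tolerates), and the receptive fields are given. Because all predictions share one input, the adversary must commit to a single perturbed set, and the certified count is the number of predictions that survive even the worst such set. The paper replaces this exhaustive enumeration with an efficiently solvable optimization over the shared perturbation and works with feature perturbations; all names below are illustrative.

```python
from itertools import combinations

def collective_certified_count(receptive_fields, single_node_radii, budget):
    """Toy collective certificate by brute force (illustration only).

    receptive_fields[i]  : set of nodes whose features can influence prediction i
    single_node_radii[i] : number of perturbed nodes inside that receptive field
                           that prediction i provably tolerates
    budget               : total number of nodes the adversary may perturb

    Returns how many predictions remain certified no matter how the adversary
    spends its single, shared budget.
    """
    nodes = list(receptive_fields)
    worst_case_attacked = 0
    # Enumerate every admissible perturbed set (feasible only for tiny graphs;
    # the paper bounds this worst case with an efficient optimization instead).
    for k in range(budget + 1):
        for perturbed in combinations(nodes, k):
            perturbed = set(perturbed)
            attacked = sum(
                len(perturbed & receptive_fields[i]) > single_node_radii[i]
                for i in nodes
            )
            worst_case_attacked = max(worst_case_attacked, attacked)
    return len(nodes) - worst_case_attacked

# Path graph 0-1-2-3 with 1-hop receptive fields: with a budget of 2, naive
# per-prediction certificates would treat every node as attackable, while the
# shared-input view still certifies half of them.
fields = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3}}
radii = {0: 1, 1: 1, 2: 1, 3: 1}
print(collective_certified_count(fields, radii, budget=2))  # -> 2
```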
Related papers
- Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge-splitting technique that specifies how each edge is used, so that every edge serves solely as either topology or supervision.
To emphasize the differences between pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect these differences.
arXiv Detail & Related papers (2022-12-25T23:19:56Z) - Birds of a Feather Trust Together: Knowing When to Trust a Classifier
via Adaptive Neighborhood Aggregation [30.34223543030105]
We show how NeighborAgg can leverage the two essential types of information via adaptive neighborhood aggregation.
We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate.
arXiv Detail & Related papers (2022-11-29T18:43:15Z) - Localized Randomized Smoothing for Collective Robustness Certification [60.83383487495282]
We propose a more general collective robustness certificate for all types of models.
We show that this approach is beneficial for the larger class of softly local models.
The certificate is based on our novel localized randomized smoothing approach.
arXiv Detail & Related papers (2022-10-28T14:10:24Z) - Label-Only Membership Inference Attack against Node-Level Graph Neural
Networks [30.137860266059004]
Graph Neural Networks (GNNs) are vulnerable to Membership Inference Attacks (MIAs).
We propose a label-only MIA against GNNs for node classification with the help of GNNs' flexible prediction mechanism.
Our attacking method achieves around 60% accuracy, precision, and Area Under the Curve (AUC) for most datasets and GNN models.
arXiv Detail & Related papers (2022-07-27T19:46:26Z) - Shared Certificates for Neural Network Verification [8.777291205946444]
Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation.
This process is repeated from scratch independently for each input.
We introduce a new method for reducing this verification cost without losing precision.
arXiv Detail & Related papers (2021-09-01T16:59:09Z) - Almost Tight L0-norm Certified Robustness of Top-k Predictions against
Adversarial Perturbations [78.23408201652984]
Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches.
Our work is based on randomized smoothing, which builds a provably robust classifier by randomizing the input (a minimal sketch of the basic smoothing recipe appears after this list).
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
arXiv Detail & Related papers (2020-11-15T21:34:44Z) - Knowing what you know: valid and validated confidence sets in multiclass
and multilabel prediction [0.8594140167290097]
We develop conformal prediction methods for constructing valid confidence sets in multiclass and multilabel problems.
By leveraging ideas from quantile regression, we build methods that always guarantee correct coverage but additionally provide conditional coverage for both multiclass and multilabel prediction problems.
arXiv Detail & Related papers (2020-04-21T17:45:38Z) - Heterogeneous Graph Neural Networks for Malicious Account Detection [64.0046412312209]
We present GEM, the first heterogeneous graph neural network approach for detecting malicious accounts.
We learn discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation.
Experiments show that our approach consistently yields promising results compared with competitive methods over time.
arXiv Detail & Related papers (2020-02-27T18:26:44Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.