Structure-Aware Robustness Certificates for Graph Classification
- URL: http://arxiv.org/abs/2306.11915v2
- Date: Sat, 24 Jun 2023 21:31:21 GMT
- Title: Structure-Aware Robustness Certificates for Graph Classification
- Authors: Pierre Osselin, Henry Kenlay and Xiaowen Dong
- Abstract summary: Certifying the robustness of a graph-based machine learning model poses a critical challenge for safety.
We develop a randomised smoothing method based on adding an anisotropic noise distribution to the input graph structure.
We show that our process generates structure-aware certificates for our classifiers, whereby the magnitude of robustness certificates can vary across different pre-defined structures of the graph.
- Score: 13.643197515573029
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Certifying the robustness of a graph-based machine learning model poses a
critical challenge for safety. Current robustness certificates for graph
classifiers guarantee output invariance with respect to the total number of
node pair flips (edge addition or edge deletion), which amounts to an $l_{0}$
ball centred on the adjacency matrix. Although theoretically attractive, this
type of isotropic structural noise can be too restrictive in practical
scenarios where some node pairs are more critical than others in determining
the classifier's output. The certificate, in this case, gives a pessimistic
depiction of the robustness of the graph model. To tackle this issue, we
develop a randomised smoothing method based on adding an anisotropic noise
distribution to the input graph structure. We show that our process generates
structure-aware certificates for our classifiers, whereby the magnitude of
robustness certificates can vary across different pre-defined structures of the
graph. We demonstrate the benefits of these certificates in both synthetic and
real-world experiments.
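The anisotropic smoothing idea from the abstract can be illustrated with a toy Monte-Carlo sketch (the function names, the stub classifier, and the specific flip probabilities below are illustrative assumptions, not the authors' code): each node pair is flipped independently with its own probability, so structurally important edges can be perturbed less often than peripheral ones, and the smoothed prediction is the majority vote over noisy samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(adj):
    # Stand-in for a trained graph classifier: predicts class 1
    # when the graph has at least 3 edges, else class 0.
    return int(adj.sum() // 2 >= 3)

def smoothed_predict(adj, flip_probs, n_samples=1000):
    """Monte-Carlo estimate of the smoothed classifier.

    `flip_probs` is a symmetric matrix of per-node-pair flip
    probabilities; unequal entries make the noise anisotropic,
    e.g. low probabilities on node pairs deemed critical.
    """
    votes = np.zeros(2, dtype=int)
    n = adj.shape[0]
    iu = np.triu_indices(n, k=1)          # upper-triangular node pairs
    for _ in range(n_samples):
        flips = rng.random(len(iu[0])) < flip_probs[iu]
        noisy = adj.copy()
        noisy[iu] = np.where(flips, 1 - adj[iu], adj[iu])
        noisy[(iu[1], iu[0])] = noisy[iu]  # keep adjacency symmetric
        votes[toy_classifier(noisy)] += 1
    return int(votes.argmax()), votes.max() / n_samples

# 4-node path graph: 0 - 1 - 2 - 3
adj = np.zeros((4, 4), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1

# Anisotropic noise: flip most pairs often, but the "core"
# edge (1, 2) only rarely.
flip_probs = np.full((4, 4), 0.3)
flip_probs[1, 2] = flip_probs[2, 1] = 0.01

label, p_hat = smoothed_predict(adj, flip_probs)
print(label, round(p_hat, 2))
```

The vote share `p_hat` is the quantity from which a randomized-smoothing certificate would be derived; with anisotropic flip probabilities, the resulting certified region is no longer a uniform $l_0$ ball around the adjacency matrix.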
Related papers
- Sparse Bayesian Message Passing under Structural Uncertainty [12.402205032785497]
Semi-supervised learning on real-world graphs is frequently challenged by heterophily, where the observed graph is unreliable or label-disassortative. Many existing graph neural networks either rely on a fixed adjacency structure or attempt to handle structural noise through regularization. We explicitly capture structural uncertainty by modeling a posterior distribution over signed adjacency matrices, allowing each edge to be positive, negative, or absent.
arXiv Detail & Related papers (2026-01-03T15:16:12Z) - Less is More: Towards Simple Graph Contrastive Learning [28.21633108265473]
Graph Contrastive Learning (GCL) has shown strong promise for unsupervised graph representation learning, yet its effectiveness on heterophilic graphs remains limited. Most existing methods rely on complex augmentation schemes, intricate encoders, or negative sampling. We uncover a simple yet effective principle for GCL: mitigating node feature noise by aggregating it with structural features from the graph topology.
arXiv Detail & Related papers (2025-09-30T03:56:50Z) - Unfolding Tensors to Identify the Graph in Discrete Latent Bipartite Graphical Models [1.7132914341329848]
We use a tensor unfolding technique to prove a new identifiability result for discrete bipartite graphical models.
Our result has useful implications for these models' trustworthy applications in scientific disciplines and interpretable machine learning.
arXiv Detail & Related papers (2025-01-18T23:08:25Z) - Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
We introduce an exact certification method for label flipping in Graph Neural Networks (GNNs).
We apply our method to certify a broad range of GNN architectures in node classification tasks.
Our work presents the first exact certificate to a poisoning attack ever derived for neural networks.
arXiv Detail & Related papers (2024-11-30T17:05:12Z) - ALEX: Towards Effective Graph Transfer Learning with Noisy Labels [11.115297917940829]
We introduce a novel technique termed Balance Alignment and Information-aware Examination (ALEX) to address the problem of graph transfer learning.
ALEX first employs singular value decomposition to generate different views with crucial structural semantics, which help provide robust node representations.
Building on this foundation, an adversarial domain discriminator is incorporated for the implicit domain alignment of complex multi-modal distributions.
arXiv Detail & Related papers (2023-09-26T04:59:49Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Robust Model Selection of Gaussian Graphical Models [16.933125281564163]
Noise-corrupted samples present significant challenges in graphical model selection.
We propose an algorithm which provably recovers the underlying graph up to the identified ambiguity.
This information is useful in a range of real-world problems, including power grids, social networks, protein-protein interactions, and neural structures.
arXiv Detail & Related papers (2022-11-10T16:50:50Z) - Localized Randomized Smoothing for Collective Robustness Certification [60.83383487495282]
We propose a more general collective robustness certificate for all types of models.
We show that this approach is beneficial for the larger class of softly local models.
The certificate is based on our novel localized randomized smoothing approach.
arXiv Detail & Related papers (2022-10-28T14:10:24Z) - Smoothed Embeddings for Certified Few-Shot Learning [63.68667303948808]
We extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
Our results are confirmed by experiments on different datasets.
arXiv Detail & Related papers (2022-02-02T18:19:04Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - An Interpretable Graph Generative Model with Heterophily [38.59200985962146]
We propose the first edge-independent graph generative model that is expressive enough to capture heterophily.
Our experiments demonstrate the effectiveness of our model for a variety of important application tasks.
arXiv Detail & Related papers (2021-11-04T17:34:39Z) - Adversarial Robustness of Supervised Sparse Coding [34.94566482399662]
We consider a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate.
We focus on the hypothesis class obtained by combining a sparsity-promoting encoder with a linear classifier.
We provide a robustness certificate for end-to-end classification.
arXiv Detail & Related papers (2020-10-22T22:05:21Z) - Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More [85.52940587312256]
We propose a model-agnostic certificate based on the randomized smoothing framework which subsumes earlier work and is tight, efficient, and sparsity-aware.
We show the effectiveness of our approach on a wide variety of models, datasets, and tasks -- specifically highlighting its use for Graph Neural Networks.
arXiv Detail & Related papers (2020-08-29T10:09:02Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.