Provable Adversarial Robustness for Group Equivariant Tasks: Graphs,
Point Clouds, Molecules, and More
- URL: http://arxiv.org/abs/2312.02708v2
- Date: Mon, 15 Jan 2024 10:07:36 GMT
- Title: Provable Adversarial Robustness for Group Equivariant Tasks: Graphs,
Point Clouds, Molecules, and More
- Authors: Jan Schuchardt, Yan Scholten, Stephan Günnemann
- Abstract summary: We propose a sound notion of adversarial robustness that accounts for task equivariance.
Certification methods are, however, unavailable for many models.
We derive the first architecture-specific graph edit distance certificates, i.e. sound robustness guarantees for isomorphism equivariant tasks like node classification.
- Score: 9.931513542441612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A machine learning model is traditionally considered robust if its prediction
remains (almost) constant under input perturbations with small norm. However,
real-world tasks like molecular property prediction or point cloud segmentation
have inherent equivariances, such as rotation or permutation equivariance. In
such tasks, even perturbations with large norm do not necessarily change an
input's semantic content. Furthermore, there are perturbations for which a
model's prediction explicitly needs to change. For the first time, we propose a
sound notion of adversarial robustness that accounts for task equivariance. We
then demonstrate that provable robustness can be achieved by (1) choosing a
model that matches the task's equivariances and (2) certifying traditional
adversarial robustness. Certification methods are, however, unavailable for
many models, such as those with continuous equivariances. We close this gap by
developing the framework of equivariance-preserving randomized smoothing, which
enables architecture-agnostic certification. We additionally derive the first
architecture-specific graph edit distance certificates, i.e. sound robustness
guarantees for isomorphism equivariant tasks like node classification. Overall,
a sound notion of robustness is an important prerequisite for future work at
the intersection of robust and geometric machine learning.
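For intuition, the following is a minimal sketch of how smoothing can be made equivariance-preserving in the spirit of the abstract: isotropic i.i.d. Gaussian noise commutes with rotations and permutations of a point cloud, so smoothing a rotation/permutation-invariant base classifier yields a smoothed classifier with the same invariance and a standard Cohen-et-al.-style certificate. The names (base_model, sigma, n_samples) and the certification recipe are illustrative assumptions, not the paper's exact construction.

# Hypothetical sketch: Gaussian randomized smoothing around a
# rotation/permutation-invariant point cloud classifier `base_model`.
# Isotropic i.i.d. Gaussian noise commutes with rotations and permutations,
# so the smoothed classifier inherits the base model's invariances.
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def smoothed_predict_and_certify(base_model, x, sigma=0.1, n_samples=1000,
                                 alpha=0.001, num_classes=10):
    """x: (num_points, 3) point cloud. Returns (predicted class, certified l2 radius)."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)  # isotropic Gaussian noise
        counts[base_model(noisy)] += 1                 # base_model returns a class index
    top = int(counts.argmax())
    # One-sided lower confidence bound on the top-class probability (Clopper-Pearson).
    p_lower, _ = proportion_confint(counts[top], n_samples, alpha=2 * alpha, method="beta")
    if p_lower <= 0.5:
        return top, 0.0                                # cannot certify
    return top, sigma * norm.ppf(p_lower)              # certified l2 radius (Cohen et al., 2019)

Because the noise distribution is itself invariant under the relevant group actions, transforming the input and then smoothing induces the same distribution over predictions as smoothing first, which is the sense in which the smoothing preserves the model's symmetries.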
Related papers
- Equivariant Adaptation of Large Pretrained Models [20.687626756753563]
We show that a canonicalization network can effectively be used to make a large pretrained network equivariant.
Using dataset-dependent priors to inform the canonicalization function, we are able to make large pretrained models equivariant while maintaining their performance.
arXiv Detail & Related papers (2023-10-02T21:21:28Z)
- Self-Supervised Learning for Group Equivariant Neural Networks [75.62232699377877]
Group equivariant neural networks are models whose structure is constrained to commute with transformations of the input.
We propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss.
Experiments on standard image recognition benchmarks demonstrate that the equivariant neural networks exploit the proposed self-supervised tasks.
arXiv Detail & Related papers (2023-03-08T08:11:26Z)
- Robustness of Machine Learning Models Beyond Adversarial Attacks [0.0]
We show that the widely used concept of adversarial robustness and closely related metrics are not necessarily valid metrics for determining the robustness of ML models.
We propose a flexible approach that models possible perturbations in input data individually for each application.
This is then combined with a probabilistic approach that computes the likelihood that a real-world perturbation will change a prediction.
arXiv Detail & Related papers (2022-04-21T12:09:49Z)
- Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Training or Architecture? How to Incorporate Invariance in Neural Networks [14.162739081163444]
We propose a method for provably invariant network architectures with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network (see the sketch after this list).
We analyze properties of such approaches, extend them to equivariant networks, and demonstrate their advantages in terms of robustness as well as computational efficiency in several numerical examples.
arXiv Detail & Related papers (2021-06-18T10:31:00Z)
- PAC Prediction Sets Under Covariate Shift [40.67733209235852]
Uncertainty is important when there are changes to the underlying data distribution.
Most existing uncertainty quantification algorithms break down in the presence of such shifts.
We propose a novel approach that addresses this challenge by constructing probably approximately correct (PAC) prediction sets.
arXiv Detail & Related papers (2021-06-17T23:28:42Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
- Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z)
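As a rough illustration of the 'undo the transformation' idea behind the canonicalization-based entries above (Equivariant Adaptation of Large Pretrained Models; Training or Architecture?), here is a minimal sketch under simplifying assumptions: a point cloud is centered and rotated into its principal-axis frame before being passed to an arbitrary pretrained classifier, making the composite prediction approximately invariant to rigid motions. The name `pretrained_model` is a placeholder, and real canonicalization networks typically learn this alignment rather than computing it in closed form.

# Hypothetical sketch: canonicalize, then classify with any pretrained model.
import numpy as np

def canonicalize(points):
    """points: (num_points, 3). Returns a translation/rotation-canonical copy."""
    centered = points - points.mean(axis=0)               # undo translation
    # Principal axes define a canonical orientation (up to sign/reflection ambiguities).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                                 # rotate into the PCA frame

def invariant_predict(pretrained_model, points):
    # The composite model is approximately invariant to translations and rotations,
    # without retraining or modifying `pretrained_model` itself.
    return pretrained_model(canonicalize(points))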