Localized Randomized Smoothing for Collective Robustness Certification
- URL: http://arxiv.org/abs/2210.16140v3
- Date: Mon, 26 Feb 2024 14:20:49 GMT
- Title: Localized Randomized Smoothing for Collective Robustness Certification
- Authors: Jan Schuchardt, Tom Wollschläger, Aleksandar Bojchevski, Stephan Günnemann
- Abstract summary: We propose a more general collective robustness certificate for all types of models.
We show that this approach is beneficial for the larger class of softly local models.
The certificate is based on our novel localized randomized smoothing approach.
- Score: 60.83383487495282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models for image segmentation, node classification and many other tasks map a
single input to multiple labels. By perturbing this single shared input (e.g.
the image) an adversary can manipulate several predictions (e.g. misclassify
several pixels). Collective robustness certification is the task of provably
bounding the number of robust predictions under this threat model. The only
dedicated method that goes beyond certifying each output independently is
limited to strictly local models, where each prediction is associated with a
small receptive field. We propose a more general collective robustness
certificate for all types of models. We further show that this approach is
beneficial for the larger class of softly local models, where each output is
dependent on the entire input but assigns different levels of importance to
different input regions (e.g. based on their proximity in the image). The
certificate is based on our novel localized randomized smoothing approach,
where the random perturbation strength for different input regions is
proportional to their importance for the outputs. Localized smoothing
Pareto-dominates existing certificates on both image segmentation and node
classification tasks, simultaneously offering higher accuracy and stronger
certificates.
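The mechanism described in the abstract can be illustrated with a minimal sketch: smooth each prediction by majority vote over inputs perturbed with anisotropic Gaussian noise, where the per-region noise scale `sigmas` is chosen inversely to each region's importance for the output (the function name, `model` interface, and `sigmas` parameter below are our illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def localized_smoothing_predict(model, x, sigmas, n_samples=100, rng=None):
    """Majority-vote prediction under anisotropic Gaussian noise.

    `sigmas` has the same shape as `x`: a smaller entry means the
    corresponding input region is important for this output and is
    perturbed less; a larger entry means more noise is tolerated there.
    """
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        # Per-region noise scale: NumPy broadcasts `sigmas` over x's shape.
        noisy = x + rng.normal(0.0, sigmas)
        label = model(noisy)
        votes[label] = votes.get(label, 0) + 1
    # The smoothed classifier returns the most frequent label.
    return max(votes, key=votes.get)
```

In the actual certificate, the vote counts would additionally be turned into per-output robustness guarantees; this sketch only shows the anisotropic smoothing step that distinguishes localized smoothing from uniform-noise randomized smoothing.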
Related papers
- Deep Domain Isolation and Sample Clustered Federated Learning for Semantic Segmentation [2.515027627030043]
In this paper, we explore for the first time the effect of covariate shifts between participants' data in 2D segmentation tasks.
We develop Deep Domain Isolation (DDI) to isolate image domains directly in the gradient space of the model.
We leverage this clustering algorithm through a Sample Clustered Federated Learning (SCFL) framework.
arXiv Detail & Related papers (2024-10-04T12:43:07Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in decision-critical systems like healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
- Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks [71.78900818931847]
In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions.
Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks.
We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation.
arXiv Detail & Related papers (2023-02-06T14:46:51Z)
- SpanProto: A Two-stage Span-based Prototypical Network for Few-shot Named Entity Recognition [45.012327072558975]
Few-shot Named Entity Recognition (NER) aims to identify named entities with very little annotated data.
We propose a seminal span-based prototypical network (SpanProto) that tackles few-shot NER via a two-stage approach.
In the span extraction stage, we transform the sequential tags into a global boundary matrix, enabling the model to focus on the explicit boundary information.
For mention classification, we leverage prototypical learning to capture the semantic representations for each labeled span and make the model better adapt to novel-class entities.
arXiv Detail & Related papers (2022-10-17T12:59:33Z)
- Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization [20.4183741427867]
Semi-supervised semantic segmentation requires the model to propagate the label information from limited annotated images to unlabeled ones.
A challenge for such a per-pixel prediction task is the large intra-class variation.
We propose a novel approach to regularize the distribution of within-class features to ease label propagation difficulty.
arXiv Detail & Related papers (2022-10-10T01:38:01Z)
- Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity [59.1823948436411]
We propose a novel approach for mask proposals, Generic Grouping Networks (GGNs)
Our approach combines a local measure of pixel affinity with instance-level mask supervision, producing a training regimen designed to make the model as generic as the data diversity allows.
arXiv Detail & Related papers (2022-04-12T22:37:49Z)
- Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.