Shared Certificates for Neural Network Verification
- URL: http://arxiv.org/abs/2109.00542v4
- Date: Thu, 23 Nov 2023 11:23:26 GMT
- Title: Shared Certificates for Neural Network Verification
- Authors: Marc Fischer, Christian Sprecher, Dimitar I. Dimitrov, Gagandeep
Singh, Martin Vechev
- Abstract summary: Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation.
This process is repeated from scratch independently for each input.
We introduce a new method for reducing this verification cost without losing precision.
- Score: 8.777291205946444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing neural network verifiers compute a proof that each input is handled
correctly under a given perturbation by propagating a symbolic abstraction of
reachable values at each layer. This process is repeated from scratch
independently for each input (e.g., image) and perturbation (e.g., rotation),
leading to an expensive overall proof effort when handling an entire dataset.
In this work, we introduce a new method for reducing this verification cost
without losing precision based on a key insight that abstractions obtained at
intermediate layers for different inputs and perturbations can overlap or
contain each other. Leveraging our insight, we introduce the general concept of
shared certificates, enabling proof effort reuse across multiple inputs to
reduce overall verification costs. We perform an extensive experimental
evaluation to demonstrate the effectiveness of shared certificates in reducing
the verification cost on a range of datasets and attack specifications on image
classifiers including the popular patch and geometric perturbations. We release
our implementation at https://github.com/eth-sri/proof-sharing.
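To illustrate the key insight, the following is a minimal sketch of proof sharing using simple interval (box) abstractions, assuming for simplicity a network in which every layer is an affine transform followed by a ReLU. The helper names (propagate_box, Template, verify_with_sharing, check_spec) are illustrative assumptions and do not correspond to the API of the released implementation, which builds on more precise abstract domains.

```python
# Sketch of proof sharing with interval (box) abstractions.
# Assumption: every layer is affine (W, b) followed by ReLU.
import numpy as np

def propagate_box(lo, hi, layers):
    """Propagate an input box [lo, hi] through affine + ReLU layers."""
    for W, b in layers:
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        c = W @ center + b            # image of the box center under the affine map
        r = np.abs(W) @ radius        # worst-case half-width of each output coordinate
        lo = np.maximum(c - r, 0.0)   # ReLU clamps both bounds at zero
        hi = np.maximum(c + r, 0.0)
    return lo, hi

class Template:
    """Box at an intermediate layer k whose remaining (suffix) proof already succeeded."""
    def __init__(self, lo, hi, k):
        self.lo, self.hi, self.k = lo, hi, k

    def contains(self, lo, hi):
        # A box is contained iff it lies inside the template coordinate-wise.
        return bool(np.all(self.lo <= lo) and np.all(hi <= self.hi))

def verify_with_sharing(lo, hi, layers, templates, check_spec):
    """Certify one input box, reusing stored templates whenever possible."""
    for k, layer in enumerate(layers):
        lo, hi = propagate_box(lo, hi, [layer])
        for t in templates:
            if t.k == k and t.contains(lo, hi):
                return True           # proof reuse: skip the remaining layers
    return check_spec(lo, hi)         # fall back to checking the output property
```

If the box reached at some intermediate layer for a new input or perturbation is contained in a template whose suffix proof has already succeeded, the remaining layers need not be re-analyzed for that input, which is where the verification cost is saved.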
Related papers
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel smooth regularization for fine-tuning that rectifies these structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- QGait: Toward Accurate Quantization for Gait Recognition with Binarized Input [17.017127559393398]
We propose a differentiable soft quantizer, which better simulates the gradient of the round function during backpropagation.
This enables the network to learn from subtle input perturbations.
We further refine the training strategy to ensure convergence while simulating quantization errors.
arXiv Detail & Related papers (2024-05-22T17:34:18Z)
- Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks [71.78900818931847]
In tasks like node classification, image segmentation, and named-entity recognition, a classifier simultaneously outputs multiple predictions.
Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks.
We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation.
arXiv Detail & Related papers (2023-02-06T14:46:51Z)
- Deepfake Detection via Joint Unsupervised Reconstruction and Supervised Classification [25.84902508816679]
We introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously.
This method shares the information learned by one task with the other, an interaction that existing works rarely consider.
Our method achieves state-of-the-art performance on three commonly-used datasets.
arXiv Detail & Related papers (2022-11-24T05:44:26Z)
- Localized Randomized Smoothing for Collective Robustness Certification [60.83383487495282]
We propose a more general collective robustness certificate for all types of models.
We show that this approach is beneficial for the larger class of softly local models.
The certificate is based on our novel localized randomized smoothing approach.
arXiv Detail & Related papers (2022-10-28T14:10:24Z)
- ORF-Net: Deep Omni-supervised Rib Fracture Detection from Chest CT Scans [47.7670302148812]
Radiologists need to investigate and annotate rib fractures on a slice-by-slice basis.
We propose a novel omni-supervised object detection network, which can exploit multiple different forms of annotated data.
Our proposed method outperforms other state-of-the-art approaches consistently.
arXiv Detail & Related papers (2022-07-05T07:06:57Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Co-mining: Self-Supervised Learning for Sparsely Annotated Object Detection [29.683119976550007]
We propose a simple but effective mechanism, called Co-mining, for sparsely annotated object detection.
In our Co-mining, two branches of a Siamese network predict the pseudo-label sets for each other.
Experiments are performed on the MS COCO dataset with three different sparsely annotated settings.
arXiv Detail & Related papers (2020-12-03T14:23:43Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.