DualApp: Tight Over-Approximation for Neural Network Robustness Verification via Under-Approximation
- URL: http://arxiv.org/abs/2211.11186v1
- Date: Mon, 21 Nov 2022 05:09:34 GMT
- Title: DualApp: Tight Over-Approximation for Neural Network Robustness Verification via Under-Approximation
- Authors: Yiting Wu, Zhaodi Zhang, Zhiyi Xue, Si Liu, Min Zhang
- Abstract summary: We propose a novel under-approximation-guided approach for defining tight over-approximations, together with two complementary under-approximation algorithms.
The overestimated domain guarantees soundness, while the underestimated one guides tightness.
We implement our approach into a tool called DualApp and extensively evaluate it on a benchmark of 84 collected and trained neural networks.
- Score: 17.924507519230424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robustness of neural networks is fundamental to the hosting system's
reliability and security. Formal verification has been proven to be effective
in providing provable robustness guarantees. To improve the verification
scalability, over-approximating the non-linear activation functions in neural
networks by linear constraints is widely adopted, which transforms the
verification problem into an efficiently solvable linear programming problem.
As over-approximations inevitably introduce overestimation, many efforts have
been dedicated to defining the tightest possible approximations. Recent studies
have, however, shown that none of the existing so-called tightest approximations is
consistently tighter than the others. In this paper we identify and report a crucial
factor in defining tight approximations, namely the approximation domains of
activation functions. We observe that existing approaches rely only on
overestimated domains, while an approximation that is tight on an overestimated
domain may not be tight on the activation function's actual domain. We propose a
novel under-approximation-guided approach, called dual-approximation, for defining
tight over-approximations, together with two complementary under-approximation
algorithms based on sampling and gradient descent. The overestimated domain
guarantees soundness, while the underestimated one guides tightness. We implement our
approach into a tool called DualApp and extensively evaluate it on a
comprehensive benchmark of 84 collected and trained neural networks with
different architectures. The experimental results show that DualApp outperforms
the state-of-the-art approximation-based approaches, with up to a 71.22%
improvement in the verification results.
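To make the core idea concrete, the sketch below bounds a sigmoid activation on an interval with two parallel lines (a sound over-approximation that could feed a linear program) and uses simple sampling as an under-approximation of the function's actual output range. This is a minimal illustration, not DualApp's actual algorithm; the function names and the parallel-line bounding scheme are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_over_approximation(l, u):
    """Sound linear bounds k*x + b_lo <= sigmoid(x) <= k*x + b_up on [l, u].

    Uses the chord slope for both lines and shifts the intercepts so the
    bounds hold on the whole interval (a simple parallel-line scheme,
    chosen for illustration only).
    """
    k = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord slope
    xs = np.linspace(l, u, 1001)
    gap = sigmoid(xs) - k * xs                # vertical offset of sigmoid from the chord
    return k, gap.min(), gap.max()            # tightest parallel intercepts

def sampled_under_approximation(f, l, u, n=1000):
    """Estimate the actual output range of f on [l, u] by sampling.

    An underestimated range like this can never prove soundness, but it
    indicates how tight a sound over-approximation can possibly be.
    """
    ys = f(np.linspace(l, u, n))
    return ys.min(), ys.max()
```

The gap between the sampled (under-approximated) range and the range implied by the linear bounds gives a rough measure of how much precision the over-approximation loses on that interval.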
Related papers
- Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks [23.01933325606068]
Existing complete verification techniques offer provable guarantees for all robustness queries but struggle to scale beyond small neural networks.
We propose a novel parameter search method to improve the quality of these linear approximations.
Specifically, we show that a simple search method, carefully adapted to the given verification problem through state-of-the-art algorithm configuration techniques, improves the global lower bound by 25% on average over the current state of the art.
arXiv Detail & Related papers (2024-06-14T16:16:26Z)
- Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z)
- Enabling Uncertainty Estimation in Iterative Neural Networks [49.56171792062104]
We develop an approach to uncertainty estimation that provides state-of-the-art estimates at a much lower computational cost than techniques like Ensembles.
We demonstrate its practical value by embedding it in two application domains: road detection in aerial images and the estimation of aerodynamic properties of 2D and 3D shapes.
arXiv Detail & Related papers (2024-03-25T13:06:31Z)
- A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation [17.924507519230424]
We propose a novel dual-approximation approach to tighten over-approximations, leveraging an activation function's underestimated domain to define tight approximation bounds.
Our results show that DualApp significantly outperforms the state-of-the-art approaches with 100% - 1000% improvement on the verified robustness ratio and 10.64% on average (up to 66.53%) on the certified lower bound.
arXiv Detail & Related papers (2023-05-26T14:58:30Z)
- Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks [22.239149433169747]
The robustness of deep neural networks is crucial to modern AI-enabled systems.
Due to their non-linearity, Sigmoid-like neural networks have been adopted in a wide range of applications.
arXiv Detail & Related papers (2022-08-21T12:07:36Z)
- Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound [2.896192909215469]
We present a novel complete verifier which combines the strengths of both paradigms.
It uses multi-neuron relaxations to drastically reduce the number of subproblems generated during the BaB process.
An extensive evaluation demonstrates that our verifier achieves a new state-of-the-art on both established benchmarks as well as networks with significantly higher accuracy than previously considered.
arXiv Detail & Related papers (2022-04-30T13:12:33Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs [2.1320960069210484]
Neural networks are commonly used in safety-critical real-world applications.
Proving that either no such adversarial examples exist, or providing a concrete instance, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
arXiv Detail & Related papers (2020-06-16T10:00:33Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning applications such as deep AUC maximization.
arXiv Detail & Related papers (2020-06-12T00:32:21Z)