Understanding Certified Training with Interval Bound Propagation
- URL: http://arxiv.org/abs/2306.10426v2
- Date: Tue, 27 Feb 2024 20:14:47 GMT
- Title: Understanding Certified Training with Interval Bound Propagation
- Authors: Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev
- Abstract summary: Training certifiably robust neural networks is becoming more relevant.
We show that training methods based on the imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounding methods.
This hints at the existence of new training methods that do not induce the strong regularization required for tight IBP bounds.
- Score: 6.688598900034783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As robustness verification methods are becoming more precise, training
certifiably robust neural networks is becoming ever more relevant. To this end,
certified training methods compute and then optimize an upper bound on the
worst-case loss over a robustness specification. Curiously, training methods
based on the imprecise interval bound propagation (IBP) consistently outperform
those leveraging more precise bounding methods. Still, we lack an understanding
of the mechanisms making IBP so successful.
In this work, we thoroughly investigate these mechanisms by leveraging a
novel metric measuring the tightness of IBP bounds. We first show theoretically
that, for deep linear models, tightness decreases with width and depth at
initialization, but improves with IBP training, given sufficient network width.
We then derive sufficient and necessary conditions on weight matrices for IBP
bounds to become exact and demonstrate that these impose strong regularization,
explaining the empirically observed trade-off between robustness and accuracy
in certified training.
Our extensive experimental evaluation validates our theoretical predictions
for ReLU networks, including that wider networks improve performance, yielding
state-of-the-art results. Interestingly, we observe that while all IBP-based
training methods lead to high tightness, this is neither sufficient nor
necessary to achieve high certifiable robustness. This hints at the existence
of new training methods that do not induce the strong regularization required
for tight IBP bounds, leading to improved robustness and standard accuracy.
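To make the mechanism concrete, the following is a minimal sketch of IBP in PyTorch: it propagates interval bounds through linear and ReLU layers under an l_inf perturbation of radius eps, and turns the resulting logit bounds into an upper bound on the worst-case loss that certified training minimizes. This illustrates the general technique only, not the authors' implementation; the layer sizes, eps, and function names are hypothetical.

```python
# Minimal sketch of interval bound propagation (IBP) through a small
# fully-connected ReLU network under an l_inf perturbation of radius eps.
# Illustrative only: layer sizes, eps, and function names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_linear(lower, upper, layer):
    """Propagate elementwise interval bounds [lower, upper] through nn.Linear."""
    center = (upper + lower) / 2                   # interval midpoint
    radius = (upper - lower) / 2                   # interval half-width
    new_center = layer(center)                     # W @ center + b
    new_radius = radius @ layer.weight.abs().t()   # |W| propagates the radius
    return new_center - new_radius, new_center + new_radius

def ibp_bounds(x, eps, layers):
    """Bounds on the logits for all inputs within an l_inf ball of radius eps."""
    lower, upper = x - eps, x + eps
    for i, layer in enumerate(layers):
        lower, upper = ibp_linear(lower, upper, layer)
        if i < len(layers) - 1:                    # ReLU is monotone, so it maps bounds to bounds
            lower, upper = lower.clamp(min=0), upper.clamp(min=0)
    return lower, upper

def robust_ce_loss(lower, upper, target):
    """Upper bound on the worst-case cross-entropy: use the lower bound for the
    true-class logit and the upper bound for every other logit."""
    worst = upper.clone()
    worst.scatter_(1, target.view(-1, 1), lower.gather(1, target.view(-1, 1)))
    return F.cross_entropy(worst, target)

# Toy usage: certified training minimizes this bound instead of the clean loss.
layers = [nn.Linear(784, 128), nn.Linear(128, 10)]
x, y = torch.rand(8, 784), torch.randint(0, 10, (8,))
lo, hi = ibp_bounds(x, eps=8 / 255, layers=layers)
loss = robust_ce_loss(lo, hi, y)
```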
Related papers
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- CARE: Certifiably Robust Learning with Reasoning via Variational Inference [26.210129662748862]
We propose a certifiably robust learning with reasoning pipeline (CARE).
CARE achieves significantly higher certified robustness compared with the state-of-the-art baselines.
We additionally conducted different ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of different knowledge integration.
arXiv Detail & Related papers (2022-09-12T07:15:52Z)
- IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound [85.6899802468343]
We present IBP-R, a novel verified training algorithm that is both simple and effective.
We also present UPB, a novel branching strategy based on $\beta$-CROWN bounds that reduces the cost of state-of-the-art branching algorithms.
arXiv Detail & Related papers (2022-06-29T17:13:25Z)
- On the Convergence of Certified Robust Training with Interval Bound Propagation [147.77638840942447]
We present a theoretical analysis on the convergence of IBP training.
We show that when using IBP training to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent can linearly converge to zero robust training error.
arXiv Detail & Related papers (2022-03-16T21:49:13Z)
- Towards Scaling Difference Target Propagation by Learning Backprop Targets [64.90165892557776]
Difference Target Propagation (DTP) is a biologically plausible learning algorithm closely related to Gauss-Newton (GN) optimization.
We propose a novel feedback weight training scheme that ensures both that DTP approximates BP and that layer-wise feedback weight training can be restored.
We report the best performance ever achieved by DTP on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2022-01-31T18:20:43Z)
- Fast Certified Robust Training via Better Initialization and Shorter Warmup [95.81628508228623]
We propose a new IBP initialization and principled regularizers during the warmup stage to stabilize certified bounds.
We find that batch normalization (BN) is a crucial architectural element to build best-performing networks for certified training.
arXiv Detail & Related papers (2021-03-31T17:58:58Z)
- The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms [10.86298377998459]
We propose an efficient framework for predictive uncertainty estimation in NNs deployed on embedded edge systems.
The framework is built from the ground up to provide predictive uncertainty based only on one forward pass.
Our approach not only obtains robust and accurate uncertainty estimations but also outperforms state-of-the-art methods in terms of systems performance.
arXiv Detail & Related papers (2021-02-11T11:44:32Z)