IBP Regularization for Verified Adversarial Robustness via
Branch-and-Bound
- URL: http://arxiv.org/abs/2206.14772v2
- Date: Wed, 31 May 2023 09:35:21 GMT
- Title: IBP Regularization for Verified Adversarial Robustness via
Branch-and-Bound
- Authors: Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan
Kumar, Robert Stanforth
- Abstract summary: We present IBP-R, a novel verified training algorithm that is both simple and effective.
We also present UPB, a novel branching strategy based on $\beta$-CROWN that reduces the cost of state-of-the-art branching algorithms.
- Score: 85.6899802468343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have tried to increase the verifiability of adversarially
trained networks by running the attacks over domains larger than the original
perturbations and adding various regularization terms to the objective.
However, these algorithms either underperform or require complex and expensive
stage-wise training procedures, hindering their practical applicability. We
present IBP-R, a novel verified training algorithm that is both simple and
effective. IBP-R induces network verifiability by coupling adversarial attacks
on enlarged domains with a regularization term, based on inexpensive interval
bound propagation, that minimizes the gap between the non-convex verification
problem and its approximations. By leveraging recent branch-and-bound
frameworks, we show that IBP-R obtains state-of-the-art verified
robustness-accuracy trade-offs for small perturbations on CIFAR-10 while
training significantly faster than relevant previous work. Additionally, we
present UPB, a novel branching strategy that, relying on a simple heuristic
based on $\beta$-CROWN, reduces the cost of state-of-the-art branching
algorithms while yielding splits of comparable quality.
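To make the regularization idea concrete, the sketch below propagates interval bounds through a small feed-forward network and penalizes the width of the output interval as a cheap surrogate for the gap between the verification problem and its relaxation. This is a minimal illustration under assumed details: the helper names (ibp_affine, ibp_bounds, ibp_regularizer) and the exact form of the penalty are not taken from the IBP-R paper or its released code.
```python
# Minimal IBP sketch (assumed details, not the authors' released IBP-R code):
# propagate interval bounds through affine/ReLU layers and penalize the width
# of the output interval as a cheap surrogate for the relaxation gap.
import torch
import torch.nn as nn

def ibp_affine(lb, ub, weight, bias):
    """Propagate the box [lb, ub] through x -> x @ weight.T + bias."""
    center, radius = (ub + lb) / 2, (ub - lb) / 2
    new_center = center @ weight.t() + bias
    new_radius = radius @ weight.abs().t()
    return new_center - new_radius, new_center + new_radius

def ibp_bounds(layers, x, eps):
    """Run IBP through an alternating [Linear, ReLU, ...] stack for an l_inf ball."""
    lb, ub = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            lb, ub = ibp_affine(lb, ub, layer.weight, layer.bias)
        elif isinstance(layer, nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)  # ReLU is monotone
    return lb, ub

def ibp_regularizer(layers, x, eps):
    """Mean width of the IBP output interval; smaller width = tighter relaxation."""
    lb, ub = ibp_bounds(layers, x, eps)
    return (ub - lb).mean()

# Usage sketch: the penalty would be added to an adversarial-training loss.
net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(32, 784)
reg = ibp_regularizer(list(net), x, eps=2 / 255)
```
In the actual algorithm the penalty targets the gap between the non-convex verification problem and its relaxations, and the adversarial attack is run over an enlarged domain; the interval-width penalty above only illustrates the inexpensive IBP machinery involved.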
Related papers
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Understanding Certified Training with Interval Bound Propagation [6.688598900034783]
Training certifiably robust neural networks is becoming more relevant.
We show that training methods based on imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounding methods.
This hints at the existence of new training methods that do not induce the strong regularization required for tight IBP bounds.
arXiv Detail & Related papers (2023-06-17T21:13:30Z) - Improving Robust Generalization by Direct PAC-Bayesian Bound
Minimization [27.31806334022094]
Recent research has shown an overfitting-like phenomenon in which models trained against adversarial attacks exhibit higher robustness on the training set compared to the test set.
In this paper we consider a different form of the robust PAC-Bayesian bound and directly minimize it with respect to the model posterior.
We evaluate our TrH regularization approach over CIFAR-10/100 and ImageNet using Vision Transformers (ViT) and compare against baseline adversarial robustness algorithms.
arXiv Detail & Related papers (2022-11-22T23:12:00Z) - On the Convergence of Certified Robust Training with Interval Bound
Propagation [147.77638840942447]
We present a theoretical analysis on the convergence of IBP training.
We show that when using IBP training to train a randomly initialized two-layer ReLU network with logistic loss, gradient descent can linearly converge to zero robust training error.
arXiv Detail & Related papers (2022-03-16T21:49:13Z) - Back to Basics: Efficient Network Compression via IMP [22.586474627159287]
Iterative Magnitude Pruning (IMP) is one of the most established approaches for network pruning.
It is often argued that IMP reaches suboptimal states because it does not incorporate sparsification into the training phase.
We find that IMP with SLR for retraining can outperform state-of-the-art pruning-during-training approaches.
arXiv Detail & Related papers (2021-11-01T11:23:44Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian
Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z) - Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality [131.45028999325797]
We develop a doubly robust off-policy actor-critic algorithm (DR-Off-PAC) for discounted MDPs.
DR-Off-PAC adopts a single timescale structure, in which both actor and critics are updated simultaneously with constant stepsize.
We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy.
arXiv Detail & Related papers (2021-02-23T18:56:13Z) - Fast and Complete: Enabling Complete Neural Network Verification with
Rapid and Massively Parallel Incomplete Verifiers [112.23981192818721]
We propose to use backward mode linear relaxation based perturbation analysis (LiRPA) to replace Linear Programming (LP) during the BaB process.
Unlike LP, LiRPA applied naively can produce much weaker bounds and cannot directly handle the constraints introduced when sub-domains are split.
We demonstrate an order of magnitude speedup compared to existing LP-based approaches; a generic BaB loop of the kind these bounds plug into is sketched after this list.
arXiv Detail & Related papers (2020-11-27T16:42:12Z)
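Several of the papers above, and the main paper's experiments, plug incomplete bounds into a branch-and-bound loop. The sketch below shows a generic such loop; lower_bound and split are hypothetical callbacks standing in for an incomplete verifier (IBP, LiRPA/$\beta$-CROWN, ...) and a branching heuristic (e.g. splitting an unstable ReLU). It is an illustration only, not BaDNB, UPB, or any released verifier.
```python
# Generic branch-and-bound verification loop (illustrative sketch only).
# The property "objective > 0 on the domain" is proved once the weakest
# remaining sound lower bound is positive.
import heapq

def branch_and_bound(root, lower_bound, split, max_domains=10_000):
    """`lower_bound(domain)` returns a sound lower bound on the objective;
    `split(domain)` partitions a domain, e.g. by fixing one unstable ReLU."""
    heap = [(lower_bound(root), 0, root)]   # min-heap keyed by the bound
    visited = 1                              # also serves as a heap tie-breaker
    while heap:
        bound, _, domain = heapq.heappop(heap)
        if bound > 0:
            return "verified"                # all remaining bounds are >= this one
        if visited >= max_domains:
            return "timeout"
        children = split(domain)
        if not children:
            return "undecided"               # nothing left to split on this domain
        for child in children:
            heapq.heappush(heap, (lower_bound(child), visited, child))
            visited += 1
    return "verified"
```
Keying the heap on the current bound pops the hardest sub-domain first, a common choice; progress then hinges on the bounding step tightening as more neurons are split, which is what branching strategies such as UPB aim to do cheaply.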