Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing
- URL: http://arxiv.org/abs/2006.03712v3
- Date: Thu, 22 Oct 2020 23:02:14 GMT
- Title: Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing
- Authors: Vishaal Krishnan, Abed AlRahman Al Makdah, Fabio Pasqualetti
- Abstract summary: We formulate the adversarially robust learning problem as one of loss minimization with a Lipschitz constraint.
We show that the saddle point of the associated Lagrangian is characterized by a Poisson equation with weighted Laplace operator.
We design a provably robust training scheme using graph-based discretization of the input space and a primal-dual algorithm to converge to the Lagrangian's saddle point.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we propose a graph-based learning framework to train models with
provable robustness to adversarial perturbations. In contrast to
regularization-based approaches, we formulate the adversarially robust learning
problem as one of loss minimization with a Lipschitz constraint, and show that
the saddle point of the associated Lagrangian is characterized by a Poisson
equation with weighted Laplace operator. Further, the weighting for the Laplace
operator is given by the Lagrange multiplier for the Lipschitz constraint,
which modulates the sensitivity of the minimizer to perturbations. We then
design a provably robust training scheme using graph-based discretization of
the input space and a primal-dual algorithm to converge to the Lagrangian's
saddle point. Our analysis establishes a novel connection between elliptic
operators with constraint-enforced weighting and adversarial learning. We also
study the complementary problem of improving the robustness of minimizers with
a margin on their loss, formulated as a loss-constrained minimization problem
of the Lipschitz constant. We propose a technique to obtain robustified
minimizers, and evaluate fundamental Lipschitz lower bounds by approaching
Lipschitz constant minimization via a sequence of gradient $p$-norm
minimization problems. Ultimately, our results show that, for a desired nominal
performance, there exists a fundamental lower bound on the sensitivity to
adversarial perturbations that depends only on the loss function and the data
distribution, and that improvements in robustness beyond this bound can only be
made at the expense of nominal performance. Our training schemes provably
achieve these bounds both under constraints on performance and robustness.
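The formulation in the abstract can be made concrete with a short sketch. The display below states a Lipschitz-constrained learning problem and its Lagrangian in assumed notation (f for the learned function, \ell for the loss, \rho for the data distribution, K for the Lipschitz budget, \lambda for the multiplier); it is one reading of the abstract, not an excerpt from the paper.

```latex
% Hedged sketch of the constrained formulation described in the abstract;
% the symbols f, \ell, \rho, K, \lambda are assumed notation, not quoted from the paper.
\[
  \min_{f}\; \mathbb{E}_{(x,y)\sim\rho}\!\left[\ell\bigl(f(x),y\bigr)\right]
  \quad\text{s.t.}\quad \|\nabla f(x)\|\le K \ \text{ for (almost) all } x,
\]
\[
  \mathcal{L}(f,\lambda)\;=\;\mathbb{E}_{(x,y)\sim\rho}\!\left[\ell\bigl(f(x),y\bigr)\right]
  \;+\;\int \lambda(x)\,\bigl(\|\nabla f(x)\|^{2}-K^{2}\bigr)\,dx,
  \qquad \lambda\ge 0 .
\]
% Setting the first variation in f to zero gives an equation of the form
%   -\,\nabla\!\cdot\!\bigl(\lambda\,\nabla f\bigr) \;\propto\; \rho\,\partial_{f}\ell ,
% i.e. a Poisson-type equation whose weighted Laplace operator carries the
% multiplier \lambda as its weight, matching the saddle-point characterization above.
```

The abstract also describes a training scheme that discretizes the input space with a graph and runs a primal-dual iteration to the saddle point. The following is a minimal numerical sketch of that idea: samples are connected by a k-nearest-neighbour graph, each edge carries a multiplier enforcing a discrete Lipschitz constraint, and primal (descent) and dual (ascent) steps alternate. The helper names, the squared-error loss, the step sizes, and the k-NN construction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_graph(X, k=8):
    """Symmetric k-nearest-neighbour edge list over the sample points X of shape (n, d)."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :k]
    edges = {(min(i, j), max(i, j)) for i in range(len(X)) for j in nbrs[i]}
    return np.array(sorted(edges)), dist

def primal_dual_train(X, y, K, steps=500, eta_f=1e-2, eta_lam=1e-2):
    """Hedged sketch of a primal-dual scheme on a graph discretization.

    f   : values of the learned function at the sample points (primal variable)
    lam : per-edge multipliers enforcing |f_i - f_j| <= K * dist(i, j)
    The squared-error loss is an illustrative stand-in for the paper's loss.
    """
    edges, dist = knn_graph(X)
    i, j = edges[:, 0], edges[:, 1]
    dij = dist[i, j]
    f = np.zeros(len(X))
    lam = np.zeros(len(edges))
    for _ in range(steps):
        # Primal descent: loss gradient plus the multiplier-weighted edge term,
        # which is a graph Laplacian of f with edge weights lam.
        diff = f[i] - f[j]
        grad_f = f - y
        np.add.at(grad_f, i, 2.0 * lam * diff)
        np.add.at(grad_f, j, -2.0 * lam * diff)
        f -= eta_f * grad_f
        # Dual ascent (projected onto lam >= 0) on the squared Lipschitz-constraint violation.
        lam = np.maximum(0.0, lam + eta_lam * ((f[i] - f[j]) ** 2 - (K * dij) ** 2))
    return f, lam
```

In this sketch the primal step applies a graph-Laplacian term whose edge weights are the multipliers, mirroring the statement that the Lagrange multiplier weights the Laplace operator; by complementary slackness the learned lam should concentrate on edges where the discrete Lipschitz constraint is active.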
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity (2024-06-28)
  We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function at the points it queries.
  We provide the first tight characterization of the minimax simple regret rate by developing matching upper and lower bounds.
- Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization (2023-09-29)
  We develop a robust training algorithm that increases the margin in the output (logit) space while regularizing the Lipschitz constant of the model along vulnerable directions.
  The relative accuracy of the bounds prevents excessive regularization and allows for more direct manipulation of the decision boundary.
  Experiments on the MNIST, CIFAR-10, and Tiny-ImageNet data sets verify that the proposed algorithm obtains competitive improvements over the state of the art.
- Beyond the Edge of Stability via Two-step Gradient Updates (2022-06-08)
  Gradient Descent (GD) is a powerful workhorse of modern machine learning.
  GD's ability to find local minimisers is only guaranteed for losses with Lipschitz gradients.
  This work focuses on simple, yet representative, learning problems via an analysis of two-step gradient updates.
- Optimal variance-reduced stochastic approximation in Banach spaces (2022-01-21)
  We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
  We establish non-asymptotic bounds for both the operator defect and the estimation error.
- Sparsest Univariate Learning Models Under Lipschitz Constraint (2021-12-27)
  We propose continuous-domain formulations for one-dimensional regression problems.
  We control the Lipschitz constant explicitly using a user-defined upper bound.
  We show that both problems admit global minimizers that are continuous and piecewise-linear.
- Concave Utility Reinforcement Learning with Zero-Constraint Violations (2021-09-12)
  We consider the problem of concave utility reinforcement learning (CURL) with convex constraints.
  We propose a model-based learning algorithm that also achieves zero constraint violations.
- Robust Implicit Networks via Non-Euclidean Contractions (2021-06-06)
  Implicit neural networks show improved accuracy and a significant reduction in memory consumption, but they can suffer from ill-posedness and convergence instability.
  This paper provides a new framework to design well-posed and robust implicit neural networks.
- Lipschitz Bounded Equilibrium Networks (2020-10-05)
  This paper introduces new parameterizations of equilibrium neural networks, i.e. networks defined by implicit equations.
  The new parameterization admits a Lipschitz bound during training via unconstrained optimization.
  In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
- On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization (2020-08-20)
  We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
  We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
- Dynamic Regret of Convex and Smooth Functions (2020-07-07)
  We investigate online convex optimization in non-stationary environments and adopt the dynamic regret as the performance measure.
  We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
- Achieving robustness in classification using optimal transport with hinge regularization (2020-06-11)
  We propose a new framework for binary classification, based on optimal transport.
  We learn 1-Lipschitz networks using a new loss that is a hinge-regularized version of the Kantorovich-Rubinstein dual formulation for Wasserstein distance estimation.