Optimization and Optimizers for Adversarial Robustness
- URL: http://arxiv.org/abs/2303.13401v1
- Date: Thu, 23 Mar 2023 16:22:59 GMT
- Title: Optimization and Optimizers for Adversarial Robustness
- Authors: Hengyue Liang, Buyun Liang, Le Peng, Ying Cui, Tim Mitchell and Ju Sun
- Abstract summary: In this paper, we introduce a novel framework that blends the general-purpose constrained-optimization solver PyGRANSO with Constraint Folding (PWCF).
Regarding reliability, PWCF provides solutions with stationarity measures and feasibility tests to assess the solution quality.
We further explore the distinct patterns in the solutions found for solving these problems using various combinations of losses, perturbation models, and optimization algorithms.
- Score: 10.279287131070157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Empirical robustness evaluation (RE) of deep learning models against
adversarial perturbations entails solving nontrivial constrained optimization
problems. Existing numerical algorithms that are commonly used to solve them in
practice predominantly rely on projected gradient, and mostly handle
perturbations modeled by the $\ell_1$, $\ell_2$ and $\ell_\infty$ distances. In
this paper, we introduce a novel algorithmic framework that blends a
general-purpose constrained-optimization solver PyGRANSO with Constraint
Folding (PWCF), which can add more reliability and generality to the
state-of-the-art RE packages, e.g., AutoAttack. Regarding reliability, PWCF
provides solutions with stationarity measures and feasibility tests to assess
the solution quality. For generality, PWCF can handle perturbation models that
are typically inaccessible to the existing projected gradient methods; the main
requirement is that the distance metric be almost everywhere differentiable.
Taking advantage of PWCF and other existing numerical algorithms, we further
explore the distinct patterns in the solutions found for solving these
optimization problems using various combinations of losses, perturbation
models, and optimization algorithms. We then discuss the implications of these
patterns on the current robustness evaluation and adversarial training.
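The constraint-folding idea can be sketched in a few lines: several constraints (here a perturbation budget and a pixel box constraint, an illustrative choice rather than the paper's exact constraint set) are folded into a single scalar function that equals zero exactly on the feasible set, which a general-purpose solver such as PyGRANSO can then treat as one constraint. A minimal sketch:

```python
import numpy as np

def folded_constraint(x, x0, eps):
    """Fold several constraints into one scalar constraint F(x) <= 0.

    Constraints folded here (illustrative choices, not the paper's exact set):
      - perturbation budget: ||x - x0||_2 - eps <= 0
      - box constraint:      0 <= x <= 1, i.e. max(-x, x - 1) <= 0
    Folding uses max(., 0) and a max over constraints, so F(x) = 0
    if and only if every original constraint holds.
    """
    budget = np.linalg.norm(x - x0) - eps
    box = np.maximum(np.maximum(-x, x - 1.0), 0.0).max()
    return max(max(budget, 0.0), box)

x0 = np.array([0.2, 0.5, 0.9])
feasible = x0 + 0.01    # small perturbation: within budget and box
infeasible = x0 + 1.0   # violates both constraints
print(folded_constraint(feasible, x0, eps=0.1))    # 0.0 -> feasible
print(folded_constraint(infeasible, x0, eps=0.1))  # > 0 -> infeasible
```

The folded function is non-smooth but almost everywhere differentiable, which is exactly the regularity PWCF requires of the distance metric.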
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
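The stated equivalence can be checked numerically on a toy quadratic (a problem of our choosing, not from the paper): unrolling gradient descent and accumulating the Jacobian of the iterate with respect to the problem data is itself a linear fixed-point iteration, and it converges to the implicit-function answer.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # minimize 0.5*x'Ax - b'x, so x* = inv(A) @ b
b = np.array([1.0, -1.0])
t = 0.2  # step size; needs t < 2 / lambda_max(A) for convergence

x = np.zeros(2)
J = np.zeros((2, 2))  # Jacobian dx_k/db, accumulated by unrolling
I = np.eye(2)
for _ in range(200):
    x = x - t * (A @ x - b)      # forward: one gradient step
    J = (I - t * A) @ J + t * I  # backward rule: a linear fixed-point iteration

# The unrolled Jacobian converges to the implicit-function answer inv(A):
# the fixed point of J = (I - tA)J + tI satisfies tA J = tI, i.e. J = inv(A).
print(np.allclose(J, np.linalg.inv(A)))  # True
```

This is the pattern the paper analyzes in general: backpropagation through an unrolled solver solves a linear system by iteration, so one can instead solve that system directly.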
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- An Expandable Machine Learning-Optimization Framework to Sequential Decision-Making [0.0]
We present an integrated prediction-optimization (PredOpt) framework to efficiently solve sequential decision-making problems.
We address the key issues of sequential dependence, infeasibility, and generalization in machine learning (ML) to make predictions for optimal solutions to problem instances.
arXiv Detail & Related papers (2023-11-12T21:54:53Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight Bipartite Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Robust expected improvement for Bayesian optimization [1.8130068086063336]
We propose a surrogate modeling and active learning technique called robust expected improvement (REI) that ports adversarial methodology into the BO/GP framework.
We illustrate and draw comparisons to several competitors on benchmark synthetic exercises and real problems of varying complexity.
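The robust-acquisition idea can be illustrated without a full Gaussian process: score each candidate by the worst acquisition value in a small ball around it, so broad, stable optima beat sharp peaks. The acquisition function, radius, and sampling scheme below are illustrative assumptions; REI in the paper works with expected improvement under a GP surrogate.

```python
import numpy as np

def robust_acquisition(acq, x, radius, n_samples=64, rng=None):
    """Adversarially robustified acquisition: the worst value of acq
    over sampled perturbations within the given radius around x."""
    rng = np.random.default_rng(rng)
    deltas = rng.uniform(-radius, radius, size=(n_samples, np.size(x)))
    return min(acq(x + d) for d in deltas)

# Toy acquisition: a tall, sharp peak at 0.2 and a lower, broad bump at 0.7.
acq = lambda x: 2.0 * np.exp(-((x - 0.2) / 0.01) ** 2) + np.exp(-((x - 0.7) / 0.2) ** 2)
xs = np.linspace(0.0, 1.0, 101)
robust = [robust_acquisition(acq, np.array([x]), radius=0.05, rng=0) for x in xs]
print(xs[int(np.argmax(robust))])  # near 0.7: the broad optimum wins
```

The sharp peak vanishes under small perturbations, so the robust score prefers the broad bump, which is the qualitative behavior REI is designed to produce.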
arXiv Detail & Related papers (2023-02-16T22:34:28Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Optimization for Robustness Evaluation beyond $\ell_p$ Metrics [11.028091609739738]
Empirical evaluation of deep learning models against adversarial attacks involves solving nontrivial constrained optimization problems.
We introduce a novel framework that blends the general-purpose constrained-optimization solver PyGRANSO with Constraint Folding (PWCF) to add reliability and generality to robustness evaluation.
arXiv Detail & Related papers (2022-10-02T20:48:05Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
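The proximal operator being learned generalizes classical closed-form cases; for the $\ell_1$ norm it is the familiar soft-thresholding map. A minimal worked example (a standard textbook fact, not code from the paper):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1, i.e. soft-thresholding:
    prox_f(v) = argmin_x  lam*||x||_1 + 0.5*||x - v||_2^2
    Each coordinate is shrunk toward zero by lam and clipped at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([1.5, -0.2, 0.7])
print(prox_l1(v, 0.5))  # shrinks to [1.0, 0.0, 0.2]
```

Learning a proximal operator amounts to replacing such closed-form maps with a trained network when the objective admits no analytic solution.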
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Consistent Approximations in Composite Optimization [0.0]
We develop a framework for consistent approximations of optimization problems.
The framework is developed for a broad class of optimization problems.
Examples illustrate the framework, including extended nonlinear programming.
arXiv Detail & Related papers (2022-01-13T23:57:08Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.