Robustness Certificates for Implicit Neural Networks: A Mixed Monotone
Contractive Approach
- URL: http://arxiv.org/abs/2112.05310v1
- Date: Fri, 10 Dec 2021 03:08:55 GMT
- Title: Robustness Certificates for Implicit Neural Networks: A Mixed Monotone
Contractive Approach
- Authors: Saber Jafarpour, Matthew Abate, Alexander Davydov, Francesco Bullo,
Samuel Coogan
- Abstract summary: Implicit neural networks offer competitive performance and reduced memory consumption.
They can remain brittle with respect to input adversarial perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
- Score: 60.67748036747221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural networks are a general class of learning models that replace
the layers in traditional feedforward models with implicit algebraic equations.
Compared to traditional learning models, implicit networks offer competitive
performance and reduced memory consumption. However, they can remain brittle
with respect to input adversarial perturbations.
This paper proposes a theoretical and computational framework for robustness
verification of implicit neural networks; our framework blends together mixed
monotone systems theory and contraction theory. First, given an implicit neural
network, we introduce a related embedded network and show that, given an
$\ell_\infty$-norm box constraint on the input, the embedded network provides
an $\ell_\infty$-norm box overapproximation for the output of the given
network. Second, using $\ell_{\infty}$-matrix measures, we propose sufficient
conditions for well-posedness of both the original and embedded system and
design an iterative algorithm to compute the $\ell_{\infty}$-norm box
robustness margins for reachability and classification problems. Third, of
independent value, we propose a novel relative classifier variable that leads
to tighter bounds on the certified adversarial robustness in classification
problems. Finally, we perform numerical simulations on a Non-Euclidean Monotone
Operator Network (NEMON) trained on the MNIST dataset. In these simulations, we
compare the accuracy and run time of our mixed monotone contractive approach
with the existing robustness verification approaches in the literature for
estimating the certified adversarial robustness.
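The embedded-network idea can be illustrated with a minimal sketch, assuming a ReLU implicit network $x = \mathrm{ReLU}(Ax + Bu + b)$ with $\|A\|_\infty < 1$; the paper's actual framework is more general and uses $\ell_\infty$-matrix-measure conditions for well-posedness, so the code below is only an illustration of the mixed monotone embedding:

```python
import numpy as np

def embedded_box_bounds(A, B, b, u_lo, u_hi, iters=300):
    """Minimal sketch of a mixed monotone embedded system for the
    implicit network x = relu(A x + B u + b).  Given an l_inf box
    [u_lo, u_hi] on the input, returns a box [x_lo, x_hi] that
    over-approximates the network's fixed point for every u in the box.
    Assumes ||A||_inf < 1 so the iterations below are contractions."""
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)  # positive/negative parts
    Bp, Bn = np.maximum(B, 0.0), np.minimum(B, 0.0)
    relu = lambda z: np.maximum(z, 0.0)
    x_lo = np.zeros(A.shape[0])
    x_hi = np.zeros(A.shape[0])
    for _ in range(iters):
        # Lower bound mixes lower states/inputs through positive weights
        # and upper states/inputs through negative weights; and vice versa.
        new_lo = relu(Ap @ x_lo + An @ x_hi + Bp @ u_lo + Bn @ u_hi + b)
        new_hi = relu(Ap @ x_hi + An @ x_lo + Bp @ u_hi + Bn @ u_lo + b)
        x_lo, x_hi = new_lo, new_hi
    return x_lo, x_hi
```

By induction, each embedded iterate brackets the corresponding Picard iterate of the original network from the same initial condition, so the limiting box brackets its fixed point.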
Related papers
- Chaos Theory and Adversarial Robustness [0.0]
This paper uses ideas from Chaos Theory to explain, analyze, and quantify the degree to which neural networks are susceptible to or robust against adversarial attacks.
We present a new metric, the "susceptibility ratio," given by $\hat{\Psi}(h, \theta)$, which captures how greatly a model's output will be changed by perturbations to a given input.
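Without access to that paper's exact definition, one hedged empirical reading of such a metric is the worst observed output change per unit of input perturbation; the function name and sampling scheme below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def susceptibility_ratio(h, x, eps=1e-3, trials=100, seed=0):
    """Hypothetical empirical proxy for a susceptibility metric
    Psi-hat(h, theta): the largest observed output change per unit of
    input perturbation over random directions of size eps.  The cited
    paper's exact definition may differ."""
    rng = np.random.default_rng(seed)
    base = h(x)
    worst = 0.0
    for _ in range(trials):
        d = rng.standard_normal(x.shape)
        d *= eps / np.linalg.norm(d)   # perturbation of fixed norm eps
        worst = max(worst, np.linalg.norm(h(x + d) - base) / eps)
    return worst
```

For a linear model $h(x) = Wx$ this proxy is bounded by the spectral norm of $W$, matching the intuition that larger amplification means greater susceptibility.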
arXiv Detail & Related papers (2022-10-20T03:39:44Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Neural Network Training Using $\ell_1$-Regularization and Bi-fidelity Data [0.0]
We study the effects of sparsity-promoting $\ell_1$-regularization on training neural networks when only a small training dataset from a high-fidelity model is available.
We consider two variants of $\ell_1$-regularization informed by the parameters of an identical network trained using data from lower-fidelity models of the problem at hand.
These bifidelity strategies are generalizations of transfer learning of neural networks that uses the parameters learned from a large low-fidelity dataset to efficiently train networks for a small high-fidelity dataset.
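One plausible instantiation of such a bi-fidelity penalty (a sketch under assumed notation, not necessarily either of that paper's two variants) regularizes the deviation of the parameters from the low-fidelity solution in the $\ell_1$ norm, so that only a few parameters move away from their low-fidelity values:

```python
import numpy as np

def bifidelity_l1_loss(theta, X, y, theta_lf, lam):
    """Least-squares fit to high-fidelity data (X, y) plus an l_1 penalty
    on the deviation from low-fidelity parameters theta_lf (hypothetical
    variant of the paper's l_1-informed schemes)."""
    resid = X @ theta - y
    return 0.5 * np.mean(resid ** 2) + lam * np.abs(theta - theta_lf).sum()

def prox_grad_step(theta, X, y, theta_lf, lam, lr):
    """Proximal gradient step: gradient on the data term, then
    soft-thresholding of (theta - theta_lf), i.e. shrinkage toward
    the low-fidelity parameters rather than toward zero."""
    grad = X.T @ (X @ theta - y) / len(y)
    z = theta - lr * grad - theta_lf
    return theta_lf + np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)
```

Setting `theta_lf = 0` recovers ordinary $\ell_1$-regularized (lasso-style) training, which is why these strategies can be viewed as generalizations of transfer learning.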
arXiv Detail & Related papers (2021-05-27T08:56:17Z)
- An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks [21.13588742648554]
Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks.
We explicitly construct a dense orthogonal weight matrix whose entries have the same magnitude, leading to a novel robust classifier.
Our method is efficient and competitive to many state-of-the-art defensive approaches.
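A dense orthogonal matrix whose entries all share the same magnitude can be built, for example, from a normalized Hadamard matrix; this is one plausible construction for the kind of classifier described, and the paper's exact recipe may differ:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: n must be a power of two.
    Returns an n x n matrix with entries +/-1 satisfying H @ H.T = n I."""
    assert n >= 1 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])  # double the size each step
    return H

def orthogonal_classifier_weights(n):
    """Dense orthogonal weight matrix whose entries all have
    magnitude 1/sqrt(n)."""
    return hadamard(n) / np.sqrt(n)
```

The resulting matrix is simultaneously dense (no zero entries) and orthogonal, which is what distinguishes it from the identity-like classification layers it replaces.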
arXiv Detail & Related papers (2021-05-19T13:12:14Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Monotone operator equilibrium networks [97.86610752856987]
We develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ).
We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem.
We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point.
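A simplified reading of that construction can be sketched as follows: parameterize $W$ so that $I - W$ is strongly monotone, then find the equilibrium of $z = \mathrm{ReLU}(Wz + Ux + b)$ by forward-backward splitting with ReLU as the proximal operator. The step size and iteration count below are assumptions chosen for this small example:

```python
import numpy as np

def monotone_W(Aparam, Bparam, m=0.5):
    """monDEQ-style parameterization W = (1-m) I - A^T A + B - B^T.
    The symmetric part of (I - W) is then m I + A^T A >= m I, which
    guarantees a unique equilibrium of z = relu(W z + U x + b)."""
    n = Aparam.shape[0]
    return (1 - m) * np.eye(n) - Aparam.T @ Aparam + Bparam - Bparam.T

def solve_equilibrium(W, Ux_b, alpha=0.05, iters=2000):
    """Forward-backward splitting: z <- relu(z - alpha*((I - W) z - Ux_b)).
    Its fixed points coincide with solutions of z = relu(W z + Ux_b);
    alpha must be small relative to the monotonicity/Lipschitz constants."""
    z = np.zeros(W.shape[0])
    for _ in range(iters):
        z = np.maximum(z - alpha * (z - W @ z - Ux_b), 0.0)
    return z
```

The key point is that well-posedness comes from the parameterization itself, not from constraining the norm of $W$ to be small.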
arXiv Detail & Related papers (2020-06-15T17:57:31Z)
- Adversarial Robustness Guarantees for Random Deep Neural Networks [15.68430580530443]
Adversarial examples are incorrectly classified inputs that are extremely close to a correctly classified input.
We prove that for any $p \ge 1$, the $\ell_p$ distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the $\ell_p$ norm of the input.
The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations.
arXiv Detail & Related papers (2020-04-13T13:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.