Robust Implicit Networks via Non-Euclidean Contractions
- URL: http://arxiv.org/abs/2106.03194v1
- Date: Sun, 6 Jun 2021 18:05:02 GMT
- Title: Robust Implicit Networks via Non-Euclidean Contractions
- Authors: Saber Jafarpour, Alexander Davydov, Anton V. Proskurnikov, Francesco
Bullo
- Abstract summary: Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
- Score: 63.91638306025768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural networks, a.k.a., deep equilibrium networks, are a class of
implicit-depth learning models where function evaluation is performed by
solving a fixed point equation. They generalize classic feedforward models and
are equivalent to infinite-depth weight-tied feedforward networks. While
implicit models show improved accuracy and significant reduction in memory
consumption, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit
neural networks based upon contraction theory for the non-Euclidean norm
$\ell_\infty$. Our framework includes (i) a novel condition for well-posedness
based on one-sided Lipschitz constants, (ii) an average iteration for computing
fixed-points, and (iii) explicit estimates on input-output Lipschitz constants.
Additionally, we design a training problem with the well-posedness condition
and the average iteration as constraints and, to achieve robust models, with
the input-output Lipschitz constant as a regularizer. Our $\ell_\infty$
well-posedness condition leads to a larger polytopic training search space than
existing conditions and our average iteration enjoys accelerated convergence.
Finally, we perform several numerical experiments for function estimation and
digit classification through the MNIST data set. Our numerical results
demonstrate improved accuracy and robustness of the implicit models with
smaller input-output Lipschitz bounds.
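As a concrete (and heavily simplified) illustration of the three ingredients, the following NumPy sketch assumes an implicit layer of the common form z = ReLU(A z + B x + b), uses the $\ell_\infty$ matrix measure $\mu_\infty(A) < 1$ as the shape of the well-posedness condition, runs an averaged fixed-point iteration, and reports a quantity of the general form $\|B\|_\infty / (1 - \mu_\infty(A))$ as the input-output Lipschitz estimate. The function names, the step size alpha, and the exact constants are illustrative assumptions, not the paper's implementation.
```python
import numpy as np

def mu_inf(A):
    """ell_inf matrix measure (log norm): max_i ( A[i,i] + sum_{j != i} |A[i,j]| )."""
    off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return float(np.max(np.diag(A) + off_diag))

def implicit_forward(A, B, b, x, alpha=0.5, tol=1e-9, max_iter=10_000):
    """Averaged fixed-point iteration for z = relu(A z + B x + b).

    If mu_inf(A) < 1 and the activation is 1-Lipschitz (ReLU is), the fixed
    point is unique and this iteration converges for a suitable alpha.
    """
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_next = (1 - alpha) * z + alpha * np.maximum(A @ z + B @ x + b, 0.0)
        if np.max(np.abs(z_next - z)) < tol:    # ell_inf stopping criterion
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.normal(size=(n, n))
A = 0.9 * A / np.abs(A).sum(axis=1, keepdims=True)   # rows scaled so mu_inf(A) <= 0.9
B, b, x = rng.normal(size=(n, m)), rng.normal(size=n), rng.normal(size=m)

assert mu_inf(A) < 1.0                     # well-posedness check (sketch form)
z_star = implicit_forward(A, B, b, x)

# Rough ell_inf input-output Lipschitz estimate of the same general form as the
# paper's bound; the exact expression there may differ.
lipschitz_estimate = np.linalg.norm(B, ord=np.inf) / (1.0 - mu_inf(A))
```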
Related papers
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
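As a rough sketch of the underlying bound (not the paper's convolution-specific, differentiable method based on circulant matrix theory), a dense-matrix Gram iteration could look as follows; the function name and rescaling scheme are illustrative.
```python
import numpy as np

def gram_spectral_bound(W, n_iter=6):
    """Upper bound on the spectral norm via repeated Gram products (sketch).

    With G_0 = W and G_k = G_{k-1}.T @ G_{k-1}, the largest singular value
    satisfies sigma(W)^(2^k) = ||G_k||_2 <= ||G_k||_F, hence
    sigma(W) <= ||G_k||_F^(1 / 2^k).  Rescaling after every product avoids
    overflow; the accumulated scale is tracked in log space.
    """
    G = np.asarray(W, dtype=np.float64)
    log_scale = 0.0
    for _ in range(n_iter):
        G = G.T @ G                       # Gram step: singular values get squared
        s = np.linalg.norm(G)             # Frobenius norm of the rescaled iterate
        log_scale = 2.0 * log_scale + np.log(s)
        G = G / s
    return float(np.exp(log_scale / 2.0 ** n_iter))

W = np.random.default_rng(1).normal(size=(64, 32))
print(gram_spectral_bound(W), np.linalg.norm(W, 2))   # bound vs. exact spectral norm
```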
arXiv Detail & Related papers (2023-05-25T15:32:21Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that it can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
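A minimal sketch of the box-bound idea, assuming an implicit layer of the form z = ReLU(A z + B x + b): splitting the weights into positive and negative parts gives an embedded system of twice the size that propagates lower and upper bounds jointly. The layer form and fixed iteration count are assumptions for illustration; the paper's embedded network and verification procedure are more general.
```python
import numpy as np

def linf_box_bounds(A, B, b, x_lo, x_hi, alpha=0.5, n_iter=500):
    """Propagate the input box [x_lo, x_hi] through z = relu(A z + B x + b)
    by iterating coupled lower/upper bounds (embedded system of size 2n)."""
    Ap, An = np.clip(A, 0, None), np.clip(A, None, 0)   # positive / negative parts
    Bp, Bn = np.clip(B, 0, None), np.clip(B, None, 0)
    lo = np.zeros(A.shape[0])
    hi = np.zeros(A.shape[0])
    for _ in range(n_iter):
        lo_new = np.maximum(Ap @ lo + An @ hi + Bp @ x_lo + Bn @ x_hi + b, 0.0)
        hi_new = np.maximum(Ap @ hi + An @ lo + Bp @ x_hi + Bn @ x_lo + b, 0.0)
        lo = (1 - alpha) * lo + alpha * lo_new           # averaged updates, as in
        hi = (1 - alpha) * hi + alpha * hi_new           # the forward iteration
    return lo, hi                                        # elementwise bounds on z*
```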
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Lipschitz Continuity Retained Binary Neural Network [52.17734681659175]
We introduce Lipschitz continuity as a rigorous criterion for defining the robustness of BNNs.
We then propose to retain Lipschitz continuity via a regularization term to improve model robustness.
Our experiments show that this BNN-specific regularization method effectively strengthens the robustness of BNNs.
arXiv Detail & Related papers (2022-07-13T22:55:04Z) - Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These recurrent updates impose large computation and memory overheads and are not directly trained to model the stable estimate they are meant to converge to.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z) - Robust and Provably Monotonic Networks [0.0]
We present a new method to constrain the Lipschitz constant of dense deep learning models.
We show how the algorithm was used to train a powerful, robust, and interpretable discriminator for heavy-flavor decays in the LHCb realtime data-processing system.
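As a generic sketch of constraining a dense layer's $\ell_\infty$ Lipschitz constant (not the paper's exact normalization or its monotonicity construction), one can rescale any weight row whose absolute sum exceeds the budget after each optimizer step:
```python
import numpy as np

def constrain_linf_lipschitz(W, bound=1.0):
    """Rescale rows of W so that the ell_inf -> ell_inf induced operator norm
    (the maximum absolute row sum) is at most `bound`.  Applied after every
    optimizer step, this caps the layer's ell_inf Lipschitz constant."""
    row_sums = np.abs(W).sum(axis=1, keepdims=True)
    scale = np.maximum(row_sums / bound, 1.0)      # only shrink offending rows
    return W / scale
```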
arXiv Detail & Related papers (2021-11-30T19:01:32Z) - Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
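The core idea can be sketched with a Hutchinson-style estimate of the squared Frobenius norm of the Jacobian of the update map at the equilibrium, computed from vector-Jacobian products; the function below is a minimal PyTorch sketch, not the paper's full training recipe.
```python
import torch

def jacobian_frobenius_penalty(f, z_star, x, n_samples=1):
    """Hutchinson-style estimate of ||d f(z, x) / d z||_F^2 at z = z_star.

    For eps with i.i.d. standard normal entries, E ||eps^T J||^2 = ||J||_F^2,
    and eps^T J is available as a vector-Jacobian product from autograd.
    """
    z = z_star.detach().requires_grad_(True)
    out = f(z, x)
    penalty = out.new_zeros(())
    for _ in range(n_samples):
        eps = torch.randn_like(out)
        (vjp,) = torch.autograd.grad(out, z, grad_outputs=eps,
                                     retain_graph=True, create_graph=True)
        penalty = penalty + (vjp ** 2).sum()
    return penalty / n_samples
```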
arXiv Detail & Related papers (2021-06-28T00:14:11Z) - Fixed Point Networks: Implicit Depth Models with Jacobian-Free Backprop [21.00060644438722]
A growing trend in deep learning replaces fixed depth models by approximations of the limit as network depth approaches infinity.
In particular, backpropagation through implicit depth models requires solving a Jacobian-based equation arising from the implicit function theorem.
We propose fixed point networks (FPNs), which guarantee convergence of forward propagation to a unique limit defined by the network weights and input data.
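The Jacobian-free idea can be sketched in a few lines: solve the fixed point with gradient tracking disabled, then apply the layer once more inside the autograd graph, so the backward pass differentiates a single layer application instead of solving a Jacobian-based linear system. The sketch below omits the FPN construction that guarantees forward convergence.
```python
import torch

def jacobian_free_forward(layer, x, z_init, n_iter=50):
    """Fixed-point forward pass with Jacobian-free backprop (sketch).

    `layer(z, x)` is the update map; its weights receive gradients only
    through the final, tracked application."""
    z = z_init
    with torch.no_grad():                # solve for the fixed point, no graph
        for _ in range(n_iter):
            z = layer(z, x)
    return layer(z, x)                   # one differentiable step for backprop
```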
arXiv Detail & Related papers (2021-03-23T19:20:33Z) - CLIP: Cheap Lipschitz Training of Neural Networks [0.0]
We investigate a variational regularization method named CLIP for controlling the Lipschitz constant of a neural network.
We mathematically analyze the proposed model, in particular discussing the impact of the chosen regularization parameter on the output of the network.
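The flavor of such a regularizer can be sketched as a difference-quotient penalty evaluated on a pair of points refined by a few gradient-ascent steps; the initialization, step count, and step size below are illustrative assumptions, not the exact CLIP algorithm.
```python
import torch

def lipschitz_quotient_penalty(model, x, n_steps=5, step_size=0.1):
    """Estimate a local Lipschitz quotient ||f(u) - f(v)|| / ||u - v|| on a pair
    (u, v) pushed toward the worst case by gradient ascent, and return it as a
    regularization term to be added to the training loss (sketch)."""
    u = x.clone().detach().requires_grad_(True)
    v = (x + 1e-2 * torch.randn_like(x)).detach().requires_grad_(True)
    for _ in range(n_steps):
        quotient = (model(u) - model(v)).norm() / ((u - v).norm() + 1e-12)
        grad_u, grad_v = torch.autograd.grad(quotient, (u, v))
        u = (u + step_size * grad_u).detach().requires_grad_(True)
        v = (v + step_size * grad_v).detach().requires_grad_(True)
    return (model(u) - model(v)).norm() / ((u - v).norm() + 1e-12)
```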
arXiv Detail & Related papers (2021-03-23T13:29:24Z) - Lipschitz Bounded Equilibrium Networks [3.2872586139884623]
This paper introduces new parameterizations of equilibrium neural networks, i.e. networks defined by implicit equations.
The new parameterization admits a Lipschitz bound during training via unconstrained optimization.
In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
arXiv Detail & Related papers (2020-10-05T01:00:40Z)