Approximation theory for 1-Lipschitz ResNets
- URL: http://arxiv.org/abs/2505.12003v1
- Date: Sat, 17 May 2025 13:36:55 GMT
- Title: Approximation theory for 1-Lipschitz ResNets
- Authors: Davide Murari, Takashi Furuya, Carola-Bibiane Schönlieb
- Abstract summary: We focus on 1-Lipschitz residual networks (ResNets) based on explicit steps of negative gradient flows. This paper provides the first universal approximation guarantees for 1-Lipschitz ResNets.
- Score: 10.316724084739889
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 1-Lipschitz neural networks are fundamental for generative modelling, inverse problems, and robust classifiers. In this paper, we focus on 1-Lipschitz residual networks (ResNets) based on explicit Euler steps of negative gradient flows and study their approximation capabilities. Leveraging the Restricted Stone-Weierstrass Theorem, we first show that these 1-Lipschitz ResNets are dense in the set of scalar 1-Lipschitz functions on any compact domain when width and depth are allowed to grow. We also show that these networks can exactly represent scalar piecewise affine 1-Lipschitz functions. We then prove a stronger statement: by inserting norm-constrained linear maps between the residual blocks, the same density holds when the hidden width is fixed. Because every layer obeys simple norm constraints, the resulting models can be trained with off-the-shelf optimisers. This paper provides the first universal approximation guarantees for 1-Lipschitz ResNets, laying a rigorous foundation for their practical use.
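A minimal NumPy sketch of one such residual block, assuming the common parameterisation $x \mapsto x - h\,W^\top \sigma(Wx+b)$ with ReLU activation and step size $h \le 2/\|W\|_2^2$, which is the standard condition making an explicit Euler step of this negative gradient flow non-expansive; the paper's exact parameterisation and norm constraints may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_flow_block(x, W, b):
    """One explicit Euler step of the negative gradient flow of
    V(x) = sum_i rho(w_i . x + b_i) with rho' = ReLU, i.e.
    grad V(x) = W.T @ relu(W @ x + b).  With h <= 2 / ||W||_2^2
    the map x -> x - h * grad V(x) is non-expansive (1-Lipschitz)."""
    h = 2.0 / np.linalg.norm(W, 2) ** 2           # largest admissible step size
    return x - h * W.T @ np.maximum(W @ x + b, 0.0)

# Quick empirical check that the block never expands distances.
d, m = 8, 32                                       # input dim, hidden width
W = rng.standard_normal((m, d))
b = rng.standard_normal(m)
ratios = []
for _ in range(1000):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    num = np.linalg.norm(gradient_flow_block(x, W, b) - gradient_flow_block(y, W, b))
    ratios.append(num / np.linalg.norm(x - y))
print("max empirical Lipschitz ratio:", max(ratios))   # stays <= 1 up to rounding
```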
Related papers
- Hidden Synergy: $L_1$ Weight Normalization and 1-Path-Norm Regularization [0.0]
We show how PSiLON Net's design drastically simplifies the 1-path-norm.
We propose a pruning method to achieve exact sparsity in the final stages of training.
arXiv Detail & Related papers (2024-04-29T21:25:25Z)
- A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalization of convex potential layers.
arXiv Detail & Related papers (2023-03-06T14:31:09Z)
- Improved techniques for deterministic l2 robustness [63.34032156196848]
Training convolutional neural networks (CNNs) with a strict 1-Lipschitz constraint under the $l_2$ norm is useful for adversarial robustness, interpretable gradients and stable training.
We introduce a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden-layer network.
We significantly advance the state-of-the-art for standard and provable robust accuracies on CIFAR-10 and CIFAR-100.
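As a note on how such certificates are typically obtained: for a network that is $L$-Lipschitz in the $\ell_2$ norm, the logit margin gives a certified radius of $(\text{top1} - \text{top2})/(\sqrt{2}\,L)$. The sketch below implements this generic certificate; it is not necessarily the exact procedure of the paper above.

```python
import numpy as np

def certified_l2_radius(logits, lip_const=1.0):
    """Radius r such that no l2 perturbation of norm < r can change the
    predicted class of an L-Lipschitz (w.r.t. l2) network: r = margin / (sqrt(2) * L)."""
    top2 = np.sort(logits)[-2:]            # two largest logits
    margin = top2[1] - top2[0]
    return margin / (np.sqrt(2.0) * lip_const)

print(certified_l2_radius(np.array([3.1, 0.2, -1.0, 2.4])))   # ~0.495
```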
arXiv Detail & Related papers (2022-11-15T19:10:12Z)
- Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation [79.13041340708395]
Lipschitz constants are connected to many properties of neural networks, such as robustness, fairness, and generalization.
Existing methods for computing Lipschitz constants either produce relatively loose upper bounds or are limited to small networks.
We develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of the Clarke Jacobian.
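For context, the crudest alternative is the global product-of-norms estimate, which bound propagation on the Clarke Jacobian improves on substantially. A hedged sketch of that baseline for the $\ell_\infty$ norm, assuming 1-Lipschitz activations:

```python
import numpy as np

def global_linf_lipschitz_bound(weights):
    """Product of l_inf operator norms (max absolute row sums) of the weight
    matrices; a valid but typically loose global Lipschitz bound for a
    feed-forward network with 1-Lipschitz activations (ReLU, tanh, ...)."""
    return float(np.prod([np.abs(W).sum(axis=1).max() for W in weights]))

rng = np.random.default_rng(1)
layers = [rng.standard_normal((16, 8)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((1, 16))]
print(global_linf_lipschitz_bound(layers))
```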
arXiv Detail & Related papers (2022-10-13T22:23:22Z)
- Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness [33.72713778392896]
We study certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions.
We develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained.
arXiv Detail & Related papers (2022-10-04T17:55:27Z)
- Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks [23.46030810336596]
We propose a new technique for constructing deep networks with a small Lipschitz constant.
It provides formal guarantees on the Lipschitz constant, it is easy to implement and efficient to run, and it can be combined with any training objective and optimization method.
Experiments and ablation studies in the context of image classification with certified robust accuracy confirm that AOL layers achieve results that are on par with most existing methods.
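The AOL rescaling multiplies a weight matrix by a diagonal computed from $|W^\top W|$ so that the result is guaranteed to have spectral norm at most 1. A minimal sketch of that rescaling with a numerical check; the small epsilon guarding zero columns is my own addition.

```python
import numpy as np

def aol_rescale(W, eps=1e-12):
    """Almost-Orthogonal-Layer rescaling: with d_j = (sum_i |W^T W|_{ij})^{-1/2},
    the matrix W @ diag(d) has spectral norm <= 1."""
    G = np.abs(W.T @ W)                        # |W^T W|
    d = 1.0 / np.sqrt(G.sum(axis=1) + eps)     # eps guards all-zero columns (my addition)
    return W * d                               # broadcasting, same as W @ diag(d)

W = np.random.default_rng(2).standard_normal((64, 32))
print(np.linalg.norm(aol_rescale(W), 2))       # <= 1
```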
arXiv Detail & Related papers (2022-08-05T13:34:33Z)
- Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses.
arXiv Detail & Related papers (2021-10-25T07:12:53Z)
- The Many Faces of 1-Lipschitz Neural Networks [1.911678487931003]
We show that 1-Lipschitz neural networks can fit arbitrarily difficult decision frontiers, making them as expressive as classical ones.
We also study the link between classification with 1-Lipschitz networks and optimal transport, via regularized versions of the Kantorovich-Rubinstein duality.
arXiv Detail & Related papers (2021-04-11T20:31:32Z)
- Efficient Proximal Mapping of the 1-path-norm of Shallow Networks [47.20962674178505]
We show two new important properties of the 1-path-norm of shallow neural networks.
First, despite its non-smoothness and non-convexity, it admits a closed-form proximal operator that can be efficiently computed.
Second, when the activation functions are differentiable, it provides an upper bound on the Lipschitz constant.
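For a one-hidden-layer network $x \mapsto v^\top\sigma(Wx+b)$ with a 1-Lipschitz activation, the 1-path-norm $\sum_{i,j}|v_i||W_{ij}|$ upper-bounds the Lipschitz constant with respect to the $\ell_\infty$ norm on inputs. A short numerical illustration of that bound; the paper's closed-form proximal operator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
W, v, b = rng.standard_normal((32, 8)), rng.standard_normal(32), rng.standard_normal(32)

def f(x):                                    # shallow ReLU network, scalar output
    return v @ np.maximum(W @ x + b, 0.0)

path_norm = np.sum(np.abs(v)[:, None] * np.abs(W))   # 1-path-norm = sum_{i,j} |v_i||W_ij|

worst = 0.0
for _ in range(2000):                        # empirical Lipschitz ratio w.r.t. l_inf
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    worst = max(worst, abs(f(x) - f(y)) / np.abs(x - y).max())

print(f"1-path-norm = {path_norm:.3f}, empirical Lipschitz >= {worst:.3f}")  # bound holds
```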
arXiv Detail & Related papers (2020-07-02T10:34:06Z)
- On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [77.18089185140767]
Lipschitz regularity is established as a key property of modern deep learning.
However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard.
We introduce a new upper bound for convolutional layers that is both tight and easy to compute.
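For the single-channel circular case, the spectral norm of a 2-D convolution is exactly the largest magnitude of the 2-D DFT of the zero-padded kernel, which is what makes Fourier-style bounds cheap to compute. A hedged sketch of that special case, not the paper's multi-channel Toeplitz-based bound:

```python
import numpy as np

def circular_conv_spectral_norm(kernel, input_size):
    """Exact spectral norm of a single-channel circular 2-D convolution:
    the singular values are the magnitudes of the 2-D DFT of the kernel
    zero-padded to the input size."""
    return float(np.abs(np.fft.fft2(kernel, s=(input_size, input_size))).max())

k = np.random.default_rng(4).standard_normal((3, 3))
print(circular_conv_spectral_norm(k, 32))
```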
arXiv Detail & Related papers (2020-06-15T13:23:34Z)