LipKernel: Lipschitz-Bounded Convolutional Neural Networks via Dissipative Layers
- URL: http://arxiv.org/abs/2410.22258v1
- Date: Tue, 29 Oct 2024 17:20:14 GMT
- Title: LipKernel: Lipschitz-Bounded Convolutional Neural Networks via Dissipative Layers
- Authors: Patricia Pauli, Ruigang Wang, Ian Manchester, Frank Allgöwer
- Abstract summary: We propose a layer-wise parameterization for convolutional neural networks (CNNs) that includes built-in robustness guarantees.
Our method LipKernel directly parameterizes dissipative convolution kernels using a 2-D Roesser-type state space model.
We show that the run-time using our method is orders of magnitude faster than state-of-the-art Lipschitz-bounded networks.
- Abstract: We propose a novel layer-wise parameterization for convolutional neural networks (CNNs) that includes built-in robustness guarantees by enforcing a prescribed Lipschitz bound. Each layer in our parameterization is designed to satisfy a linear matrix inequality (LMI), which in turn implies dissipativity with respect to a specific supply rate. Collectively, these layer-wise LMIs ensure Lipschitz boundedness for the input-output mapping of the neural network, yielding a more expressive parameterization than through spectral bounds or orthogonal layers. Our new method LipKernel directly parameterizes dissipative convolution kernels using a 2-D Roesser-type state space model. This means that the convolutional layers are given in standard form after training and can be evaluated without computational overhead. In numerical experiments, we show that the run-time using our method is orders of magnitude faster than state-of-the-art Lipschitz-bounded networks that parameterize convolutions in the Fourier domain, making our approach particularly attractive for improving robustness of learning-based real-time perception or control in robotics, autonomous vehicles, or automation systems. We focus on CNNs, and in contrast to previous works, our approach accommodates a wide variety of layers typically used in CNNs, including 1-D and 2-D convolutional layers, maximum and average pooling layers, as well as strided and dilated convolutions and zero padding. However, our approach naturally extends beyond CNNs as we can incorporate any layer that is incrementally dissipative.
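To make the state-space view concrete, the sketch below evaluates the impulse response of a small single-channel 2-D Roesser-type state space model over a k x k window; the entries of that impulse response are exactly the convolution kernel realized by the model. This is only an illustration of the state-space-to-kernel map with random placeholder matrices, not the LipKernel parameterization itself, since no dissipativity LMI is enforced here.

```python
# Minimal sketch (not the paper's parameterization): impulse response of a
# 2-D Roesser-type state space model, evaluated over a k x k window. The
# resulting array is the convolution kernel realized by the model. The
# single-channel setting and random matrices are illustrative assumptions.
import numpy as np

def roesser_impulse_response(A11, A12, A21, A22, B1, B2, C1, C2, D, k):
    n1, n2 = A11.shape[0], A22.shape[0]
    xh = np.zeros((k + 1, k, n1))   # horizontal state x^h[i, j], zero boundary
    xv = np.zeros((k, k + 1, n2))   # vertical state   x^v[i, j], zero boundary
    H = np.zeros((k, k))            # impulse response = convolution kernel window
    for i in range(k):
        for j in range(k):
            u = 1.0 if (i == 0 and j == 0) else 0.0      # unit impulse at (0, 0)
            H[i, j] = C1 @ xh[i, j] + C2 @ xv[i, j] + D * u
            xh[i + 1, j] = A11 @ xh[i, j] + A12 @ xv[i, j] + B1 * u
            xv[i, j + 1] = A21 @ xh[i, j] + A22 @ xv[i, j] + B2 * u
    return H

# Example: a random model with two states per direction and its 3x3 kernel window.
rng = np.random.default_rng(0)
A11, A12 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
A21, A22 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
B1, B2 = rng.normal(size=2), rng.normal(size=2)
C1, C2 = rng.normal(size=2), rng.normal(size=2)
D = rng.normal()
print(roesser_impulse_response(A11, A12, A21, A22, B1, B2, C1, C2, D, k=3))
```

In LipKernel, the state-space parameters are additionally constrained by the layer-wise LMI, and the resulting kernel is used as an ordinary convolution after training, which is why inference incurs no extra cost compared to a standard CNN.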
Related papers
- Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation [8.35644084613785]
We introduce the maximal update parameterization ($\mu$P) in the infinite-width limit for two representative designs of local targets.
By analyzing deep linear networks, we found that PC's gradients interpolate between first-order and Gauss-Newton-like gradients.
We demonstrate that, in specific standard settings, PC in the infinite-width limit behaves more similarly to the first-order gradient.
arXiv Detail & Related papers (2024-11-04T11:38:27Z)
- Automatic Optimisation of Normalised Neural Networks [1.0334138809056097]
We propose automatic optimisation methods that respect the geometry of the matrix manifold of the normalised parameters of neural networks.
Our approach first initialises the network and normalises the data with respect to the $\ell_2$-$\ell_2$ gain of the initialised network.
arXiv Detail & Related papers (2023-12-17T10:13:42Z)
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
arXiv Detail & Related papers (2023-05-25T15:32:21Z)
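As a rough, generic illustration of the Gram-iteration bound described in the entry above, the sketch below upper-bounds the spectral norm of a plain weight matrix by repeatedly forming rescaled Gram matrices. It is a simplified stand-in: the cited paper applies the bound to convolutional layers through their circulant structure in the Fourier domain, which this sketch does not do.

```python
# Generic Gram-iteration sketch: a differentiable upper bound on the spectral
# norm of a weight matrix W. Each Gram step squares the singular values, so
# the Frobenius norm of the iterate concentrates on the largest one and the
# bound tightens quickly while always remaining a true upper bound.
import numpy as np

def gram_iteration_bound(W, n_iter=6):
    A = np.array(W, dtype=float)
    log_scale = 0.0                    # accumulated rescaling, kept in log space
    for _ in range(n_iter):
        f = np.linalg.norm(A)          # Frobenius norm used for rescaling
        A = A / f
        log_scale = 2.0 * (log_scale + np.log(f))
        A = A.T @ A                    # Gram step
    log_bound = (log_scale + np.log(np.linalg.norm(A))) / 2.0 ** n_iter
    return np.exp(log_bound)

W = np.random.default_rng(0).normal(size=(64, 32))
print(gram_iteration_bound(W), np.linalg.norm(W, 2))   # bound vs. true spectral norm
```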
- Lipschitz-bounded 1D convolutional neural networks using the Cayley transform and the controllability Gramian [1.1060425537315086]
We establish a layer-wise parameterization for 1D convolutional neural networks (CNNs) with built-in end-to-end guarantees.
In doing so, we use the Lipschitz constant of the input-output mapping characterized by a CNN as a robustness measure.
We train Lipschitz-bounded 1D CNNs for the classification of heart arrhythmia data and show their improved robustness.
arXiv Detail & Related papers (2023-03-20T12:25:43Z)
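For readers unfamiliar with the Cayley transform named in the title above, here is a minimal sketch of the underlying building block: the Cayley transform maps the skew-symmetric part of an unconstrained matrix to an orthogonal, hence 1-Lipschitz, linear map. The cited paper's full layer-wise construction additionally uses the controllability Gramian and is not reproduced here.

```python
# Minimal sketch: the Cayley transform maps a skew-symmetric matrix to an
# orthogonal matrix, i.e. a 1-Lipschitz linear map. The free parameter W is
# an unconstrained placeholder; the cited paper's complete construction is
# not reproduced here.
import numpy as np

def cayley_orthogonal(W):
    A = W - W.T                              # skew-symmetric part of W
    I = np.eye(W.shape[0])
    return np.linalg.solve(I + A, I - A)     # Q = (I + A)^{-1} (I - A)

Q = cayley_orthogonal(np.random.default_rng(1).normal(size=(4, 4)))
print(np.allclose(Q.T @ Q, np.eye(4)))       # True: Q is orthogonal
print(np.linalg.norm(Q, 2))                  # spectral norm 1, hence 1-Lipschitz
```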
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers.
arXiv Detail & Related papers (2023-03-06T14:31:09Z)
- Lipschitz constant estimation for 1D convolutional neural networks [0.0]
We propose a dissipativity-based method for Lipschitz constant estimation of 1D convolutional neural networks (CNNs).
In particular, we analyze the dissipativity properties of convolutional, pooling, and fully connected layers.
arXiv Detail & Related papers (2022-11-28T12:09:06Z)
- Numerical Computation of Partial Differential Equations by Hidden-Layer Concatenated Extreme Learning Machine [0.0]
The extreme learning machine (ELM) method can yield highly accurate solutions to linear and nonlinear partial differential equations (PDEs).
However, the ELM method requires the last hidden layer of the neural network to be wide to achieve high accuracy.
We present a modified ELM method, termed HLConcELM (hidden-layer concatenated ELM), to overcome this drawback.
arXiv Detail & Related papers (2022-04-24T22:39:06Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- DO-Conv: Depthwise Over-parameterized Convolutional Layer [66.46704754669169]
We propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel.
We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs.
arXiv Detail & Related papers (2020-06-22T06:57:10Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our algorithm requires far fewer communication rounds while retaining the same theoretical guarantees.
Experiments on several datasets demonstrate the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)