Provable Lipschitz Certification for Generative Models
- URL: http://arxiv.org/abs/2107.02732v1
- Date: Tue, 6 Jul 2021 17:00:29 GMT
- Title: Provable Lipschitz Certification for Generative Models
- Authors: Matt Jordan, Alexandros G. Dimakis
- Abstract summary: We present a scalable technique for upper bounding the Lipschitz constant of generative models.
We approximate this set by layerwise convex approximations using zonotopes.
This provides efficient and tight bounds on small networks and can scale to generative models on VAE and DCGAN architectures.
- Score: 103.97252161103042
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a scalable technique for upper bounding the Lipschitz constant of
generative models. We relate this quantity to the maximal norm over the set of
attainable vector-Jacobian products of a given generative model. We approximate
this set by layerwise convex approximations using zonotopes. Our approach
generalizes and improves upon prior work using zonotope transformers and we
extend to Lipschitz estimation of neural networks with large output dimension.
This provides efficient and tight bounds on small networks and can scale to
generative models on VAE and DCGAN architectures.
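The paper's zonotope method tightens a much simpler baseline: for a feed-forward ReLU network, the product of the layers' operator norms is already a valid (but coarse) global Lipschitz upper bound, since ReLU is 1-Lipschitz and every attainable Jacobian is a product of weight matrices with diagonal 0/1 activation masks. A minimal sketch of that baseline (not the paper's method; function name and architecture are illustrative assumptions):

```python
import numpy as np

def naive_lipschitz_upper_bound(weights):
    """Coarse global Lipschitz bound for a ReLU network f(x) = W_L s(... s(W_1 x)):
    the product of the spectral norms of the weight matrices."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value of W
    return bound

rng = np.random.default_rng(0)
# Two-layer toy network: R^4 -> R^8 -> R^2
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((2, 8))
bound = naive_lipschitz_upper_bound([W1, W2])

# At any input x, the Jacobian is W2 @ D @ W1 with D a diagonal 0/1 ReLU mask,
# so its spectral norm can never exceed the product bound above.
x = rng.standard_normal(4)
D = np.diag((W1 @ x > 0).astype(float))
J = W2 @ D @ W1
print(np.linalg.norm(J, ord=2) <= bound)
```

The zonotope relaxation in the paper improves on this by tracking the set of attainable vector-Jacobian products layer by layer, rather than taking a worst-case norm independently at each layer.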
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training objectives are highly non-convex.
In this paper we examine convex reformulations of neural network training via Lasso models.
We show that the stationary points of the non-convex objective can be characterized as global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - A Hybrid of Generative and Discriminative Models Based on the
Gaussian-coupled Softmax Layer [5.33024001730262]
We propose a method to train a hybrid of discriminative and generative models in a single neural network.
We demonstrate that the proposed hybrid model can be applied to semi-supervised learning and confidence calibration.
arXiv Detail & Related papers (2023-05-10T05:48:22Z) - A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalization of convex potential layers.
arXiv Detail & Related papers (2023-03-06T14:31:09Z) - Efficiently Computing Local Lipschitz Constants of Neural Networks via
Bound Propagation [79.13041340708395]
Lipschitz constants are connected to many properties of neural networks, such as robustness, fairness, and generalization.
Existing methods for computing Lipschitz constants either produce relatively loose upper bounds or are limited to small networks.
We develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of the Clarke Jacobian.
arXiv Detail & Related papers (2022-10-13T22:23:22Z) - Lower and Upper Bounds on the VC-Dimension of Tensor Network Models [8.997952791113232]
Tensor network methods have been a key ingredient of advances in condensed matter physics.
They can be used to efficiently learn linear models in exponentially large feature spaces.
In this work, we derive upper and lower bounds on the VC dimension and pseudo-dimension of a large class of tensor network models.
arXiv Detail & Related papers (2021-06-22T14:39:25Z) - Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve
Optimism, Embrace Virtual Curvature [61.22680308681648]
We show that global convergence is statistically intractable even for one-layer neural net bandit with a deterministic reward.
For both nonlinear bandits and RL, the paper presents a model-based algorithm, Virtual Ascent with Online Model Learner (ViOL).
arXiv Detail & Related papers (2021-02-08T12:41:56Z) - Collegial Ensembles [11.64359837358763]
We show that collegial ensembles can be efficiently implemented in practical architectures using group convolutions and block diagonal layers.
We also show how our framework can be used to analytically derive optimal group convolution modules without having to train a single model.
arXiv Detail & Related papers (2020-06-13T16:40:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences of its use.