Robustness Analysis of Continuous-Depth Models with Lagrangian
Techniques
- URL: http://arxiv.org/abs/2308.12192v1
- Date: Wed, 23 Aug 2023 15:30:44 GMT
- Title: Robustness Analysis of Continuous-Depth Models with Lagrangian
Techniques
- Authors: Sophie A. Neubauer (née Gruenbacher), Radu Grosu
- Abstract summary: This paper presents, in a unified fashion, deterministic as well as statistical Lagrangian-verification techniques.
They formally quantify the behavioral robustness of any time-continuous process, formulated as a continuous-depth model.
Experiments demonstrate the superior performance of Lagrangian techniques, when compared to LRT, Flow*, and CAPD, and illustrate their use in the robustness analysis of various continuous-depth models.
- Score: 9.141050828506804
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents, in a unified fashion, deterministic as well as
statistical Lagrangian-verification techniques. They formally quantify the
behavioral robustness of any time-continuous process formulated as a
continuous-depth model. To this end, we review LRT-NG, SLR, and GoTube,
algorithms for constructing a tight reachtube, that is, an over-approximation
of the set of states reachable within a given time horizon, and we provide
guarantees for the reachtube bounds. We compare the use of the variational
equations associated with the system equations, the mean value theorem, and the
Lipschitz constants in achieving deterministic and statistical guarantees. In
LRT-NG, the Lipschitz constant is used as a bloating factor of the initial
perturbation to compute the radius of an ellipsoid in an optimal metric, which
over-approximates the set of reachable states. In SLR and GoTube, we obtain
statistical guarantees by using the Lipschitz constants to compute local balls
around samples. These are needed to calculate the probability of having found
an upper bound on the true maximum perturbation at every timestep. Our
experiments demonstrate the superior performance of Lagrangian techniques
compared to LRT, Flow*, and CAPD, and illustrate their use in the robustness
analysis of various continuous-depth models.
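The core reachtube idea, integrating a center trajectory and bloating the initial perturbation with a Lipschitz bound, can be sketched in a few lines. This is a simplified illustration, not the authors' LRT-NG implementation: it uses spherical sets instead of ellipsoids in an optimal metric, a hypothetical 2-D system, and a crude local Lipschitz estimate taken only at the ball's center.

```python
import numpy as np

def f(x):
    # Hypothetical 2-D system: a lightly damped pendulum (not from the paper).
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])

def jacobian_norm_bound(x):
    # Spectral norm of the Jacobian of f at x, used here as a crude local
    # Lipschitz bound; a sound tool would bound it over the whole ball.
    J = np.array([[0.0, 1.0], [-np.cos(x[0]), -0.1]])
    return np.linalg.norm(J, 2)

def rk4_step(x, dt):
    # Classical fourth-order Runge-Kutta step for the center trajectory.
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def reachtube(x0, delta0, t_end, dt=0.01):
    """Over-approximate the states reachable from the ball B(x0, delta0) by a
    sequence of balls: the center follows the flow, and the radius is bloated
    by exp(L * dt), with L a Lipschitz bound (via the mean value theorem)."""
    x, delta, tube = x0, delta0, []
    for _ in range(round(t_end / dt)):
        L = jacobian_norm_bound(x)   # Lipschitz bound near the center
        x = rk4_step(x, dt)          # advance the center trajectory
        delta *= np.exp(L * dt)      # bloat the radius
        tube.append((x.copy(), delta))
    return tube

tube = reachtube(np.array([1.0, 0.0]), 0.05, 1.0)
```

Exponential bloating is what makes such tubes grow conservatively over long horizons; the optimal-metric ellipsoids of LRT-NG exist precisely to fight that wrapping effect.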
Related papers
- Bounds in Wasserstein Distance for Locally Stationary Functional Time Series [2.180952057802427]
This work investigates the Nadaraya-Watson (NW) estimation procedure for the conditional distribution of locally stationary functional time series (LSFTS).
Under small ball probability and mixing conditions, we establish convergence rates of the NW estimator for LSFTS with respect to the Wasserstein distance.
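In its basic scalar form (the paper's functional, locally stationary setting is considerably more involved), the Nadaraya-Watson estimator of a conditional distribution function is a kernel-weighted empirical CDF. The data-generating process below is an illustrative assumption:

```python
import numpy as np

def nw_conditional_cdf(x0, y, xs, ys, h=0.3):
    """Basic Nadaraya-Watson estimate of F(y | x = x0): indicator variables
    1{ys <= y} weighted by a Gaussian kernel in the covariate."""
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)   # kernel weights
    return np.sum(w * (ys <= y)) / np.sum(w)

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 5000)
ys = xs + 0.1 * rng.standard_normal(5000)     # y | x ~ N(x, 0.01)

# F(0 | x = 0) should be close to 1/2, since 0 is the conditional median.
p = nw_conditional_cdf(0.0, 0.0, xs, ys, h=0.1)
```

The bandwidth h trades bias against variance, which is exactly where convergence-rate analyses such as the one above do their work.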
arXiv Detail & Related papers (2025-04-08T21:49:58Z)
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
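The gradient-free idea behind this line of work can be sketched with the standard two-point finite-difference gradient estimator plugged into plain SGD. The quadratic objective and step sizes below are illustrative assumptions, not the paper's accelerated algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical smooth convex objective, minimized at x = 1.
    return 0.5 * np.sum((x - 1.0) ** 2)

def two_point_grad(fn, x, tau=1e-4):
    """Two-point zero-order gradient estimator: sample a random direction e
    on the unit sphere and estimate the directional derivative by finite
    differences; d * (directional derivative) * e is unbiased for the
    gradient in expectation."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    d = x.size
    return d * (fn(x + tau * e) - fn(x - tau * e)) / (2 * tau) * e

x = np.zeros(3)
for _ in range(2000):
    x -= 0.05 * two_point_grad(objective, x)   # SGD with estimated gradients
```

Only function values are queried, which is why such methods suit the medicine and physics settings the summary mentions, where gradients are unavailable.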
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score based MCMC method that is deterministic, resulting in a deterministic evolution for particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z)
- Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood Estimation [2.53740603524637]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure.
In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure.
Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
arXiv Detail & Related papers (2023-03-23T16:50:08Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
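The probabilistic representation mentioned above can be illustrated with the classic Feynman-Kac identity for the heat equation u_t = u_xx: the solution at (x, t) is the expectation of the initial condition over Brownian endpoints. The sketch below is a generic Monte Carlo evaluation of that identity with a hypothetical Gaussian initial condition, not the paper's neural solver:

```python
import numpy as np

rng = np.random.default_rng(0)

def u0(x):
    # Hypothetical initial condition u(x, 0).
    return np.exp(-0.5 * x ** 2)

def heat_mc(x, t, n_paths=200_000):
    """Monte Carlo estimate of u(x, t) for u_t = u_xx via Feynman-Kac:
    u(x, t) = E[u0(x + sqrt(2 t) Z)], Z ~ N(0, 1)."""
    z = rng.standard_normal(n_paths)
    return u0(x + np.sqrt(2.0 * t) * z).mean()

def heat_exact(x, t):
    # Closed form for this Gaussian initial condition, for comparison:
    # a Gaussian of variance 1 convolved with the heat kernel of variance 2t.
    return np.exp(-0.5 * x ** 2 / (1.0 + 2.0 * t)) / np.sqrt(1.0 + 2.0 * t)

est = heat_mc(0.5, 0.25)
```

Treating macroscopic solution values as ensemble averages of random particles, as here, is the representation the neural solver is trained against.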
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Lipschitz constant estimation for 1D convolutional neural networks [0.0]
We propose a dissipativity-based method for Lipschitz constant estimation of 1D convolutional neural networks (CNNs).
In particular, we analyze the dissipativity properties of convolutional, pooling, and fully connected layers.
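For context, the simplest Lipschitz upper bound for a CNN is the product of per-layer operator norms (1-Lipschitz activations such as ReLU contribute a factor of 1). For a single-channel circular 1D convolution the operator norm is exactly the maximum magnitude of the kernel's DFT, so this bound is cheap to compute; it is generally looser than dissipativity-based estimates like the paper's. The kernels below are illustrative:

```python
import numpy as np

def circular_conv_norm(kernel, n):
    """Operator (spectral) norm of a single-channel circular 1D convolution
    on length-n signals: the circulant matrix is diagonalized by the DFT, so
    the norm is max_k |DFT_n(kernel)[k]|."""
    padded = np.zeros(n)
    padded[: len(kernel)] = kernel
    return np.max(np.abs(np.fft.fft(padded)))

def lipschitz_upper_bound(kernels, n=64):
    # ReLU and pooling-free compositions: the network's Lipschitz constant
    # is at most the product of the layer operator norms.
    bound = 1.0
    for k in kernels:
        bound *= circular_conv_norm(np.asarray(k, float), n)
    return bound

# Two illustrative conv layers: an averaging kernel (norm 1, attained at
# frequency 0) and a difference kernel (norm 2, attained at frequency pi).
L = lipschitz_upper_bound([[0.5, 0.25, 0.25], [1.0, -1.0]])
```

Tighter layer-coupled analyses matter because this product bound compounds multiplicatively with depth.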
arXiv Detail & Related papers (2022-11-28T12:09:06Z)
- Learning from time-dependent streaming data with online stochastic algorithms [7.283533791778357]
This paper addresses optimization in a streaming setting with time-dependent and biased estimates.
We analyze several first-order methods, including Stochastic Gradient Descent (SGD), mini-batch SGD, and time-varying mini-batch SGD, along with their Polyak-Ruppert averages.
arXiv Detail & Related papers (2022-05-25T07:53:51Z)
- A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling [0.6562256987706128]
We develop a deterministic particle method for the weighted porous medium equation (WPME) and prove its convergence on bounded time intervals.
Our method has natural applications to multi-agent coverage algorithms and sampling probability measures.
arXiv Detail & Related papers (2022-02-25T19:49:05Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
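The sampler whose 2-Wasserstein error such analyses quantify is typically the Unadjusted Langevin Algorithm (ULA). A minimal sketch, assuming a standard Gaussian target for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_pi(x):
    # Target pi = N(0, 1), so grad log pi(x) = -x (illustrative choice).
    return -x

def ula(n_steps=5000, n_chains=2000, h=0.01):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k + h * grad log pi(x_k) + sqrt(2 h) * xi_k, xi_k ~ N(0, 1).
    The discretisation bias of this Euler-Maruyama scheme is what
    mean-square / Wasserstein analyses bound as a function of h and the
    dimension."""
    x = np.zeros(n_chains)
    for _ in range(n_steps):
        x += h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal(n_chains)
    return x

samples = ula()
```

For this Gaussian target the stationary variance of ULA is 1 / (1 - h / 2), slightly above the target's variance of 1; that gap is exactly the step-size bias the paper's framework controls.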
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.